Bill Gourgey | Popular Science
https://www.popsci.com/authors/bill-gourgey/
Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That’s Popular Science, 145 years strong.

How to trap cosmic rays in a jar like it’s 1951
https://www.popsci.com/science/cosmic-rays-in-a-jar/ | Thu, 23 Nov 2023
Wait! Before you recycle that peanut butter container, consider making a cloud chamber.

ENERGY NEVER STOPS radiating through space, or on Earth. For more than a decade, hundreds of millions of samples from the never-ending deluge of protons, nuclei, and other atomic debris have collected in the International Space Station’s cosmic ray bucket—an instrument called the Alpha Magnetic Spectrometer. Here at home, cloud chambers—like those used by CERN, the Switzerland-based European Organization for Nuclear Research—illuminate the universe’s invisible cosmic storm.

In March 1951, longtime Popular Science contributor Kenneth M. Swezey treated space enthusiasts and DIYers to a step-by-step guide to making a cloud chamber, using a peanut butter jar. “The secret of any cloud chamber is a supersaturated vapor,” Swezey wrote. “As atomic particles dart through this vapor, they condense molecules in their path, leaving visible droplets—like vapor trails of high-flying aircraft.”

The first cloud chamber was devised by physicist Charles Thomson Rees Wilson in 1895 to reproduce clouds’ airborne puffs in the lab and study their behavior. By 1910, he’d begun spying the trails of charged particles, which ionized the supersaturated air and caused water droplets to form. At about the same time, physicist Victor Hess determined that charged particles, later dubbed cosmic rays, were entering Earth’s atmosphere from space, a discovery that earned him a Nobel Prize in 1936.

Despite their ubiquity, the origins of those celestial sparks remain a mystery, although supernovas and ordinary stars like our sun are suspected to be prime sources. Beams of energy collide with atoms in Earth’s upper atmosphere, spawning charged subatomic particles like pions, muons, electrons, and positrons, whose ionized trails show up as spindly lines in cloud chambers. Radioactive sources here on Earth also produce charged particles that leave trails in cloud chambers.

When Swezey offered up his home chamber in the 1950s, its use seemed somewhat practical. Fears of nuclear war, spurred by the worsening Cold War, dominated headlines. A homemade cloud chamber can detect atomic particles from nearby explosions, not to mention alpha particles, a product of radioactive decay from sources like radon gas, and gamma rays from radium, which was still being painted onto watch dials until the 1970s.

Popular Science’s March 1951 magazine cover depicted a house being ravaged by the blast wave of a nuclear bomb. Popular Science

To view the cosmic ray storm, start with a glass or plastic jar—the bigger the better. A dark background, such as black felt glued inside the base and lid, will enhance the experience. Saturate the material at the base with rubbing alcohol, close the lid, and place the jar upside down on a bed of dry ice. As the apparatus cools, vapor forms. Turn off the lights, then shine a flashlight through the jar. Thin lines should appear, some perfectly straight (high-energy muons, energetic enough to plow straight through the jar), others zigzagging (electrons and positrons, so small they pinball off surrounding particles), and still others like eraser smudges (radon-spawned alpha particles, heavy and highly charged so they gather an ionic entourage).
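
Curious how busy the show will be? Here is a back-of-the-envelope estimate in Python. It is only a sketch: the flux figure is the commonly cited sea-level average of about one muon per square centimeter per minute, and the jar size is an assumption.

```python
import math

MUON_FLUX = 1.0         # muons per cm^2 per minute at sea level (rough average)
jar_diameter_cm = 10.0  # a typical peanut butter jar (assumed)

# Muons arriving through the jar's viewing area each minute.
area_cm2 = math.pi * (jar_diameter_cm / 2) ** 2
muons_per_minute = MUON_FLUX * area_cm2

print(f"Viewing area: {area_cm2:.0f} cm^2")
print(f"Expected muon trails: ~{muons_per_minute:.0f} per minute "
      f"(~{muons_per_minute / 60:.1f} per second)")
```

In other words, a patient observer should catch a muon streak roughly every second, with slower electron and alpha trails mixed in.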

Our 1951 cloud chamber recipe will still work today, although CERN offers an updated instructional video that uses the same essential ingredients. Can’t find dry ice? Ready-made cloud chambers will work at regular freezer temperatures. All you need is nearly pure ethanol and hot water to generate the cloud (and a few hundred extra dollars to cover the equipment costs).

This story originally appeared in the High Issue of Popular Science. Read more PopSci+ stories.

How to use 3D glasses from 1954 today
https://www.popsci.com/diy/vintage-3d-glasses/ | Tue, 07 Nov 2023
This old-school idea on how to repurpose 3D viewers shows us how much—and how little—the technology has changed.

WHEN THE HOLOGRAM of Princess Leia, projected by the little droid R2-D2, appeared in the first Star Wars movie in 1977, the hopes and dreams of 3D-viewing enthusiasts likely soared. Even though the hologram was fictional, it was a glimpse of 3D’s supposed glasses-free future—never mind that the hologram itself was viewable only in 2D (later plans to reproduce the Star Wars films in 3D fell flat after just one re-release). At the time, the typical way to view images in 3D was with cardboard-stock anaglyph glasses—the type with different-colored lenses: one red, the other green, cyan, or blue.

The problem in 1977 was that 3D glasses were utterly useless other than in theaters and with specially produced movies. (This is still true today despite past attempts to sell 3D TVs and an increase in 3D video games.) Of course, in the 1970s, multicolored shades might have jibed with bell-bottoms and beads, but the glasses’ effect would have triggered a headache in bright sunshine. That’s because the lenses would interfere with what our eyes and brain already know how to do—see the world three-dimensionally, or stereoscopically.

Still, as longtime Popular Science contributor Walter E. Burton explained in a July 1954 do-it-yourself story that described how to reuse these single-use items, discardable 3D viewers can offer “lots of entertainment value” even after the movie ends. Burton’s instructions were timely. Interest in 3D films was surging in the 1950s—so much so that the decade has since been referred to as the golden age of 3D cinema. After the 1950s, enthusiasm waned, experiencing brief resurgences in the early 1980s and 2010s, the latter inspired by James Cameron’s 3D release of Avatar. But between 1952 and 1954 (when the Popular Science tutorial was published), Hollywood released more than fifty 3D films, including Westerns like Devil’s Canyon (1953), monster movies like The Creature from the Black Lagoon (1954), and the popular horror film House of Wax (1953) starring Vincent Price. By 1954, spare 3D viewers would have been easy for DIYers to come by.

3D explained

Today we associate 3D viewers with movies, but they actually got their start nearly two centuries ago, in 1838, when British scientist Charles Wheatstone debuted his stereoscope—a cumbersome tabletop contraption that rendered 2D drawings (photography was still in its infancy) in 3D. The first portable 3D viewer was invented several years later by another British scientist, David Brewster; his device resembled a clunky version of the View-Master that debuted at the New York World’s Fair in 1939 and soon became a popular children’s toy that is still available today. It would be another century after Brewster, however, before 3D viewing made it into motion pictures in earnest. Before 1952, only one major 3D movie had been produced—a black-and-white silent movie, The Power of Love, in 1922. 

To create a 3D illusion on a 2D screen, the goal is to mimic what occurs naturally in the brain. Researchers are still working out the biological mechanism that enables us to perceive depth, but it’s based on the different views from our eyes, or binocular disparity. When the brain assembles the separate 2D images, it interprets them as one image with depth.
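
For readers who want the geometry, the same idea drives stereo cameras, and it can be written as a one-line formula: depth equals focal length times baseline divided by disparity. The numbers in this sketch are illustrative assumptions, not measurements of human vision.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity: Z = f * B / d (smaller shift = farther away)."""
    return focal_px * baseline_m / disparity_px

f_px = 800.0  # focal length in pixels (assumed camera)
b_m = 0.065   # baseline of ~6.5 cm, roughly the spacing of human eyes

for d_px in (40.0, 10.0, 2.0):
    print(f"disparity {d_px:4.0f} px -> depth {depth_from_disparity(f_px, b_m, d_px):5.2f} m")
```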

The cover of the July 1954 issue featured a ride-along “kite” and an introduction to color TVs. Popular Science

The effect is reproduced in a theater by slightly offset simultaneous projections. For movies that rely on polarized eyewear, the projected images use polarized light. Light is an electromagnetic wave whose oscillations can be resolved into two perpendicular planes—call them vertical and horizontal. A polarizing filter, or lens, blocks one of those planes. In polarized glasses, one lens blocks the vertically polarized image, the other the horizontal. As Burton explains in his 1954 instructions, “The two polarizers are set at right angles to each other. Cut the viewer apart, place one eyepiece in front of the other, and you’ll find that little or no light gets through.”
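
Burton’s crossed-lens experiment is a textbook demonstration of Malus’s law: an ideal polarizer passes a cosine-squared fraction of already-polarized light, and the first filter halves unpolarized light. A minimal sketch, assuming ideal lossless lenses:

```python
import math

def transmitted_fraction(angle_deg: float) -> float:
    """Fraction of unpolarized light passing two polarizers set at the given angle."""
    return 0.5 * math.cos(math.radians(angle_deg)) ** 2

for angle in (0, 45, 90):
    print(f"lenses at {angle:2d} degrees -> {transmitted_fraction(angle):.0%} of the light gets through")
```

At 90 degrees (Burton’s “right angles”), the math gives exactly zero, which is why the stacked eyepieces go dark.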

Light also travels in a spectrum of colors—remember ROYGBIV? For anaglyph 3D, the dual images are projected using colored filters so each image is viewable only through its matching lens (red filters will project red images viewable by the red lens, likewise for the other filter lens, which can be cyan, green, or blue), creating the same illusion of depth that our brains achieve on their own. The most common anaglyph lenses tend to pair red with cyan and magenta with green.
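
The channel-splitting trick is simple enough to try on a stereo photo pair of your own. Here is a minimal red-cyan sketch using the Pillow imaging library; the filenames are placeholders for whatever same-size left-eye and right-eye images you supply.

```python
from PIL import Image  # pip install pillow

# Compose a red-cyan anaglyph: red channel from the left-eye view,
# green and blue (together, cyan) from the right-eye view.
left = Image.open("left.jpg").convert("RGB")    # placeholder filename
right = Image.open("right.jpg").convert("RGB")  # placeholder filename

r, _, _ = left.split()
_, g, b = right.split()
Image.merge("RGB", (r, g, b)).save("anaglyph.jpg")
```

Viewed through red-cyan glasses, each eye sees only its intended image, and the brain fuses the pair into depth.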

How to reuse 3D viewing glasses

Since the same 3D viewing glasses that were popular in the 1950s are still used in theaters today (the frames might be plastic instead of cardboard), DIYers can follow Burton’s instructions nearly seven decades later, although you might need substitutes for some household products. For instance, a pair of glasses can be turned into a kaleidoscope with the aid of two 1950s-style bouillon cube containers (tea cans might work in modern times). Cut a hole in each container cap and paste one polarized lens over each hole. Then cut a hole in the bottom of one can (you won’t need the second can for anything, only its cap) and cover that hole with clear cellophane (or plastic wrap) so light can shine through from the bottom. Drop bits of hard clear plastic inside. When the caps are stacked on top of each other as eyepieces and rotated, the bits of clear plastic will appear to change colors. 

What’s new on the 3D scene

Even though the 3D viewing experience has required the same polarized or anaglyph lenses for more than 70 years, 3D technology has advanced, especially in the last decade. One reason Avatar sparked a surge in 3D movies in the 2010s was that the film crew used new tools to ratchet up the illusion of depth, including motion-capture attire worn by the actors to offer multiple views of the same action, video game–quality computer-generated graphics designed for 3D depth, and stereoscopic cameras that captured scenes with dual images, one for each eye. Of course, on the viewing end, movie-goers still required the decades-old polarized lenses to see the effects, but the results were stunning. 

Virtual reality headsets like those from Meta, HTC, and Microsoft also offer 3D viewing. While the VR experience may be immersive and realistic, the cyborg-style headsets are not exactly practical apparel. You’re better off wearing polarized or anaglyph specs in public. Of course, 3D nirvana means no viewing accessory required. Instead, the tech would be in the displays, designed and built to render images stereoscopically. Computer makers Acer and Asus have developed such displays, but so far the effect hasn’t been compelling enough to catch on.

For now, hang on to those 3D movie specs. Burton’s instructions will still work for DIYers interested in optical projects. Perhaps you have your own contemporary ideas.

Read more PopSci+ stories.

This gadget from 1930 let people ‘talk’ to the dead—with a magic trick
https://www.popsci.com/diy/spiritphone-magic-trick-explained/ | Tue, 15 Aug 2023
How a Popular Science tutorial for building a ‘spiritphone’ tuned into the hype of the Golden Age of Magic.

MAGIC FIRST TOOK SHAPE from the occult—from unseen forces once widely believed to flow from the spirit world and alter the course of mortal events. Throughout history, magicians were seen as aloof figures mysteriously granted secret knowledge to channel numinous power. In some cultures and times, magicians held sway as oracles and shamans; in others, they were shunned as sorcerers and witches—or worse. It wasn’t until the late 19th century that magic made a break from its mostly mystical roots. Interest in magic grew exponentially into the 20th century, when it became a popular performing art, sparking decades of fantastic feats of illusion, conjuring, and escapology known as the Golden Age of Magic.

Given magic’s history, it is particularly apt that in 1930, in the midst of magic’s heyday, Popular Science offered readers do-it-yourself instructions for building a “spiritphone”—a gadget capable of making prophecies by dint of its apparent radio connection with “the land of the departed.”   

“The spiritphone,” wrote George S. Greene, “is easy to construct and still easier to operate, and is one of the most effective tricks for the amateur magician.” The trick’s premise is to guess the name of a famous person secretly picked by a member of the audience. 

Slips of blank paper are handed out, and each audience member jots down the name of a “departed hero or famous [person]” of their own choosing. The folded slips are then collected in a hat. A member of the audience is chosen at random to select a folded slip, without peering at the name. The magician hands that volunteer the spiritphone, but not before barely turning a fake screw at its base, which brings the name of a famous person into view on the spiritphone’s dial. The volunteer is then instructed to ask the spiritphone, via a receiver, what name is on the slip of paper. The spiritphone “responds,” and the volunteer announces to the audience what they “hear”—which really means what they see on the spiritphone’s display. To everyone’s delight, the spiritphone’s answer matches what’s written on the folded slip of paper. That’s because when the slips of paper are collected from the audience, with sleight of hand, the magician tucks them into the hat’s interior sweatband and replaces them with slips that all bear the same name, preselected by the magician. The spiritphone has the same name imprinted on the rotating display in its interior mechanism, which Greene’s instructions explain how to build.

The cover of the February 1930 issue featured home projects and asked whether we should abolish speed laws. Popular Science

Greene was a longtime Popular Science contributor who covered the magic beat, regularly explaining how tricks worked. One such article, written in January 1929, “Famous Magic Tricks Explained,” garnered protest from readers who didn’t want the magazine to reveal what was behind the curtain and spoil the charm of mainstream magic’s spell.

For instance, Greene explained how escapologists, like the legendary Harry Houdini, could vanish from an enclosed tank filled with water. Such tanks, it turns out, had a concealed trap door connected to a man-sized tube that deposited the performer backstage. “To perform the feat,” Greene explained, “one must, of course, have the ability to stay under water for the minute or two required.” Houdini could definitely hold his breath, but did he possess supernatural abilities? According to Greene, the trick is in the prop. Magicians are “specialists in woodcraft and metalworking, electricity, and psychology, and the ideas worked out are, in many cases, equal in cleverness to the products of our modern inventors.” 

In Greene’s time, carnivals were a popular venue for magic, and fortune telling was a cornerstone of traveling performances. Remember the crystal-gazing Omaha magician who becomes the Wizard in L. Frank Baum’s The Wonderful Wizard of Oz (an American classic with magic and illusion at its core)? The rise of television after World War II offered magicians an opportunity to branch out from their vaudeville roots. Today, David Copperfield is perhaps one of the best-known practicing illusionists. The 2013 blockbuster movie Now You See Me took illusion to a whole new level with the assistance of magic consultant (yes, there is such a profession) David Kwong.

Do-it-yourselfers nostalgic for the simple but clever magical props popular nearly a century ago can still follow Greene’s detailed spiritphone instructions. Some woodworking knowledge is a prerequisite, and a few modernizations might make the trick more relatable for a contemporary audience. For instance, a Bluetooth earbud or headset could replace the tethered receiver. An enterprising DIY magician might even connect it to their smartphone so a prerecorded name could be whispered into the volunteer’s ear to match the secret name on the spiritphone’s display. Oh, and you’ll want to bring your own hat. It’s not likely that anyone in a 2020s audience will be able to offer a 1920s-style felt hat equipped with a paper-slip-concealing interior sweatband.

Read more PopSci+ stories.

An electric cow, a robot mailman, and other automatons we overestimated
https://www.popsci.com/technology/robot-fails/ | Sat, 15 Jul 2023
A look back at some robotic inventions that didn't quite get there.

In the series I Made a Big Mistake, PopSci explores mishaps and misunderstandings, in all their shame and glory.


In Hollywood, robots have come in many shapes and sizes. There’s the classic, corrugated-tubing-limbed Robot from the television series Lost In Space (1965); the clunky C-3PO and cute R2-D2, the Star Wars (1977) duo; the tough Terminator from The Terminator (1984) played by Arnold Schwarzenegger; the mischievous Johnny 5 from Short Circuit (1986); the kind-hearted, ill-fated Sonny in I, Robot (2004); and WALL-E (2008), the endearing trash-collecting robot. Robot-reality, however, still lags behind robot-fiction by quite a bit. Even Elon Musk’s October 2022 debut of Optimus—a distinctly masculine humanoid-frame robot prototype built by Tesla that, for the first time, wobbled along sans cables—failed to wow critics, who compared it to decades-old Japanese robotics and noted that it lacked any differentiating capabilities. 

And yet, automatons—self-propelled machines—are not new. More than two millennia ago, Archytas, an inventor from ancient Greece, built a pulley-activated wooden dove, capable of flapping its wings and flying a very short distance (a puff of air triggered a counterweight that set the bird in motion). Around the 12th century, Al-Jazari, a prolific Muslim inventor, built a panoply of automatons, including a water-powered mechanical orchestra—a harpist, a flutist, and two drummers—that rowed around a lake by means of mechanical oarsmen. Leonardo da Vinci’s notebooks are peppered with detailed sketches of various automatons, including a mechanical knight, purportedly demonstrated in 1495, that could sit up, wave its arms, and move its head. But it was Czech playwright Karel Čapek, in his 1920 play R.U.R. (Rossum’s Universal Robots), who first coined the word “robot” for a distinct category of automaton. Robot comes from the Czech robota, which means forced labor. As Popular Science editor Robert E. Martin wrote in December 1928, a robot is a “working automaton,” built to serve humans. Isaac Asimov enshrined Čapek’s forced-labor concept in his three laws of robotics, which first appeared in 1942 in his short story “Runaround.”

Predicting the future is fraught with peril, especially for the science writer enthralled by the promise of a new technology. But that hasn’t stopped Popular Science writers and editors from trying. Past issues are peppered with stories of robots ready to take the world by storm. And yet, our domestic lives are still relatively robot free. (Factory automation is another story.) That’s because we underestimate just how sophisticated humans can be, taking on menial tasks with ease, like sorting and folding laundry. Even in the 21st century, service and domestic robots disappoint: design-challenged, single-purpose machines, like the pancake-shaped vacuums that knock about our living rooms. Advances in machine learning may finally add some agility and real-world adaptability to the next generation of robots, but until we get there (if we get there), a look back at some of the stranger robotic inventions, shaped by the miscalculations and misguided visions of their human inventors, might inform the future. 

Robots for hire

Popular Science August 1940 Issue

Looking for “live” entertainment to punctuate a party, banquet, or convention? Renting out robot entertainers may have roots as far back as 1940, according to a Popular Science story that described the star-studded life of Clarence the robot. Clarence, who resembled a supersized Tinman, could walk, talk, gesture with his arms, and “perform other feats.” More than eight decades later, however, robot entertainers are only slightly more sophisticated than their 1940s ancestor, even if they do have sleeker forms. For instance, Disney deploys talking, arm-waving, wing-flapping robots to animate rides, but they’re still pre-programmed to perform a limited range of activities. Chuck E. Cheese, which made a name for itself decades ago by fusing high-tech entertainment with the dining experience, has been phasing out its once-popular animatronics. Pre-programmed, stiff-gestured animal robots seem to have lost their charm for kiddos. They still can’t dance, twirl, or shake their robot booties. Not until Blade Runner-style androids hit the market will robot entertainment be worth the ticket price.

Animatronics that smoke, drink, and—moo

Popular Science May 1933

In May 1933, Popular Science previewed the dawn of animatronics, covering a prototype bound for the 1934 Chicago World’s Fair. The beast in question was not prehistoric, did not stalk its prey, and had no teeth to bare, but it could moo, wink its eyes, chew its cud, and even squirt a glassful of milk. The robotic cow may have been World’s Fair-worthy in 1933, but by 1935, Brooklyn inventor Milton Tenenbaum upped the stakes when he introduced a life-like mechanical dummy that, according to Popular Science, was known for “singing, smoking, drinking, and holding an animated conversation.” Tenenbaum proposed using such robots for “animated movie cartoons.” Although Hollywood was slow to adopt mooing cows and smoking dummies, Tenenbaum may have been crystal-balling the animatronics industry that eventually propelled blockbuster films like Jaws, Jurassic Park, and Aliens. Alas, with the advent of AI-generated movies, like Waymark’s The Frost, released in March 2023, animatronic props may be doomed to extinction.

The robot mailman

Popular Science October 1976 Issue

In October 1976, Popular Science saw the automated future of office mail delivery, declaring that the “Mailmobile is catching on.” Mailmobiles were (past tense) automated office mail carts that followed “a fluorescent chemical that can be sprayed without harm on most floor surfaces.” Later models used laser-guidance systems to navigate office floors. Mailmobiles were likely doomed by the advent of email, not to mention the limitations of their singular purpose. But in their heyday they were loved by their human office workers, who bestowed them with nicknames like Ivan, Igor, and Blue-eyes. A Mailmobile even played a cinematic role in the FX series The Americans. Despite being shuttered in 2016 by their manufacturer, Dematic (the original manufacturer was Lear Siegler, which also made Lear jets), there’s no denying their impressive four-decade run. Of course, the United States Postal Service employs automation to process mail, including computer vision and sophisticated sorting machines, but you’re not likely to see your mail delivered by a self-driving Mailmobile anytime soon.

Lawn chair mowers


Suburban homeowners would probably part with a hefty sum for a lawn-mowing robot that really works. Today’s generation of wireless automated grass-cutters may be a bit easier to operate than the tethered type that Popular Science described in April 1954, but they’re still sub-par when it comes to navigating the average lawn, including steep grades, rough turf, and irregular geometries. In other words, more than a half century after their debut, the heart-stopping price tags on robot lawn mowers are not likely to appeal to most homeowners. Sorry suburbanites—lawn-chair mowing is still a thing of the future.

Teaching robots

Popular Science May 1983 Issue

It was in the early 1980s that companies began to roll out what Popular Science dubbed personal robots in the May 1983 issue. With names like B.O.B., HERO, RB5X, and ITSABOX for their nascent machines, the fledgling companies had set their sights on the domestic service market. According to one of the inventors, however, there was a big catch: “Robots can do an enormous number of things. But right now they can’t do things that require a great deal of mechanical or cognitive ability.” That ruled out just about everything on the home front, except, according to the inventors and, by extension, Popular Science, “entertaining guests and teaching children.” Ahem. Teaching children doesn’t require a great deal of cognitive ability? Go tell that to a teacher. Gaffes aside, fast-forward four decades and, with the capabilities of large language models demonstrated by applications like OpenAI’s ChatGPT, we might be on the cusp of building robots with just enough cognitive ability to somewhat augment the human learning experience (if they ever learn to get the facts right). As for robots that can reliably fold laundry and cook dinner while you’re at work? Don’t hold your breath.

A 1967 foot-powered tool you could build today—if you wanted to
https://www.popsci.com/diy/diy-foot-pedal-history/ | Tue, 25 Apr 2023
This vintage Popular Science tutorial invokes spinning wheels and DIY guitar pedals.

FROM ANCIENT treadwheel cranes to modern guitar effects pedals, the creative energy of our feet has come a long way. Roman aqueducts, medieval castles, and Gothic cathedrals were raised, mega-stone by mega-stone, by machines powered by human-size hamster wheels. Treadles, or foot levers, made their debut in the Middle Ages to power looms and spinning wheels. The stair climber got its start in 1818 as a prison treadmill—not to intentionally torment England’s inmates (as sometimes alleged), but to put their feet to work, turning gears to pump water and grind corn. In the late 19th century, pedal power took a fresh turn as artisans used the wheels of stationary bikes to spin up their wood lathes, bandsaws, drill presses, and knife grinders. By the early 20th century, even percussionists were getting in on foot action, adding pedals to drum sets and possibly taking pedal-effects cues from the centuries-old piano.

By 1967, when Popular Science electronics editor Ronald M. Benrey offered instructions for building a footswitch to control handheld electric tools, pedal power looked altogether different. When electricity and combustion engines had rolled out half a century earlier, our feet could suddenly spin up powerful cranes or propel cars at dizzying speeds with little more than a tap. No more sweating, huffing, and puffing. But even in 1967, using footswitches in home workshops was something of a novelty, especially the speed-control variety. Of course, some stationary home tools, like sewing machines, had standardized footswitches decades earlier.

In fact, Benrey wasn’t the first Popular Science editor to offer pedal-power DIY instructions. In December 1943, longtime contributor Walter E. Burton explained how to motorize a treadle sewing machine. His instructions include an electric motor controlled by a mechanical footswitch that worked like a clutch, engaging and disengaging the motor from the sewing machine’s flywheel. Burton’s design might have been inspired by a Popular Science contributor who, nearly a decade earlier (October 1935), shared how he’d used a fan motor to automate his sewing machine, connecting it to the treadle. Perhaps the most innovative use of pedal power comes from a December 1880 Popular Science story that describes the use of wind- and water-powered motors (electricity was not readily available then) controlled by foot pedals to drive a variety of home workshop tools, including sewing machines. (Alternative energy DIYers might be inspired by 1880 domestic motor designs.)

The cover of the February 1967 issue of Popular Science put the pedal to the metal with car-centric stories and plenty of DIY projects. Popular Science

What makes Benrey’s 1967 foot pedal unique is its speed-control feature. In addition to a simple on-off switch (referred to as “full” power), Benrey describes how to add a variable setting. When the switch is flipped to “variable,” the speed of the tool, such as the electric drill featured in the story, can be controlled with your foot. The catch is that in variable mode, the foot pedal delivers only about three-quarters of the electric current required to run the tool at full speed—a limitation of the added electronics. For full power, Benrey’s added switch must be flipped from variable to full. 
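
That three-quarters figure makes sense if you assume the simplest case: an SCR conducts on only one half of each AC cycle, so even at full conduction the pedal can never pass the whole waveform. A quick numerical check (unit amplitude, ideal switch) puts the half-wave RMS voltage at about 71 percent of the full sine, close to the story’s estimate.

```python
import math

N = 100_000
full = half = 0.0
for i in range(N):
    v = math.sin(2 * math.pi * i / N)   # one full AC cycle, unit amplitude
    full += v * v
    half += v * v if v > 0 else 0.0     # SCR conducts on the positive half only

ratio = math.sqrt(half / N) / math.sqrt(full / N)
print(f"Half-wave RMS is {ratio:.0%} of full-wave RMS")  # ~71%
```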

Today, footswitches, including the variable-speed variety like Benrey’s, can be purchased, ready to use, for under $20. Of course, DIY enthusiasts can build one using any number of instructional videos. And throwback DIYers nostalgic for 1960s-style ingenuity could even build Benrey’s model. As for the silicon-controlled rectifier (SCR), which varies the tool’s speed by translating foot pressure into a corresponding amount of electric current, a variety of modern thyristors will do the trick. Being handy with a soldering iron and comfortable with wiring diagrams is a prerequisite, though. Whether you choose to build your own switch or not, if you’re the creative type who regularly works with power tools, you might want to add pedal power to your arsenal—you’ll probably wonder why it took you so long to unleash the power in your feet.

Read more PopSci+ stories.

A DIY voice assistant from 1950 could mute a radio and control toy trains
https://www.popsci.com/diy/diy-voice-assistant-vintage/ | Tue, 24 Jan 2023
The ambitious Popular Science tutorial required the use of a soldering iron.

LONG BEFORE SIRI, there was Audrey. But even before Audrey, there was the blueprint for the name-challenged Voice-trol.

In June 1950, Popular Science contributing writer Karl Greif, an electronics technician from upstate New York, offered do-it-yourself instructions for building a voice-activation switch that ultimately became known as Voice-trol. At the time, voice activation was such a novelty that DIY was just about the only option available to enthusiasts. “There’s power in your voice,” Greif wrote. “It can be used to make many kinds of apparatus heed your wishes.” Today, voice-command alternatives abound, but for most “apparatus,” it takes at least some DIY mojo, such as the ability to install a hub and integrate it with appliances, to realize the power in your voice.

Greif’s 1950 instructions came with electrical schematics and a list of parts that included resistors, capacitors, switches, a transformer, and a microphone. Since his designs required users to dissect the electronic entrails of the device being voice-activated, his DIY voice activation was not for amateurs. Familiarity with soldering irons and voltmeters was a prerequisite. Still, his device responded to simple voice commands—or more accurately, to sounds—to control a toy train, mute a radio during commercials, or open a garage door. For instance, a single-syllable word like stop would trigger an electric relay and stop the train (any single-syllable word could do, or just a clap). A double-syllable word, like forward, would trigger the relay twice and start the train moving. Greif even gave instructions for a bell-ringing baby monitor. The voice-command unit could be placed beside a crib and wired to an alarm bell installed in a different room. Whenever the baby cried, the alarm bell would ring. Four years later, in the magazine Popular Electronics, Greif described a voice-activation prototype he’d developed and dubbed Voice-trol, which was designed to plug into then-popular toy train models with less effort and assembly. 
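
The logic behind Greif’s syllable counting is easy to mimic in software: watch a loudness envelope and fire the “relay” once per burst. The sample values and threshold below are invented for illustration; a real build would read them from a microphone.

```python
THRESHOLD = 0.5  # loudness level that counts as a syllable or clap (arbitrary)

def count_bursts(envelope: list[float]) -> int:
    """Count rising edges where loudness crosses the threshold (one per burst)."""
    bursts, above = 0, False
    for level in envelope:
        if level >= THRESHOLD and not above:
            bursts += 1
        above = level >= THRESHOLD
    return bursts

stop_cmd = [0.1, 0.9, 0.8, 0.1, 0.1]          # "stop": one burst -> one relay pulse
forward_cmd = [0.1, 0.9, 0.2, 0.8, 0.9, 0.1]  # "for-ward": two bursts -> two pulses
print(count_bursts(stop_cmd), count_bursts(forward_cmd))  # prints: 1 2
```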

In 1952, Bell Labs debuted a much more sophisticated voice-command machine. Audrey, or Automatic Digit Recognizer, was a room-size computer capable of recognizing the spoken words for numbers zero through nine; it could even automatically dial the numbers.

Voice-control technology has come a long way since Voice-trol and Audrey. Yet even after more than half a century marked by major voice-technology milestones, voice-activated home appliances have not caught on the way Greif envisioned (with the exception of connected or “smart” TVs). While we’ve grown comfortable talking to our devices, powered by today’s popular voice assistants like Amazon’s Alexa, Apple’s Siri, Google’s Assistant, and Microsoft’s Cortana, they are used chiefly to control communications like texts and phone calls, or to operate virtual services like internet search, navigation, online shopping, and music. Unlike their 1950s ancestor, which could detect only sound, they are quite capable of parsing basic voice commands like “Call Mom” or “Play Dire Straits.” But when it comes to controlling physical objects like home appliances, voice activation takes a bit more effort. Not only do you have to take steps to set up such smart appliances, but every device also seems to have its own app and specific commands that require some getting used to, and the appliance may even require voice training if it is not connected to an established voice assistant like Alexa. Even then, some controllers like Google Nest require further direct training. What’s more, for Amazon and Google, at least, voice assistants have reportedly failed to turn a profit—ever. 

Still, if you’re the 2020s version of the 1950s voice enthusiast, the good news is that you won’t need a soldering iron. While it’s still possible to use Greif’s instructions to build his voice-control device, it would fall far short of what’s possible today. Plus, you might run into snags hacking the device into the guts of today’s tightly packed electronics, like a remote-control train set or a clock radio. But a DIY diehard could build a basic voice-recognition command module from scratch (sort of), using a Raspberry Pi (a ReSpeaker 2-Mics Pi HAT, for example, running Google’s AIY Voice Kit software) to develop the voice assistant, then adding a custom keyword-spotting feature with an Arduino Nano (such as the 33 BLE Sense) running a TinyML model trained to spot basic keywords (like “hey, PopSci”). Or just follow the AIY Voice Kit project tutorial.
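
Before committing to the microcontroller route, you can prototype the keyword-spotting idea on an ordinary laptop. This sketch leans on the third-party SpeechRecognition library (which needs a microphone and the PyAudio package, and ships audio to Google’s free web recognizer); it is a stand-in for the TinyML approach above, with “hey popsci” as the assumed wake phrase.

```python
import speech_recognition as sr  # pip install SpeechRecognition pyaudio

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)

try:
    heard = recognizer.recognize_google(audio).lower()
    if "hey popsci" in heard:
        print("Keyword detected: trigger your relay, lamp, or train here")
    else:
        print(f"Heard: {heard}")
except sr.UnknownValueError:
    print("Could not make out any words")
```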

Fortunately, most major appliance manufacturers offer smart appliances that interact with apps and voice assistants. Popular Science explains how to voice-activate your home using voice assistant home hubs like Apple’s Homekit, Google’s Assistant, and Amazon’s Alexa. About seventy years after Voice-trol, however, it still takes some DIY know-how—in navigating wireless connectivity, custom apps, and device idiosyncrasies—to control physical objects with your voice.

Read more PopSci+ stories.

Please do not follow this 1953 tutorial for DIY scuba gear
https://www.popsci.com/diy/build-your-own-scuba-gear/ | Thu, 08 Dec 2022
A 1953 tutorial for building your own underwater breathing apparatus wouldn’t be very practical today, but it sure was optimistic.

WHEN POPULAR SCIENCE associate editor Herb Pfister offered up instructions for do-it-yourself scuba gear in 1953, exploring the deep sea for fun was such a new idea that the acronym SCUBA—self-contained underwater breathing apparatus—was only a year old. The man who coined the term, Christian Lambertsen, was an Army doctor during WWII and developed one of the early untethered underwater breathing devices. It was known as a closed-circuit rebreather, and it scrubbed CO2 from exhalations and recirculated oxygen for the US Navy’s “frogmen” divers. Lambertsen’s device was neither the first nor the last such apparatus. Japanese blacksmith Kinzo Ohgushi fashioned the first known underwater breathing machine in 1918, and marine explorer Jacques Cousteau and French engineer Émile Gagnan followed up Lambertsen’s contraption with the Aqua-Lung, which employed a regulator that delivered compressed air only when the wearer inhaled and vented CO2 into the surroundings.

By the mid-’50s, the experience of exploring the deep in this way was still quite uncommon, but ingenious DIYers wanted in on the action. Pfister described underwater diving as “a brand-new sensation, a feeling of really being out of this world.” In 23 photographed steps, he explained how to repurpose then-ordinary items like surplus CO2 tanks, high-pressure connectors from oxygen-therapy-equipment dealers, and aluminum sheets to build an aqualung—all for about $40 (about $438 in 2022 money).

Today, deep-sea diving is not the fledgling sport it was back then—though it is still a rather high-end hobby—and we recognize that the risks of building your own underwater breathing apparatus likely outweigh any novelty. Scuba gear is readily available for would-be explorers. Plus, some homemade projects, especially those requiring precision-fabricated parts like oxygen regulators, can cost more when you aren’t buying in bulk as manufacturers do. For instance, an economical scuba kit on Amazon will set you back $499, while the individual components can add up to thousands of dollars—especially if you want quality gear from top brands like Aqualung or Cressi.

Still, if you’re motivated to shoulder the potential costs and build your own dive gear from as close to scratch as possible, the parts are out there—although what was cheap and readily available in 1953, like brass pipes, may be expensive now. Besides, we must also consider advances in apparel, like the invention of buoyancy compensators—diving vests with air pockets that allow users to control their rise and descent. Such equipment makes things much easier on your spine than Pfister’s plywood plate and harness.

The biggest concern, however, is safety. Underwater diving regulations have changed quite a bit since the middle of the 20th century. After two people died in California in 1952, Scripps Institution of Oceanography, which led the way in scuba’s nonmilitary adoption in the US, developed a set of rules and regulations for the activity; these include air-quality standards and specifications for breathing masks and helmets. The first edition was released in 1954. Even so, it would be difficult to make any DIY gear conform to the requirements.

In the ’50s, though, Pfister was a bit optimistic as he doled out advice for diving newbies ready to take their garage aqualungs out for a plunge. “Using a diving lung,” he wrote, “is as safe as crossing a street.” He cautioned, however, that even crossing a street has its rules. “You, for the first time, are about to cross into a new medium—deep water.”

This story originally ran in the Fall 2022 Daredevil Issue of PopSci. Read more PopSci+ stories.

Why we still don’t have a vaccine for the common cold
https://www.popsci.com/health/history-common-cold-vaccine/ | Wed, 02 Nov 2022
‘Science Closes In on the Common Cold’ appeared in the November 1955 issue of Popular Science.

For decades, scientists have been on the hunt for a universal common cold vaccine—and they’re still searching.

From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

Feeling yucky? Runny nose, scratchy throat? Maybe a cough, with light chills and aches, possibly a low-grade fever? We’ve all been there. Statistically, everyone comes down with these symptoms multiple times a year. These past few years, it would be tempting to blame some variant of the COVID-19 coronavirus, or SARS-CoV-2, for such symptoms. However, there’s also a strong possibility that it’s a distant cousin on the human virus family tree, one that is responsible for more sick days and visits to the doctor each year than any other pathogen—rhinovirus. Common cold symptoms can be caused by many viruses, but the odds you’re fighting a rhinovirus are high: the virus accounts for as much as half of all common colds.

Traditionally, there has been a certain seasonality to respiratory viruses in the US. Influenza tends to peak in fall and again in early spring, while common colds, such as respiratory syncytial viruses (RSVs), non-COVID coronaviruses, adenoviruses, and rhinoviruses pick up in mid-winter. But COVID-19 seems to have disrupted the normal pattern. “We typically see RSV at the peak of winter season,” says Richard Martinello, a respiratory virus specialist at Yale School of Medicine in Connecticut. “But our hospital’s already full. We’re trying to figure out where to put patients and how to care for them.” It’s not just RSV. “We’re actually seeing occasional kids with pretty severe rhinovirus infections,” Martinello adds, “and adults with severe rhinovirus infections in the hospital this year.” 

Every year, we’re encouraged to get our annual flu shot—and it seems COVID vaccinations are headed down a similar path. Yet, we don’t get one for the common cold. With more than a billion cases each year in the US alone—far more than any other virus, including COVID-19 and the flu combined—it’s hard to overstate the benefit a universal common cold vaccine would have. The hunt for such a vaccine began more than half a century ago, as Popular Science reported in November 1955.

Dating back to the 19th century, a slew of vaccines have been developed for many of humanity’s most pervasive pathogens, from the very first vaccine in 1798 for smallpox to cholera and typhoid in 1896 to the COVID-19 vaccines in 2020—but no common cold vaccine. 

In the 1950s, however, flush with the success of Jonas Salk’s polio vaccine, virologists were convinced it would be just a handful of years before the common cold would be eradicated by vaccine. In the 1955 Popular Science article, prolific virologist Robert Huebner estimated that a vaccine for the common cold might be available to the general public in as little as a year. While Huebner—who is credited with discovering oncogenes (genes with the propensity to cause cancer)—was successful in developing an adenovirus vaccine specifically for pharyngoconjunctival fever, he never fulfilled his quest for a common cold vaccine.

[Related: What’s the difference between COVID, flu, and cold symptoms?]

Although Popular Science’s story focused on Huebner’s 1953 discovery of adenovirus as a root cause for the common cold, it wasn’t until Winston Price’s 1956 discovery that virologists realized rhinovirus was the chief common-cold culprit. Since Price’s discovery, three species of rhinovirus have been discovered (A, B, and C), including more than 150 distinct strains. Plus, a majority of the known rhinovirus genomes have been sequenced in an effort to find commonalities that might serve as the basis for a universal vaccine.

“Considering there are more than 100 types of A and B rhinoviruses,” notes Yury Bochkov, a respiratory virus specialist at the University of Wisconsin School of Medicine and Public Health, “you would have to put all 100 types in one vial of vaccine in order to enable protection” against just A and B rhinoviruses. Add in all the C rhinovirus types (more than 50), then cram in RSV’s virus types (more than 40), and that same vaccine would have to be packed with more than 200 strains. Even then, it would only offer protection against about two-thirds of all common colds. “That was considered the major obstacle in development of those vaccines,” Bochkov says.
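
For the math-minded, Bochkov’s tally pencils out like this, using the floor values quoted above:

```python
rhinovirus_a_b = 100  # "more than 100" A and B types
rhinovirus_c = 50     # "more than 50" C types
rsv = 40              # "more than 40" RSV types

total = rhinovirus_a_b + rhinovirus_c + rsv
print(f"A one-vial cocktail would need {total}+ strains")
# With every known type counted, the total tops 200, and such a shot would
# still cover only about two-thirds of common colds.
```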

When it comes to manufacturing universal vaccines, scientists hunt for the lowest common denominator—a common trait that the vaccine can target—shared by all variants of a virus. Unfortunately, viruses aren’t that cooperative. Breaking them down to find common traits is not so easy. To trigger antibody production, human immune systems must be able to recognize those common viral traits as belonging to an intruder. That means the traits must be exposed, or on the surface of the virus. Traits locked inside the virus particle, or in its capsid structure, are not detectable until after the virus has begun to replicate, which is too late to avoid infection.

Antibodies, which are made of protein-based immunoglobulins such as IgM and IgG, are Y-shaped molecules that continuously circulate through our blood and latch onto invading pathogens, which are recognizable by certain sequences in their surface proteins. Antibodies are capable of disabling the invaders until the white blood cell, or leukocyte, troops can arrive to kill them. The goal of a universal vaccine is to not only find an antibody-triggering trait common across those many distinct types of the same virus, but also find a trait that is slow to mutate—or one that doesn’t mutate at all. In the cases of universal coronavirus and influenza vaccines currently under development, researchers have focused on more than just the surface protein, targeting other viral parts, such as the surface protein’s stalk, that are still detectable by our immune systems but less likely to mutate from one variant to the next.

Viruses travel light; in other words, they don’t carry around the machinery to replicate on their own. Instead, they use their surface proteins to bind to our bodies’ cells, then trick them into replicating virus particles. Coronaviruses, for instance, are known for their distinctive spike surface proteins, which became the focus of COVID-19 vaccines. Similarly, rhinoviruses have their own distinguishing surface protein shaped like a cloverleaf, which plays an essential role in the virus’s ability to hijack cells and replicate. Unfortunately, surface proteins tend to mutate quickly, enabling viruses to shapeshift and evade detection by our immune systems. That’s a chief reason why flu vaccines, and now COVID-19 vaccines, must be updated at least annually.

Fortunately, in the case of RSV, scientists have identified such commonalities. RSV is considered among the most dangerous of common cold viruses, especially for infants and children who are susceptible to respiratory tract infections. After a failed human trial in the 1960s that led to the death of two infants, it took another half century before scientists identified an immutable common trait—RSV’s surface fusion protein, or F protein, which binds to cells. Now, four different vaccines are already in the final, third phase of human trials. “And they’re working,” Martinello notes, “they’re working amazingly well. It’s a very exciting time for RSV right now.”

[Related: Is it flu or RSV? It can be tough to tell.]

But for a common cold vaccine to make a dent in annual infections, protection against rhinovirus must be developed, too. While progress has been made on RSV, the quest for a universal rhinovirus vaccine has received less attention. That may be changing.

Since the 1960s, there have been several human clinical trials of rhinovirus vaccine candidates, although none have been universal. Still, some results have been promising—one trial reduced symptomatic colds from 47 percent to 3.5 percent. However, the vaccines have only been effective on a few of the more than 150 strains. In the 2010s, researchers developed synthetic peptide immunogens capable of triggering immune responses in rabbits exposed to 48 different strains; peptides are the building blocks of proteins, which give cells their shape, and peptide immunogens attract antibodies, encouraging their production. In a 2019 study, researchers identified a way in mice to deprive rhinoviruses (and other viruses) of a specific enzyme they need to replicate. 

In 2016, a 50-valent rhinovirus vaccine, or 50 strains in one shot, was successfully trialed in rhesus macaques, and a vaccine with 25 strains in mice. But even if such vaccines make it into human trials, that leaves more than 100 unaccounted-for rhinovirus strains. 

“What if you could split [all the different strains] into several groups?” Bochkov says. “Then I think you would have higher chances of finding something that would be conserved within a group.” It’s like breaking fractions into similar groups and finding the least common denominator for each—or, in this case, separating out groups of strains with common traits and developing individual vaccines for each, which are all later combined into one super-packed vaccine. That’s precisely the direction research teams like Bochkov’s are heading with rhinovirus species C. Once separate vaccines are developed for individual groups, they might be bundled into a single shot, which is called a polyvalent vaccine. This approach of targeting multiple strains in one shot has already been proven a successful way to control viral diseases. The annual flu vaccine, for instance, is a polyvalent vaccine designed to target three or four of the flu strains most likely to circulate in a given year. Similarly, the new bivalent COVID booster shots create an immune response to both the original strain of SARS-CoV-2 and recent Omicron strains.

[Related: New COVID Omicron boosters, explained]

Better tools for genome sequencing are also on the rise, including AI software, like Google’s AlphaFold, that can be used to analyze surface proteins and predict possible mutations. This, combined with mRNA platform technologies that expedite vaccine development, makes Martinello and Bochkov optimistic that more respiratory virus vaccines will be developed in the coming years. “Maybe we’ll see a flu, COVID, RSV vaccine all combined in one,” Bochkov says, adding that “vaccination would be the way to go in fighting the common cold.”

Even as progress has been made on a universal flu vaccine and a universal coronavirus vaccine, the quest for a universal common cold vaccine has received less attention. That’s in part because public health efforts need to focus vaccine development on the deadliest and most infectious pathogens first. As contagious as common cold viruses are—they spread through droplets that are airborne or left on surfaces—COVID-19 is at least 10 times deadlier than the flu, and the flu is deadlier than the common cold. Still, the common cold can lead to serious complications for people who are immunocompromised or have lung conditions, like asthma and chronic obstructive pulmonary disease.

While the search for a universal common cold vaccine began several decades ago, it is not likely to be fulfilled anytime soon, despite recent advances like the RSV vaccine trials. So, keep those tissues handy and wash your hands frequently. Wearing face masks as a prevention tactic isn’t exclusive to fighting COVID—they also work against the spread of other respiratory illnesses, including the common cold. “We have to be cognizant of what the risks are and thoughtful about how we protect ourselves from getting sick,” Martinello notes. “If you are sick, stay home, keep your kids home, because when you’re out and about, that’s how things further spread.”

And when common cold vaccines do arrive, even if they’re virus-specific at first, don’t hesitate to get your jab.

Could quantum physics unlock teleportation?
https://www.popsci.com/science/quantum-teleportation-history/ | Thu, 20 Oct 2022
The article ‘Teleportation: Beam Me Up, Bob’ appeared in the November 1993 issue of Popular Science.

Physicists are making leaps in quantum teleportation, but it’s still a long way from ‘Star Trek.’

From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

Jetpacks, flying cars, hoverboards, bullet trains—inventors have dreamt up all kinds of creative ways, from science fiction to science fact, to get from point A to point B. But when it comes to transportation nirvana, nothing beats teleportation—vehicle-free, instantaneous travel. If beam-me-up-Scotty technology has gotten less attention than other transportation tropes—Popular Science ran short explainers in November 1993 and September 2004—it’s not because the idea isn’t appealing. Regrettably, over the decades there just hasn’t been much progress in teleportation science to report. However, since the 2010s, new discoveries on the subatomic level are shaking up the playing field: specifically, quantum teleportation.

Just this month, the 2022 Nobel Prize in Physics was awarded to three scientists “for experiments with entangled photons,” according to the Royal Swedish Academy of Sciences, which selects the winners. The recipients’ work demonstrated that teleportation is possible—well, at least between photons (and with some serious caveats on what could be teleported). The physicists—Alain Aspect, John Clauser, and Anton Zeilinger—had independent breakthroughs over the last several decades. The result of their work not only demonstrated quantum entanglement in action but also showed how the arcane property could be a channel to teleport quantum information from one photon to another. While their findings are not anywhere close to transforming airports and train stations into Star Trek-style transporters, they have been making their way into promising applications, including quantum computing, quantum networks, and quantum encryption.

“Teleportation is a very inspiring word,” says Maria Spiropulu, the Shang-Yi Ch’en professor of physics at the California Institute of Technology, and director of the INQNET quantum network program. “It evokes our senses and suggests that a weird phenomenon is taking place. But nothing weird is taking place in quantum teleportation.”

When quantum mechanics was being hashed out in the early 20th century among physicists like Max Planck, Albert Einstein, Niels Bohr, and Erwin Schrödinger, it was becoming clear that at the subatomic particle level, nature appeared to have its own hidden communication channel, called quantum entanglement. Einstein described the phenomenon scientifically in a paper published in 1935, but famously called it “spooky action at a distance” because it appeared to defy the normal rules of physics. At the time, it seemed as fantastical as teleportation, a term coined by writer Charles Fort just four years earlier to describe unexplainable spectacles like UFOs and poltergeists.

“Fifty years ago, when scientists started doing [quantum] experiments,” says Spiropulu, “it was still considered quite esoteric.” As if in tribute to those scientists, Spiropulu has a print honoring physicist Richard Feynman in her office. Feynman shared the 1965 Nobel Prize for his work on quantum electrodynamics, which introduced the graphical shorthand now known as Feynman diagrams.

Spiropulu equates quantum entanglement with shared memories. “Once you marry, it doesn’t matter how many divorces you may have,” she explains. Because you’ve made memories together, “you are connected forever.” At a subatomic level, the “shared memories” between particles enable the instantaneous transfer of information about quantum states—like atomic spin and photon polarization—between distant particles. These bits of information are called quantum bits, or qubits. Classical digital bits are binary, meaning they can only hold the value of 1 or 0, but a qubit can exist in a superposition, with some probability of being 0 and some probability of being 1 at the same time. Qubits’ ability to occupy this continuum of states lets quantum systems process certain kinds of information much faster—and that’s just what physicists are looking for in a system that leverages quantum teleportation.
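To make the distinction concrete, here is a minimal sketch in Python of how a simulated qubit is represented and measured. The amplitudes are an illustrative choice, not values from any experiment described here:

```python
import numpy as np

# A classical bit is 0 or 1. A simulated qubit is a pair of complex
# amplitudes; squaring their magnitudes gives the measurement odds.
qubit = np.array([0.6, 0.8])  # a superposition of |0> and |1>
probs = np.abs(qubit) ** 2    # the Born rule
print(probs)                  # [0.36 0.64]: 36% chance of 0, 64% chance of 1
```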

[Related: Quantum teleportation is real, but it’s not what you think]

But for qubits to work as information processors, they need to share information the way classical computer chips do. Enter entanglement and teleportation. By entangling subatomic particles, like photons or electrons—the qubits—and then separating them, operations performed on one generate an instantaneous response in its entangled twin.

The record distance for separated qubits was set by Chinese scientists, who used quantum entanglement to send information from Tibet to a satellite in orbit 870 miles away. On terra firma, the record is just tens of miles, whether traveling through fiber optic connections or through the air (via line-of-sight lasers).

Qubits’ strange behavior—acting like they’re still together no matter how far apart they’ve been separated—continues to puzzle and amaze physicists. “It does appear magical,” Spiropulu admits. “The effect appears very, ‘wow!’ But once you break it down, then it’s engineering.” And in just the past five years, great strides have been made in quantum engineering to apply the mysterious but predictable characteristics of qubits. Besides quantum computing advances made by tech giants like Google, IBM, and Microsoft, Spiropulu has been spearheading a government- and privately funded program to build out a quantum internet that leverages quantum teleportation.

With some guidance from Spiropulu’s postdoctoral researchers at Caltech, Venkata R. (Raju) Valivarthi and Neil Sinclair, this is how state-of-the-art quantum teleportation would work; a code sketch of the full protocol follows the five steps below (you might want to strap yourself in):

Step 1: Entangle

[Diagram: an unlabeled photon enters a pyramid-shaped crystal and splits into two photons, labeled one and two]

A laser shoots a stream of photons through a special optical crystal that can split photons into pairs. The two photons in a pair are now entangled, meaning they share information. When one changes, the other will, too.

Step 2: Open a quantum teleportation channel

[Diagram: photons one and two, now in two different locations, connected by a dotted line representing the quantum channel]

Then, one of the two photons is sent over a fiber optic cable (or another medium capable of transmitting light, such as air or space) to a distant location. This opens a quantum channel for teleportation. The distant photon (labeled photon one above) becomes the receiver, while the photon that remains behind (labeled photon two) is the transmitter. This channel does not necessarily indicate the direction of information flow, as the photons could be distributed in roundabout ways.

Step 3: Prepare a message for teleportation

[Diagram: a message icon, with dots and dashes representing the encoded message, pointing at a photon labeled three]

A third photon is added to the mix and encoded with the information to be teleported. This third photon is the message carrier. The information is encoded in the photon’s properties, or state, such as its position, polarization, and momentum. (This is where qubits come in, if you think of the encoded message in terms of 0s, 1s, and their superpositions.)

Step 4: Teleport the encoded message

[Diagram: step four, with the photons changing states]

One of the curious properties of quantum physics is that a particle’s state, or properties, such as its spin or position, cannot be known until it is measured. You can think of it like dice: a single die can show one of six values, but its value isn’t known until it’s rolled. Measuring a particle is like rolling dice; it locks in a specific value. In teleportation, once the third photon is encoded, a joint measurement is taken of the second and third photons’ properties, which means their states are measured at the same time and their values are locked in (like viewing the value of a pair of dice). The act of measuring changes the state of the second photon to match the state of the third photon. As soon as the second photon changes, the first photon, on the receiving end of the quantum channel, snaps into a matching state.

Now the information lies with photon one—the receiver. However, even though the information has been teleported to the distant location, it’s still encoded, which means that, like an unrolled die, it’s indeterminate until it can be decoded, or measured. The measurement of photon one needs to match the joint measurement taken on photons two and three. So the outcome of that joint measurement is recorded and sent to photon one’s location, where it can be repeated to unlock the information. At this point, photons two and three are gone, because the act of measuring photons destroys them: photons are absorbed by whatever is used to measure them, like our eyes.

Step 5: Complete the teleportation

[Diagram: photons three and two whited out, meaning they are gone, and photon one with the message decoded]

To decode the state of photon one and complete the teleportation, photon one must be manipulated based on the outcome of the joint measurement, also called rotating it, which is like rolling the dice the same way they were rolled before for photons two and three. This decodes the message—similar to how binary 1s and 0s are translated into text or numeric values. The teleportation may seem instantaneous on the surface, but because the decoding instructions from the joint measurement can only be sent using light (in this scenario over a fiber optic cable), the protocol as a whole transfers information no faster than the speed of light. That’s important, because teleportation would otherwise violate Einstein’s relativity principle, which states that nothing travels faster than the speed of light—if it did, this would lead to all sorts of bizarre implications and possibly upend physics. Now the encoded information in photon three (the messenger) has been teleported from photon two’s position (transmitter) to photon one’s position (receiver) and decoded.

Whew! Quantum teleportation complete. 
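For readers who want to see the bookkeeping, below is a minimal sketch of the textbook three-qubit teleportation protocol, simulated with NumPy state vectors. It follows the same logic as the five steps above (entangle, jointly measure, send two classical bits, correct), but it is a generic simulation, not a model of the Caltech group's optical hardware:

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def apply_1q(gate, qubit, state):
    """Apply a single-qubit gate to one qubit of a 3-qubit state vector."""
    ops = [I, I, I]
    ops[qubit] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2]) @ state

def cnot(control, target):
    """Build the 8x8 controlled-NOT matrix for the given qubits."""
    U = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]  # qubits 0, 1, 2
        if bits[control]:
            bits[target] ^= 1
        U[(bits[0] << 2) | (bits[1] << 1) | bits[2], i] = 1.
    return U

# Steps 1-2: qubit 0 carries the message a|0> + b|1>; qubits 1 and 2
# form an entangled (Bell) pair shared by transmitter and receiver.
a, b = 0.6, 0.8
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
state = np.kron(np.array([a, b]), bell)

# Steps 3-4: entangle the message with the transmitter's half of the
# pair, then measure both near-side qubits (the "joint measurement").
state = cnot(0, 1) @ state
state = apply_1q(H, 0, state)

probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs)           # simulate the measurement
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1  # the two classical bits

# Collapse the state onto the branch consistent with that outcome.
keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1
                 for i in range(8)])
state = np.where(keep, state, 0.)
state /= np.linalg.norm(state)

# Step 5: the receiver corrects its qubit using the two classical bits,
# whose delivery is limited to light speed -- hence no faster-than-light
# messaging.
if m1: state = apply_1q(X, 2, state)
if m0: state = apply_1q(Z, 2, state)

receiver = state.reshape(2, 2, 2)[m0, m1, :]
print(m0, m1, np.round(receiver, 3))  # always ends in [0.6 0.8]
```

Run it a few times: whichever of the four measurement outcomes occurs, the receiver's qubit always lands in the original message state once the corrections are applied.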

Since we transmit digital bits today using light, it might seem like quantum teleportation and quantum networks offer no inherent advantage. But the difference is significant. Qubits can convey much more information than bits. Plus, quantum networks are more secure, since attempts to interfere with quantum entanglement would destroy the open quantum channel.

Researchers have discovered many different ways to entangle, transmit, and measure subatomic information. Plus, they’re upgrading from teleporting information about photons to teleporting information about larger particles like electrons, and even atoms.

[Related: Warp speed space travel just got a tiny bit more realistic]

But it’s still just information being transmitted, not matter—the stuff that humans are made of. While the ultimate dream may be human teleportation, it actually might be a good thing we’re not there yet. 

The Star Trek television and film franchise not only helped popularize teleportation but also glamorized it with a glittery dissolve effect and catchy transporter-tone. The Fly, on the other hand, a movie about teleportation gone wrong, painted a much darker, but possibly scientifically truer picture of teleportation. That’s because teleportation is really an act of reincarnation. Teleportation of living matter is risky business: It would require scanning the traveler’s information at the point of departure, transmitting that information to the desired coordinates, and deconstructing the traveler at the point of departure while simultaneously reconstructing them at the point of arrival—we wouldn’t want errant copies of ourselves on the loose. Nor would we want to arrive as a lifeless copy of ourselves. We would have to arrive with all our beating, breathing, blinking systems intact in order for the process to be a success. Teleporting living beings, at its core, is a matter of life and death.

Or not.

Formidable minds, such as Stephen Hawking, have proposed that the information, or vector state, that is teleported over quantum entanglement channels does not have to be confined to subatomic particle properties. In fact, entire black holes’ worth of trapped information could be teleported, according to this theory. It gets weird, but by entangling two black holes and connecting them with a wormhole (a space-time shortcut), information that disappears into one black hole might emerge from the other as a hologram. Under this reasoning, the vector states of molecules, humans, and even entire planets could theoretically be teleported as holograms.

Kip Thorne, a Caltech physicist who won the 2017 Nobel Prize in Physics for gravitational wave detection, may have best explained the possibilities of teleportation and time travel as far back as 1988: “One can imagine an advanced civilization pulling a wormhole out of the quantum foam and enlarging it to classical size. This might be analyzed by techniques now being developed for computation of spontaneous wormhole production by quantum tunneling.”

For now, Spiropulu remains focused on the immediate promise of quantum teleportation. But it won’t look anything like Star Trek. “‘Beam me up, Scotty?’ No such things,” she says. “But yes, a lot of progress. And it’s transformative.”

High-speed rail trains are stalled in the US—and that might not change for a while https://www.popsci.com/technology/high-speed-trains-hyperloop-history/ Wed, 05 Oct 2022 10:00:00 +0000 https://www.popsci.com/?p=474817
a high speed rail train in black and white over a map of US rail lines in purple in the background
'Rapid Rails' appeared in the June 1992 issue of Popular Science. Popular Science

The US has more miles of railroad tracks than anywhere in the world, but establishing high-speed trains has been slow going.


From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

In just the past year, countries around the world have continued rolling out high-speed trains. France revealed its next generation high-speed train, TGV M, which is larger, more carbon efficient, and travels up to 220 mph. Italy unveiled direct high-speed rail links from Rome’s airport to Naples and Florence. China opened 140 new miles of high-speed rail, while also showcasing a line dedicated for the 2022 Winter Olympics. And Japan, which debuted the bullet train in 1964, will be opening a new 41-mile high-speed rail line from Takeo Onsen to Nagasaki. But here in the US, home to more than 150,000 miles of railroad tracks—the most in the world—it’s been high-speed rail crickets. 

To be fair, Amtrak did announce a new top speed for its Acela train on the Northeast Corridor (NEC): 150 mph, on a 16-mile track segment in New Jersey—still shy of other high-speed rail like China’s recently upgraded Beijing-Wuhan line, which zips along at between 190 and 220 mph. What’s more, California, Texas, Nevada, and the Northeast all have rapid rail projects that have been sputtering along for years.

But three decades ago, in June 1992, Popular Science published a story that predicted high-speed rail would soon launch in major US regions, with more to follow. “Florida recently approved a plan to build a magnetically levitated, or maglev, train system that would begin operating in 1996,” wrote senior contributing editor Chris O’Malley, adding that high-speed rail was going to be dashing through Texas as soon as 1998. Unfortunately, neither project came to fruition. Still, PopSci was not alone in covering the hope for high-speed rail in the US. In August 1992, Scientific American also ran a feature on the promise of maglev trains. In March 1990, The New York Times reported on efforts to build a high-speed rail system linking Ohio cities, a project modeled on Florida’s plans for an anticipated 325-mile high-speed rail line. But none of the high-speed rail plans or projects underway three decades ago succeeded. Zero. Despite the allure of quietly humming past changing scenery at 200 mph or more on an electrically and sustainably propelled ride, without having to navigate airport traffic and security lines, the US is not poised to install high-speed rail anytime soon, anywhere.

“The US is really a very auto-centric country,” says Ian Rainey, a senior vice president at Northeast Maglev, a privately held company associated with Central Japan Railway. “When a lot of countries were investing in high speed rail in the 1950s, 60s, and 70s, the United States was building out the interstate highway system.” He adds that once such highway systems are built out, “you want to keep investing in them, keep them in good shape. And that takes money.” Money that could have been—and could still be—spent on rapid rails.

When it comes to achieving high transit speeds on terra firma, there are three main contenders, each requiring unique technology and engineering: high-speed rail (HSR), maglev, and hyperloop. If we were to place them on a rapid-rail reality meter, HSR would score a 10 out of 10 (widely available commercially, mature tech); maglev would earn a 5 (limited commercial use, extensive prototypes); and hyperloop would rank a 2 (early prototypes, a long way from commercial deployment).

Japan was the first to debut HSR in 1964, when it opened the Shinkansen (meaning new trunk line, also well known as a bullet train) between Tokyo and Osaka just in time for the ’64 Olympics. HSR’s advantage over other contenders is that it uses standard gauge tracks, although the tracks must be flat (low gradients) and straight to achieve its top speeds of 220 mph; any curves must be gentle. The trains (known as rolling stock) are also more streamlined than conventional trains, have more powerful engines, and some are designed to tilt as much as 8 degrees to hug the track on turns. HSR trains can theoretically share tracks with regular trains, as long as route design and signaling systems support the speed disparities. Amtrak has been making such upgrades for decades to its NEC main line to accommodate its Acela trains.
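Why must curves be gentle? Circular-motion physics: the sideways acceleration passengers feel in a curve grows with the square of speed, so doubling speed quadruples the minimum comfortable curve radius. Here is a rough sketch; the 1 m/s² comfort limit is an illustrative assumption, not a figure from the article (real limits depend on track cant and local regulations):

```python
def min_curve_radius_m(speed_mph: float, max_lateral_accel: float = 1.0) -> float:
    """Smallest curve radius keeping lateral acceleration v^2/r in bounds."""
    v = speed_mph * 0.44704  # convert mph to m/s
    return v ** 2 / max_lateral_accel

print(f"{min_curve_radius_m(79):,.0f} m")   # conventional US speed: ~1.2 km
print(f"{min_curve_radius_m(220):,.0f} m")  # HSR top speed: nearly 10 km
```

A curve radius approaching 10 kilometers at 220 mph is why old, winding alignments can't simply be re-signaled for true high-speed running.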

[Related: This super-fast jet train would tap into a whole new field of physics]

The problem with upgrading old tracks, regardless of location, is that they weren’t designed for high speed. “The alignment was laid out a long, long time ago,” Rainey notes. “So now you’ve got a fairly curved track, with a lot of residential and commercial area developed around it in the past 100 to 150 years. And so it’s very difficult to straighten it out.” That’s why Amtrak’s trains can’t achieve higher speeds, and probably never will on the NEC main line, which runs mostly above ground through densely populated regions.

Where HSR trains rarely travel faster than 220 mph, magnetic levitation, or maglev, has the potential to achieve travel speeds greater than 300 mph, while being quieter and more energy efficient than HSR. Magnetic levitation transportation was first proposed by rocket-engine inventor Robert Goddard in a 1909 issue of Scientific American. Goddard’s early concept also incorporated features that can be found in today’s hyperloop designs, such as partial-vacuum tunnels to reduce drag. At the time, electromagnetic transportation possibilities were also being explored by other inventors, although none would even be prototyped until the 1970s. Maglev trains run on concrete guideways lined with electromagnets that repel the magnetized cars, elevating them millimeters to inches above the track (the height varies depending on the levitation technique). The motor, known as a linear induction motor, is not in the train but on the guideway, using alternating magnetic poles like a conveyor belt to propel the train forward and slow it down (you can make your own mini model levitating train at home). Because maglev trains require entirely new guideways, cars, and power specifications, they must be built from scratch. Despite their decades-long allure, implementation costs can be prohibitive relative to HSR. Today there are only six operational maglev trains—three in China, two in South Korea, and one in Japan. Only one qualifies as high speed: China’s Shanghai maglev, which runs 18.6 miles from a subway station to the airport and reaches 268 mph during the 7-minute trip.

a popular science magazine cover with an illustration of a high speed rail train. the title reads '310-mph maglev trains for U.S. cities rapid rails'
The June 1992 cover of Popular Science. Popular Science

As rapid rails reach for higher speed and efficiency, however, maglev may finally find a wider role—and offer a more appealing venture. Central Japan Railway has been perfecting a new kind of maglev powered by superconductivity, which is capable of achieving speeds greater than 300 mph, well above HSR limits. Superconducting maglev uses a wire, or coil, chilled to -452°F to reduce electrical resistance and generate a magnetic force that is more powerful and requires less energy than a conventional electromagnet. This allows for higher propulsion speeds. In Japan, plans are well underway to install a new superconducting maglev train alongside the renowned Shinkansen bullet train.

In the US, Rainey’s company, Northeast Maglev, has been collaborating with Central Japan Railway to build a superconducting maglev train between Washington, DC and New York City. On Amtrak, that trip currently takes 2 hours 35 minutes nonstop, but on a superconducting maglev train, passengers would arrive at their destination in about an hour. Since Amtrak’s main line can’t accommodate very high speeds, Northeast Maglev sees an opportunity for a new 300 mph train running between the Northeast’s most populous cities. To avoid right-of-way issues, most of the new train line will run underground through deep tunnels. But maglev trains require large tunnels—even larger than the century-old, low-slung New York subway tunnels—since they must accommodate multiple guideways and a high-speed form factor (straight and level). That’s where hyperloop tunnels may have an edge.

Hyperloop transportation is the most futuristic rapid-rail contender. Although Elon Musk is often credited for hyperloop designs thanks to his 2013 Hyperloop Alpha whitepaper, the core concept has been around for over a century. As early as 1909, Goddard developed a hyperloop-like design, outlining the core components: airtight tubes and cars propelled on a cushion of either air or magnetism. Later, in August 1961, PopSci published a story featuring cars traveling through aeroduct pipes on cushions of air. After Musk’s whitepaper, startups and investors poured money and interest into hyperloop designs, resulting in a trial run of Virgin Hyperloop in November 2020 outside Las Vegas, Nevada. But despite the resurgence, there are only a handful of prototypes underway around the world.

[Related: The first hyperloop passengers just took a short but important ride]

“I think the [Loop] concept is intriguing and potentially makes a lot of sense,” Rainey says. But he doesn’t see it competing with maglev designs because, while high speed, hyperloops are geared toward individuals driving their own cars versus mass transit. That’s why their tunnels can be smaller and faster to bore.

For a country with the largest railway network in the world, it may seem counterintuitive that the US has been unable to debut a single high-speed rail system. That said, Amtrak markets its Acela train as high speed, which the company can claim because there is no industry standard for a train to be considered “high-speed rail.” But with a 150 mph top speed achievable only for short distances, and an overall 68 mph average, it’s really not much faster than a typical commuter train with short high-speed segments. By contrast, its European and Asian counterparts regularly top more than 200 mph with average speeds well over 100 mph. Despite its world-leading size, the US rail system moves mostly freight, not people. Less than 15 percent of US rail lines are used by passenger trains. When viewed through the lens of passenger-miles traveled by train, as a country the US does not even make the world’s top ten. One statistic kept by the US Bureau of Transportation explains why: In 2019, the US logged 3.75 trillion passenger-miles driving cars and motorcycles (add another 2 trillion for trucks), but commuted only 12.7 billion passenger-miles riding trains. On a US passenger-mile pie chart, train travel would be about the width of a hair.
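The arithmetic behind that hairline, using only the figures cited above, fits in a few lines:

```python
# 2019 US passenger-miles, from the Bureau of Transportation figures above
car_miles = 3.75e12 + 2.0e12  # cars and motorcycles, plus trucks
train_miles = 12.7e9

share = train_miles / (car_miles + train_miles)
print(f"{share:.2%}")  # ~0.22% of the pie: hairline-thin
```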

As Rainey points out, the US has been a car-centric culture for more than a century. Not even Elon Musk, with his hyperloop hoopla, is likely to change high-speed rail’s destiny in the near future. But that doesn’t mean there’s no place in the US for rapid rails. When they do eventually arrive—and they’re coming, on the slow train—their scope will likely be limited to specific cities and travel routes, like the projects underway in California and Texas, or in highly congested regions like the Northeast. That’s because the high-speed rail experience and economics work best when a few key travel conditions are met: first, when trains can move quickly from city to city with few stops, linking dense urban centers with limited suburban sprawl; second, when there’s enough space for new or modified rail infrastructure, including underground; and third, when the major competition—cars and planes—can no longer expand because roadway and airport capacity has run out. But even if certain routes between select cities (like in-progress projects between Los Angeles and San Francisco in California) find popular passenger demand, it’s unclear if high-speed rail would catch on to the extent it has in Japan, Europe, and China. In the car-dominated, airport-saturated US, only a handful of places can check all the boxes.

“If you can get that sweet spot of big populations that are 100 to 300 miles apart from each other,” Rainey says, “I think you’ve got a winner for high-speed rail.” Citing the billions of dollars allocated for high-speed rail in the Infrastructure Investment and Jobs Act passed by Congress in 2021, he adds that the US may be at a tipping point where some of the projects underway will finally come to fruition. As for Northeast Maglev, Rainey says, “maybe by the early 2030s, we’ll be able to buy a ticket from DC to Baltimore.”

Is it finally time for a permanent base on the moon? https://www.popsci.com/science/moon-base-history/ Wed, 21 Sep 2022 14:00:00 +0000 https://www.popsci.com/?p=471249
a black, white, and purple stylized illustration of an astronaut on the moon with equipment intended to make a moonbase
'A manned base on the moon?' appeared in the April 1952 issue of Popular Science. Popular Science

The upcoming Artemis mission is NASA's initial step to create a lunar outpost—but are we really ready to establish long-term bases beyond Earth?


From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

Lately, all eyes are turned towards the moon. NASA has another launch attempt tentatively scheduled next week for the highly anticipated Artemis 1 uncrewed mission to orbit Earth’s satellite, one of the first steps to set up an outpost on the lunar surface. But humans—and science fiction writers—have long imagined a moon base, one that would be a fixture of future deep space exploration. About five years before Sputnik and 17 years before the Apollo missions, the chairman of the British Interplanetary Society, Arthur C. Clarke, penned a story for the April 1952 issue of Popular Science describing what he thought a settlement on the moon could look like. Clarke, who would go on to write 2001: A Space Odyssey in 1968, envisioned novel off-Earth systems, including spacesuits that would “resemble suits of armor,” glass-domed hydroponic farms, water mining and oxygen extraction for fuel, igloo-shaped huts, and even railways. 

“The human race is remarkably fortunate in having so near at hand a full-sized world with which to experiment,” Clarke wrote. “Before we aim at the planets, we will have had a chance of perfecting our techniques on our satellite.” 

Since Clarke’s detailed moon base musings, PopSci has frequently covered the latest prospects in lunar stations, yet the last time anyone even set foot on the moon was December 1972. Despite past false starts, like the Constellation Program in the early 2000s, NASA’s Artemis program aims to change moon base calculus. This time, experts say that the air—and attitude—surrounding NASA’s latest bid for the moon is charged with a different kind of determination. 

“You can talk to anyone in the [space] community,” says Adrienne Dove, a planetary scientist at the University of Central Florida. “You can talk to the folks who have been around for 50 years, or the new folks, but it just feels real this time.” Dove’s optimism doesn’t just come from the Artemis 1 rocket poised for liftoff at Kennedy Space Center. She sees myriad differentiating factors this time, including the collaboration between private companies and NASA, the growing international support for the space governance framework, the Artemis Accords, and the competition from rival nations like China and Russia to stake out a lunar presence. Perhaps one of the biggest arguments from moon base supporters is the need for a stepping stone to send humans even deeper into space. “We want to learn how to live on the moon so we can go to Mars,” Dove says.  

[Related: How Tiangong station will make China a force in the space race]

Mark Vande Hei, a NASA astronaut who returned to Earth in March 2022 after spending a US record-breaking 355 consecutive days on the International Space Station (ISS), underscores the opportunity. “We’ve got this planetary object, the moon, not too far away. And we can buy down the huge risk of going to Mars by learning how to live for long durations on another planetary object that’s relatively close.”

In the decades since Sputnik made its debut as the first artificial satellite in 1957, the Soviet Union deployed several short-lived space stations; NASA’s Apollo missions enabled humans to walk on the moon; NASA’s space shuttle fleet (now retired) flew 135 missions; the ISS has been orbiting the Earth for more than two decades; more than 4,500 artificial satellites now sweep through the sky; and a series of private companies, like SpaceX and Blue Origin, have begun launching rockets and delivering payloads into space. 

But no moon base. 

That’s because exploring the moon is not like exploring the Earth. It lies 240,000 miles away, on a trajectory that requires slicing through dense atmosphere while escaping our planet’s gravitational grip, then traversing the vacuum of space. And once on the moon, surface temperatures swing between 250°F during the day and -208°F at night. Although there may be water in the form of ice, it will have to be mined and extracted to be useful. The oxygen-deprived atmosphere is so thin it can’t shield human inhabitants from meteor impacts of all sizes or from solar radiation. There’s no source of food. Plus, lunar soil, or regolith, is so fine, sharp, and electrostatically charged that it not only clogs machinery and lungs but can also cut through clothes and flesh. 

“It’s a very hostile environment,” says Dove, whose specialty is lunar dust. She’s currently working on multiple lunar missions, like Commercial Lunar Payload Services or CLPS, which will deploy robotic landers to explore the moon in advance of humans arriving on the future crewed Artemis missions. While Dove acknowledges the habitability challenges, she’s quick to cite a range of solutions, starting with the initial tent-pitching location: the moon’s south pole. “That region seems to be rich with resources in terms of ice, which can be used as water or as fuel,” Dove says. Plus, there’s abundant sunlight on mountain peaks, where solar panels could be stationed. She adds that “there might be some rare earth elements that can be really useful.” Rare earth elements—there are 17 metals in that category—are, well, rare on Earth, yet they’re essential to electronics manufacturing. Finding them on the moon would be a boon.

A PopSci story in July 1985 detailed elaborate plans proposed by various space visionaries to colonize the moon and make use of its resources. Among the potential technologies were laboratory and habitat modules, a factory to extract water and oxygen for subsistence and fuel, and mining operations for raw moon minerals—a precious resource that could come in handy and provide income for settlers. While NASA may provide the needed boost to get a moon base going, it’s the promise of an off-world gold rush for these rare, potentially precious elements that could solidify and expand it. 

“My hope is that this is just the beginning of a commercial venture on the Moon,” Vande Hei says. He’s looking forward to seeing how businesses will find ways to be profitable by making use of resources on the moon. “At some point, we’ve got to be able to travel and not rely on the logistics chain starting from Earth,” Vande Hei adds, taking the long view. “We’ve got to be able to travel places and use the resources.”

[Related: Space tourism is on the rise. Can NASA keep up with it?]

And space is lucrative. In 2020, the global space industry generated roughly $370 billion in revenues, a figure based mostly on building rockets and satellites, along with the supporting hardware and software. Morgan Stanley, the US investment bank, estimates that the industry could generate $1 trillion in revenue in less than two decades, a growth rate predicted to be driven in no small part by the US military’s new Space Command branch. But those rising numbers mostly reflect economic activity in Earth’s orbit and what it might take to get set up on the moon; they do not capture the potential of converting the moon itself into an economic powerhouse. What happens next is anyone’s guess. The big dollar signs are one reason, no doubt, that the tech moguls behind private ventures like SpaceX and Blue Origin are investing heavily in space now.

The progress towards deeper space travel—and potential long-term human colonization on the moon or beyond—begs for larger ethical and moral conversations. “It’s a little bit Wild West-y,” says Dove. Although the Outer Space Treaty of 1967 and the more recent Artemis Accords strive “to create a safe and transparent environment which facilitates exploration, science, and commercial activities for all of humanity to enjoy,” according to NASA’s website, there are no rules or regulations, for instance, to govern activities like mining the moon or extracting its valuable rare earth elements for private profit. “There’s a number of people looking at the policy implications and figuring out how we start putting in place policies and ethics rules before all of this happens,” Dove adds. But if the moon does not cough up its own version of unobtanium—the priceless element mined in the film Avatar—or if regulations are too draconian, it will be difficult for a nascent moon economy to sustain itself before larger and more promising planetary outposts, like Mars, come to fruition and utilize its resources. After all, building and sustainability costs have been leading obstacles to establishing a moon base ever since the Apollo program spurred interest in more concrete plans.

Dove’s not really worried that private companies will pull out of the space sector—there’s little doubt they will find a way to profit. Rather, she views politics as the moon base program’s chief vulnerability. “Politics always concerns me with any of these big endeavors,” she adds. Not only domestic politics but international politics will be at play. “We see that with the ISS.”

As a retired military officer who was living on the ISS with Russian cosmonauts when Russia invaded Ukraine, Vande Hei also worries about international conflicts derailing space programs. “If we have a world war in Europe, if we’re just struggling to exist [on Earth], exploring space is not going to be at the top of the priority list.” But he also sees a bright side. He views international competition—or a moon base race—as a healthy way to create a sense of urgency. Vande Hei estimates that “a moon base is something we could do within [this] generation.”

Dove also sees the opportunities that laboratory facilities on the moon could open up for future space research—including her own. “The moon is very interesting in terms of understanding the history of Earth,” she says. “I would love to go do science on the moon.”

The centuries-long quest to map the seafloor’s hidden secrets https://www.popsci.com/environment/seafloor-mapping-history/ Wed, 14 Sep 2022 14:00:00 +0000 https://www.popsci.com/?p=467995
a schematic of a satellite beaming down to a dish to study the ocean floor
'Mapping the sea' appeared in the February 1985 issue of Popular Science. Popular Science

Ocean explorers have long tried to survey the contours of the seafloor, but today's charts still pale in comparison to those of distant planets.


From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

In 1984, marine geologists finally got a long-anticipated glimpse of our planet unseen. After crunching satellite data for 18 months, a geophysicist at Columbia University’s Lamont-Doherty Geological Observatory, William Haxby, revealed a stunning new panorama of the seafloor. It was the first time anyone spied a worldwide picture of what lay beneath the ocean in such detail—volcanoes, underwater mountains, fracture zones, and trenches. “Haxby’s maps of the world’s seafloors reveal a terrain as diverse as any found on the seven continents,” journalist and science writer Marcia Bartusiak reported for that year’s February issue of Popular Science, capturing the scientific community’s palpable excitement. At the time, the Martian landscape was more familiar. 

Nearly four decades later, the surfaces of distant planets are still better imaged than Earth’s ocean floor. Exponential advances in computer processing power, considerably expanded satellite imaging capabilities, and autonomous (robotic) underwater vehicles capable of reaching even the deepest ocean trenches have all advanced deep sea exploration, yet a high resolution map of the vast expanse of our planet’s crust that lies hidden beneath a watery cloak remains incomplete. That may be changing, and none too soon, with climate change bearing down. With ocean waters covering more than 70 percent of Earth’s surface, a clearer idea of the shape and composition of the seafloor will improve our ability to predict storm surge in hurricanes, forecast the path of tsunamis, calculate glacial melt, and monitor struggling marine habitats subject to commercial pressures like fishery management, deep-sea drilling, and mining.

“Seafloor mapping is critical to pretty much everything,” says Caitlin Adams, Operations Coordinator at the National Oceanic and Atmospheric Administration (NOAA) Office of Ocean Exploration and Research (OER), “from national security to blue economy [sustainable ocean] initiatives.” 

Ever since Russia’s Sputnik took to the sky in 1957, artificial satellite networks have employed electromagnetic waves like radar and Lidar to map terrestrial—and extraterrestrial—surfaces. But traditional radar works best on arid topography (like Mars, the Moon, and landmasses on Earth) because it can only penetrate a few meters into water, limiting the reach of eyes in the sky for waterlogged planets like our own. Since water mutes electromagnetic waves, there are only two ways to truly see beneath the sea without journeying down to the seafloor: sonar, or echo-sounding, and gravimetry, which detects gravitational anomalies caused by large objects. In both cases, direct measurement is required—the devices must be underwater (sonar) or at least close to the surface (gravity meter) to work, which means they can only be operated from the hull of a ship. Therein lies the rub. Mapping Earth’s 139 million square miles of seafloor could take as long as 1,000 years (estimates vary widely) for one ship continuously crisscrossing the ocean.  
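That thousand-year figure is easy to sanity-check with back-of-the-envelope math. The swath width and survey speed below are illustrative assumptions (the article notes estimates vary widely), not parameters from any particular survey:

```python
OCEAN_AREA_SQ_MI = 139_000_000  # total seafloor area cited above

swath_width_mi = 3.0   # assumed usable multibeam swath; narrows in shallows
ship_speed_mph = 9.2   # assumed survey speed, roughly 8 knots

coverage_per_hour = swath_width_mi * ship_speed_mph        # sq mi per hour
years = OCEAN_AREA_SQ_MI / (coverage_per_hour * 24 * 365)  # nonstop surveying
print(f"about {years:,.0f} years for a single ship")       # centuries, minimum
```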

Enter satellites. As Bartusiak explained in PopSci, the 1984 breakthrough came from a combination of a new satellite measurement technique known as gravity mapping, or satellite altimetry, and improved computer processing power, which enabled Haxby’s team at Lamont to produce their novel seafloor map in just a few years. 

Even without wind and waves, the surface of the Earth’s ocean is not like a fish tank: it would not be level. The ocean surface undulates, faintly following the ridges and rifts on the floor below. That’s because gravity’s sway over water becomes perceptible with masses as large as underwater volcanoes, underwater mountains (or seamounts), and trenches. Satellites equipped with sensitive altimeters, which measure sea surface height, can detect those subtle variations caused by seafloor topography. For instance, a 1,000-foot seamount will attract enough water to swell the ocean surface by as much as six inches. But since the ocean is a bit like a fluffy blanket, disguising all but rough contours, satellite-based gravity mapping has physical limitations. It can only detect large-scale objects, and then only approximates their shape. For all its novel detail, Haxby’s 1984 map could resolve nothing smaller than about 20 miles across (roughly the footprint of Tanzania’s Mount Kilimanjaro); anything smaller went undetected. While the precision of satellite altimetry has significantly improved in the ensuing decades, there’s only so much a blanket will reveal about what lies beneath. Today, the highest resolution satellite altimetry can achieve on its own is about 1 mile—or seven times the size of Egypt’s Great Pyramid. In contrast, terrestrial maps made from satellite imaging can be as detailed as 50 cm per pixel, or an object the size of a fire hydrant. Only sonar devices, mounted on ships and underwater vehicles, have the ability to produce high resolution seafloor maps.

[Related: Meet the marine geologist mapping the deepest point on Earth]

In 1912, when the Titanic sank in the North Atlantic, sonar had not yet been invented. The calamity inspired a flurry of echo-sounding innovation, and in less than a decade sonar, which uses underwater sound projection to measure distances, became commonplace not only for maritime navigation and naval warfare but also for seafloor mapping.
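The principle behind echo sounding is a single formula: depth is half the round-trip travel time of a ping multiplied by the speed of sound in seawater. The 1,500 m/s figure below is a commonly used average; the true value varies with temperature, salinity, and pressure:

```python
SOUND_SPEED_M_S = 1500.0  # average speed of sound in seawater

def depth_m(round_trip_s: float) -> float:
    """Depth from a sonar ping's two-way travel time."""
    return SOUND_SPEED_M_S * round_trip_s / 2

print(depth_m(8.0))  # a ping that returns after 8 seconds: ~6,000 m of water
```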

In 1957, Lamont researchers Marie Tharp and Bruce Heezen published the first comprehensive seafloor map of any ocean when they released their sonar-based physiographic map of the North Atlantic. It was like a crude ultrasound of the Earth’s ocean.

By revealing the major topographical features of the seafloor, some for the first time, like the Rift Valley of the mid-oceanic ridge, the map seemed to affirm German scientist Alfred Wegener’s continental drift theory, which had been dismissed when it was first proposed in 1912. By the late 1960s, and several sonar-based seafloor maps later, earth scientists had enough evidence to see that the planet’s surface had been fractured into sliding plates, just as Wegener proposed, drifting across the molten mantle below, crashing into one another, or slipping apart. Detailed maps of the seafloor held the key to plate tectonics. The consensus then, and now, was that more detail would reveal even more planetary secrets. 

“When we see smaller features,” says Shannon Hoy, Expedition Coordinator at NOAA OER, “we start to get more knowledge of the underlying geologic and oceanographic processes that are affecting our world.” For example, Hoy, who works with autonomous underwater vehicles (AUVs), points to the “Million Mounds” deep-sea coral reef ecosystem, first mapped in the 2010s, that runs along the Atlantic seaboard from South Carolina to Florida. At a depth of 2,000 feet, the corals, which grow just a few meters high, live in the dark and are fed by the Gulf Stream. It is the largest known deep-sea coral reef. With some living corals as old as 700 years, and thousands of years of coral skeletons at its base, Million Mounds has been likened to an old-growth forest, rich with marine life. “You wouldn’t have seen that with satellite data,” Hoy notes. 

Sonar technology has advanced considerably since the 1950s and ’60s. Today, multibeam systems project fan-shaped sound waves that can reach ocean depths of more than 6 miles. At depths of 2–4 miles, which cover nearly 75 percent of the ocean, multibeam sonar mounted to a ship’s hull can scan up to 5-mile swaths of seafloor at a time, delivering resolutions of 600–1,200 feet (the deeper the sounding, the lower the resolution)—considerably better than satellite altimetry. In shallow coastal regions, it can achieve resolutions of 100–325 feet. And when mounted to AUVs, which get close to the seafloor, 1-meter resolution becomes possible.

In 2017, the United Nations held its inaugural Ocean Conference and declared the 2020s the Ocean Decade, challenging the world’s countries and companies to reverse the decline of the ocean. Among the global initiative’s 10 challenges is to “create a digital representation of the ocean.” At the time the UN made its announcement, only 6 percent of Earth’s ocean floor had been mapped and digitized using modern sonar. But a fresh initiative was announced at the conference to map the Earth’s entire seafloor by the end of this decade: Seabed 2030, a collaborative project sponsored by the General Bathymetric Chart of the Oceans, or GEBCO, and the Nippon Foundation of Japan. By June 2022, Seabed 2030 reported that 23.4 percent of the ocean had been mapped using modern sonar—almost quadrupling the coverage since 2017. Seabed 2030 collects sounding data from any ship willing to share, like NOAA’s research vessel Okeanos Explorer, which has enabled the map to be filled in so rapidly. 

“Going from 20 to 23 percent in the past year sounds insignificant,” notes Adams, citing the 2021 percent-complete figure. “But it’s more than the size of Europe. Every year, we’re chipping away at it.” Still, these mapping efforts will need all the ships they can get to cut the centuries it would take to canvass the whole ocean down to the eight years left in the decade. 

[Related: Jacques Cousteau’s grandson is building a network of ocean floor research stations]

“We don’t, as a project, have the resources to go out and do it ourselves,” says Jamie McMichael-Phillips, project director of Seabed 2030. “We do have the resources to take what people give us and put it on a map.” McMichael-Phillips credits Seabed 2030 with providing the inspiration that “encourages companies, industry, government, philanthropists, and scientists to go out and map the ocean.” Seabed 2030 will even supply recreational boaters who have sonar capability with a special device that captures the data from soundings, enabling them to participate. 

McMichael-Phillips agrees with Hoy that the detail provided by sonar mapping, the gold standard for visualizing the seafloor, offers far more insight into our world than satellite altimetry ever could. He cites several examples, like the 2022 discovery by ocean mappers of one of the world’s largest coral reefs off the coast of Tahiti. The 2-mile-long reef was found at a depth range known as the ocean’s dimly lit Twilight Zone, between 100 and 200 feet.

Still, GEBCO’s publicly available map—a jumble of thin lines representing sonar coverage—has a long way to go. While McMichael-Phillips doesn’t anticipate any technological breakthroughs with sonar or satellite that would expedite seafloor mapping, he does see help coming from uncrewed surface vessels, or USVs, like NOAA’s SailDrone. Having people aboard a vessel, he notes, is one of its most limiting factors, not only weighing it down but also requiring frequent stops for supplies and to avoid hazardous conditions. “I’m a former Royal Navy hydrographic surveyor. I spent a lot of time operating in the Southern Ocean in some pretty hostile conditions,” says McMichael-Phillips. “So by going down the uncrewed route, you remove that limitation.” 

Hoy wouldn’t say whether she thought the Seabed 2030 project would meet its goal. “Ships are relatively small,” she notes, “and the ocean is very big.” But she credits Seabed 2030 with encouraging unprecedented data sharing and collaboration between organizations, creating momentum that will make a worldwide map achievable. 

Whether or not 2030 is realistic, the 2020s may prove to be the decade that the richest and strangest image yet of Earth’s missing and unfamiliar contours comes together, like the slow reveal of a distant alien world beamed across a murky ether. “A direct measurement map of our complete ocean,” says Hoy, “is going to really change the face of what we know.”

When will we finally have jetpacks? https://www.popsci.com/technology/when-will-we-finally-have-jetpacks/ Wed, 24 Aug 2022 14:00:00 +0000 https://www.popsci.com/?p=464609
Archival material from Popular Science's coverage of jetpack attempts
Our first story about jetpacks was published in March 1940, about George de Bothezat’s one-person helicopter—a simple frame with twin props spinning overhead, powered by “a lightweight gasoline engine.” Popular Science

Jetpack designers today are learning from decades of trial and error.


Nothing captures the human imagination quite like the thrill of flying. Not the middle-seat-on-a-plane kind of flying, pressed between the lucky passengers who scored the window and aisle seats, passive-aggressively vying for elbow space, while watching with resignation as an airplane icon creeps toward its destination on a crude digital GPS map plastered to the seatback in front. But rather, the kind of flying that would enable us to break free of gravity’s yoke and the roads that hem us in—soaring solo into the third dimension with air rushing over our limbs. This kind of unfettered flight inspired Hugo Gernsback’s August 1928 cover of the iconic science fiction magazine, Amazing Stories, with fictional spaceman hero Buck Rogers sporting a backpack that boosted him into the air. Inventors have been chasing these kinds of lightweight jetpacks ever since.

“The appeal of the jetpack is that it’s something you can stow in a locker, pull out, and put on to go somewhere,” says John Hansman, a professor of aeronautics and astronautics at MIT. “But the versions we’ve seen so far don’t make a lot of sense. They are just stunts.” Despite the array of fantastical, but perhaps impractical (and dangerous) jetpack designs, Hansman adds, “It’s an interesting time. There are a number of technologies coming online that could give you the capability of a jetpack.”

For more than eight decades, Popular Science has been chronicling the effort to get jetpacks off the ground. Our first story, published in March 1940, featured Russian-born inventor George de Bothezat’s one-person helicopter—a simple frame with twin props spinning overhead, powered by “a lightweight gasoline engine.” The apparatus was controlled in the air by the pilot’s body, arm, and leg movements. De Bothezat died before he could actually build the invention. But his attempt was not entirely for the birds. In July 1945, we described the efforts of Boeing engineer Horace T. Pentecost, who improved upon de Bothezat’s design with a “flight stick” for steering. Pentecost rebranded the whirligig a “hoppicopter,” though the gizmo was only capable of staying in the air for short distances.

Popular Science began taking slow-to-evolve jetpacks more seriously in January 1952 when two pages of the magazine were devoted to the next generation of Pentecost’s hoppicopter: Gilbert Magill’s pinwheel. Magill kept the overhead rotors from the original design but upgraded the stick and added a seat and pilot safety gear (whew)—little more than a “crash helmet with a plastic face shield.” 

Pinwheel, January 1952, Popular Science

The first jet engine-like breakthrough arrived in our December 1958 feature, which described the US military initiative “Project Grasshopper” that sought to upgrade a secret jump rocket into a flying belt. Jump rockets were a “solid-fuel device” that could be strapped around a soldier’s waist to enable them to jump distances as wide as a 50-foot river or leap up into a second-story window. Project Grasshopper’s goal was to extend the jumper’s air time using nitrogen-compressed gas canisters that could be “snapped in place of exhausted ones in less than a minute.” 

Flying Belt, December 1958, Popular Science

Soon after, Bell Aerosystems engineer Wendell Moore made a big jetpack leap with the Rocket Belt (also funded by the US Army), a personal propulsion device spotlighted in a January 1966 rundown of James Bond gadgetry. Sean Connery strapped into Bell’s Rocket Belt for the opening escape scene in the 1965 Bond film Thunderball, rocketing away Buck Rogers-style from a villain’s chateau. The name Rocket Belt came from its rocket engine propulsion design: the engine runs on distilled hydrogen peroxide and nitrogen gas, generating high-pressure, superheated steam that thrusts the pilot skyward. Other rocket belt designs have shown some staying power, at least for big-event stunts like the opening ceremony of the 1984 Los Angeles Olympics. We covered them as recently as March 2006, in a profile of homemade rocket-belt builder Juan Lozano. Even in 2006 we remained skeptical about their ability to bestow true jetpack-flight freedom, with PopSci staff awarding Lozano’s rocket belt a reality meter score of 2 out of 10, meaning the chance of a commercial breakthrough was slim.

While momentum seemed to be rising with the rocket belt, jetpack development suddenly fell flat. Besides covering the occasional space-based jetpack (November 1971), there weren’t many new developments to report. Then in December 2008, New Zealand inventor Glenn Martin’s decades-long quest to build a ducted fan (non-jet), gasoline-powered jetpack paid off. Ducted fans rely on rotary-style propellers contained in a canister to direct thrust. Martin’s jetpack was so noteworthy that he earned a spot in the magazine’s coveted list of top innovators. 

A couple of months later, in February 2009, we covered the strange story of Swiss pilot Yves Rossy, or Wingman, who had attached mini jet engines to a homemade wing, strapped himself in, and jumped out of a plane over the English Channel. While technically not a jetpack, Rossy’s wing did use true jet engines, foreshadowing what was on the horizon.

By the 2010s, jetpacks and other personal aircraft innovation reached a new height—hoverboards, flyboards, and water-powered jetpacks, not to mention countless drones, had taken over. Then, after a seven-decade journey from the solo-operated helicopter of the 1940s to the ducted fan of the 2000s, a true jet-engine jetpack finally emerged with a form factor and lift that would have raised Buck Rogers’s eyebrows: JetPack Aviation conducted the maiden flight of its wearable launcher around the Statue of Liberty in New York on November 3, 2015. All the while, Glenn Martin continued to improve upon the ducted-fan jetpack, finally attracting the kind of venture capital needed to expand his company, Martin Jetpack, and offer a consumer product.

By 2017, the promise of solo flight seemed so real—and potentially profitable—that Boeing sponsored GoFly, a $2 million competition to encourage personal aircraft innovation and bring such craft into the mainstream. The aerospace company saw promise in the technology’s growing momentum, including improvements in propulsion, lightweight materials, and control and stability systems.

In 2019, before the GoFly finals had concluded, we took another look at the jetpack reality meter and decided that, while progress had been made, the tech was still too noisy, too heavy, and too expensive. The GoFly judges seemed to agree: while they doled out some awards for innovation, no one won the contest. None of the entrants met the contest’s basic size constraints and flying-time parameters, among other requirements. John Hansman, who had been an initial advisor but dropped out, wasn’t surprised by the disappointing outcome.

According to Hansman, there are too many things that can go wrong with a Buck Rogers-style jetpack. He calls it an extreme version of a motorcycle, but even more dangerous because of altitude and speed. Plus, jet engines are not the most efficient way to get around on Earth. “Jetpacks work great in space,” Hansman notes, “where the best way to get propulsion is to push a gas.” Where there’s an atmosphere, however, “a jet engine doesn’t compete very well with a rotor propeller.” Jetpacks require continuous thrust to stay aloft—expending a lot of fuel in the process—whereas propellers leverage Bernoulli’s principle, or the difference in air pressure above and below the rotor.
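A back-of-the-envelope way to see Hansman’s point is classical momentum (actuator-disk) theory, which gives the minimum power needed to hover as a function of how large a column of air the craft pushes on. This is an idealized estimate, not a description of any particular design:

P_ideal = \sqrt{\frac{T^{3}}{2\,\rho\,A}}

Here T is the thrust (equal to the craft’s weight in hover), ρ is the air density, and A is the cross-sectional area of the airstream. For a 220-pound pilot-plus-pack (T ≈ 1,000 newtons), small jet nozzles with A ≈ 0.05 square meters demand on the order of 90 kilowatts just to hang in the air, while a 10-foot rotor disk (A ≈ 7 square meters) needs roughly 8 kilowatts—about a twelvefold saving, which is why rotors win wherever there’s an atmosphere to push on.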

But jetpack designers today are learning from decades of trial and error. Hansman believes the field is on the cusp of a fresh round of innovation in personal aircraft, but not in the classic jet-engine backpack form. “Nothing has changed substantially on the jet side,” notes Hansman—there hasn’t been enough development to convince him that jet engines offer the best design. Rather, he sees personal aircraft taking the form of rotor-based airbikes or hoverboards. “If you think about the technologies that have come along,” he says, “it’s battery technology in electric aircraft, distributed propulsion, and relatively inexpensive active control systems.” Drones, for instance, can have cheap active control systems that use sensors and software to stabilize them. Distributed propulsion, where multiple rotors work in coordination, has enabled the design of craft like hoverboards and, perhaps in the near future, air taxis and airbikes.

Hansman sees the market initially embracing recreational personal aircraft, similar to jet skis on the water. That’s because building a reliable, commuter-style vehicle requires a different level of engineering to handle the wear and tear of daily use. Plus, there’s the matter of navigating controlled airspace. Even though the Federal Aviation Administration (FAA) imposes few rules on ultralight aircraft (single occupant, less than 254 pounds empty, and a top speed of 55 knots, or about 63 mph) and doesn’t direct traffic in airspace below 400 feet, it does restrict airspace around airports and cities. Those restrictions will have to be modified to realistically support personal commuter aircraft.

Despite the remaining obstacles, jetpacks and other personal aircraft have come a long way and are closer to reality than they’ve ever been. For hardcore jetpack enthusiasts, there are a number of companies like JetPack Aviation, Martin Jetpack, Gravity Industries, and Maverick Aviation that have working products—that is, if you have cash to burn and you aren’t intimidated by the danger of these devices. For those seeking safer ways to experience the thrill of solo flight, it may not be long before a weekend excursion to the mountains comes with an airbike or hoverboard rental that will allow you to soar over treetops and lakes, 20 minutes at a time. But if your goal is to sweep past traffic-snarled streets and highways on your daily commute, gloating at earthbound motorists as you glide overhead, you’ll have to wait even longer—and unfortunately what you’re waiting for will likely not be found in the Buck Rogers aisle of a future flymart superstore.

The post When will we finally have jetpacks? appeared first on Popular Science.

The century-old dream of traveling by hovercraft is still alive https://www.popsci.com/technology/hovercars-history-transportation/ Wed, 10 Aug 2022 14:00:00 +0000 https://www.popsci.com/?p=461294
'Here come cars without wheels' appeared in the July 1959 issue of Popular Science. Popular Science

These wheelless air cars were all the rage in 1950s and '60s automotive design—and they might be making a comeback.


From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

When it came to futuristic cars, magazine cover artist Arthur Radebaugh dazzled millions with his imaginative visions of automotive transportation. From 1958 to 1962, in the comic strip “Closer Than We Think!,” Radebaugh depicted sci-fi-slick sedans and convertibles like the Flapwing Car, Sunray Sedan, and Quick-Change Color Car. Some of his designs have crossed the reality threshold, like the electric car and the self-driving car. But for the most part, his cars and other inventions never quite came to fruition, despite the optimistic title of the strip. There was, however, one seemingly far-fetched yet already prototyped idea that showed promise: a wheelless air car that he dubbed the Flying Carpet Car.

In July 1959, Popular Science senior editor Martin Mann was so enthralled by air cars, or air-cushion vehicles (ACVs), that he proclaimed, “they threaten to turn transportation inside-out, giving you a sports car, speedboat, half-ton truck and back-pack helicopter all rolled into one.” At the time, Ford Motor Company had just showcased a levitating vehicle prototype, the Ford Levacar Mach I. Ford vice president Andrew Kucher promoted the model, a wheelless car propelled on a cushion of air. In 1961, Popular Science followed up with another story about the future of family cars, penned by air-car inventor William Bertelsen. The story showcased a colorfully illustrated air car dubbed the Aeromobile, a type of ground-effect machine—or GEM—that rocketed through airtight tubes. Bertelsen was just one of many inventors chasing the hovercraft dream. British inventor Christopher Cockerell, whose 1952 patent earned him recognition as the hovercraft’s inventor, was behind the Saunders-Roe vessel that made its 1959 debut, gliding across the English Channel on a cushion of air.

Chris Fitzgerald remembers being fascinated by the Saunders-Roe vessel when it debuted in 1959, the first such commercial ACV journey. “I was in Australia, watching with my family on TV,” recalls Fitzgerald, now president of Neoteric Hovercraft. “I was always interested in flying.” But Fitzgerald had a lot of friends who became cadets and were killed while flying. A hovercraft could be a way of flying “with one foot on the ground,” he says. That’s how he ended up founding Neoteric Hovercraft in 1960 in Australia, which manufactured one of the first personal-sized hovercraft, the Neova One. Alongside Ford’s Levacar, Bertelsen’s Aeromobile, and Fitzgerald’s Neova One, the market was cluttered with new entrants, many based in the UK, including Air Bearings, Curtiss-Wright, Cushion Flight, Bartlett’s Flying Saucer, and one of the first American ACVs, Crowley’s Hydro-Air. The future of hovercraft seemed certain.

“There were a lot of different small companies trying to make hovercraft,” says Fitzgerald, who relocated the company to Indiana in 1976 to tap the US market. “But most of them failed—for lots of reasons.” 

[Related: What it would take for cars to actually fly]

Traditional aircraft get their lift from Bernoulli’s principle: the faster a fluid—in this case, air—moves, the more its pressure drops. The pressure difference between pockets of air above and below a plane’s wing or a helicopter or drone’s rotor is what enables lift and flight. Hovercraft, or air-cushion vehicles, are different from traditional aircraft because they get their lift from the pressure of air pushed against a surface—an aerodynamic phenomenon known as ground effect. This allows ACVs to glide anywhere from fractions of an inch to several feet off the ground.
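The statics of an air cushion are surprisingly gentle—a simplified sketch, not a design formula: the craft floats when the cushion’s gauge pressure times its footprint area equals its weight,

p_cushion = \frac{W}{A}

so a 600-pound craft riding on a 6-by-10-foot cushion (60 square feet) needs only about 10 pounds per square foot, or roughly 0.07 psi above ambient—far less pressure than a footstep exerts, which is why hovercraft can pass over thin ice and mud that would swallow a wheeled vehicle.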

Flying machines based on the ground-effect principle can be traced as far back as 1716, when Swedish scientist Emanuel Swedenborg sketched out one of the earliest known designs. In the late 19th and early 20th centuries, nautical engineers sought ways to use ground effect to reduce the drag on boat hulls by pumping pockets of air beneath them for lift, making the hull ride higher in the water. But it was the pressure of World Wars I and II that motivated militaries in the US and Europe to pursue high-speed, water-to-land hovercraft that could rapidly move seabound troops, equipment, and supplies onshore. This helped jump-start an experimental market for marine hovercraft as early as the 1930s, with the technology eventually moving onto land and into the automobile industry.

‘The fantastic future of travel: 1,500-mph Family Cars?’ appeared in the August 1961 issue of Popular Science. Popular Science

Despite the fevered pitch of innovation in the 1950s and ’60s, ACV technology presented obstacles that, to this day, have never been solved. In 2008, PopSci caught up with Bertelsen, who was still seeking investors and working out the kinks in his ACV nearly a half-century after the national debut of the GEM. Bertelsen claimed that his ACV was far more fuel efficient than a car and blamed “last century’s low fuel prices” for the lack of interest, but that claim has been difficult to substantiate. Some ACVs burn similar amounts of gas per hour of use while carrying similar loads as cars. Others burn far more.

According to Fitzgerald, ACVs “consume a fair amount of fuel.” He argues, however, that it’s unfair to compare ACVs with cars. “When an automobile runs down the road,” he explains, “it’s running on a million-dollar a mile track, which makes its fuel efficiency much better.” By design, ACVs traverse roadless and otherwise impassable terrain. When it comes to military hovercraft, Fitzgerald admits they are considerably less fuel efficient than their boat or truck counterparts. “There’s no point talking about efficiency when you’ve got to get a damn tank on the beach.” The same high operating-cost trend applied to commercial ferries: For instance, the hovercraft service that had traversed the English Channel since 1959 ceased operation in 2000.

How to keep the air cushion beneath the vehicle as it moves around has been a chief design challenge for ACVs, hindering both fuel efficiency and stability. Both flexible and rigid skirts have been used, which are supposed to keep the air trapped beneath the vehicle. But the skirts tend to wear rapidly. “So much of the technology is in the skirt,” says Fitzgerald. “It’s a compromise between how much air you’re going to pump through the hovercraft to lift it, and how much you want to sacrifice the skirt.” The more air, the more lift, which eases wear-and-tear on the skirt but requires more fuel. Plus, there’s a limit to how far you can lift an ACV before it becomes unstable. According to Fitzgerald, “that’s roughly one tenth the vehicle’s width.”
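The lift-versus-leakage tradeoff Fitzgerald describes can be made concrete with the classic plenum-chamber approximation. The sketch below is illustrative only—the craft dimensions are invented, and real skirts, fans, and losses are far messier—but it shows how the ideal fan power grows in direct proportion to the gap under the skirt:

import math

def hover_estimates(weight_n, length_m, width_m, gap_m, rho=1.2):
    """Toy plenum-chamber model of a hovercraft cushion.

    weight_n : craft weight in newtons
    gap_m    : clearance under the skirt (the hover height)
    Returns cushion pressure (Pa), leakage flow (m^3/s), ideal fan power (W).
    """
    area = length_m * width_m                  # cushion footprint
    p_cushion = weight_n / area                # pressure needed to float: p = W/A
    v_escape = math.sqrt(2 * p_cushion / rho)  # Bernoulli: speed of air escaping under the skirt
    perimeter = 2 * (length_m + width_m)
    q_leak = perimeter * gap_m * v_escape      # volumetric flow the fan must replace
    fan_power = p_cushion * q_leak             # ideal (lossless) fan power
    return p_cushion, q_leak, fan_power

# A small recreational craft: ~300 kg on a 3 m x 2 m footprint
weight = 300 * 9.81
for gap_cm in (1, 3, 6):
    p, q, power = hover_estimates(weight, 3.0, 2.0, gap_cm / 100)
    print(f"gap {gap_cm} cm: cushion {p:.0f} Pa, leakage {q:.1f} m^3/s, fan power {power/1000:.1f} kW")

# Fitzgerald's rule of thumb: stable hover height tops out near one tenth the width,
# so a 2-meter-wide craft maxes out around 0.2 m of lift.

Doubling the gap doubles the leakage flow and thus the fan power—the compromise between lift height and fuel that Fitzgerald describes.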

Then there’s the matter of steering. For those who had envisioned wheelless cars speeding down streets, it turns out that ACVs are challenging to maneuver and easily influenced by pressure changes caused by wind, weather, and passing vehicles. A highway filled with ACVs would be more like a game of bumper cars. Current highways are not optimally designed for such floating craft, says Fitzgerald. Their surfaces are rounded to facilitate water runoff, “but [for ACVs] they should be concave.” Many ideas have been proposed to try to control an ACV in a crowded roadway environment, Fitzgerald says. “All the obvious ones,” he notes, “like putting [small steering] wheels on it. But if you put wheels on it, you figure out pretty quickly that the best thing to do is throw the hovercraft away and just stick with [cars].” 

[Related: Why hasn’t Henry Ford’s ideal power grid become a reality?]

In the late 1960s, ACV designers also suggested tracked versions to fix the steering problems, similar to the version Radebaugh depicted in his Flying Carpet Car. The most notable tracked ACV project was France’s Aérotrain, which was intended to replace France’s aging railway network. But after miles of elevated concrete track had been built through the countryside north of Orléans, the project was abandoned in the mid-1970s. Technical problems with maintaining pressurization and noise concerns from the big fans—coupled with politics—doomed the high-profile project. Around the same time, the British tried their own version, called Tracked Hovercraft, that promised to shuttle passengers 330 miles from London to Edinburgh, reducing a more-than-six-hour car ride to less than 90 minutes. Technical difficulties and excessive costs put a stop to the project in 1973. In the late 1960s, the American aerospace company Grumman explored a similar tracked ACV project for the US Department of Transportation, but the project never gained approval.

Despite many ill-fated projects, ACV tech does actually exist today—finding success in less public-facing transportation. The market can be divided into what Fitzgerald calls “heavy industry” and “light industry.” Heavy industry is dominated by military use, primarily for amphibious troop movement and logistics. ACVs have also found a home in manufacturing and warehouses to move ultra-heavy loads. Plus, a few passenger hovercraft ferries remain in service, although most have been replaced with catamaran-style fast ferries, which cost less to operate. 

Light industry includes local emergency services that rely on specialized hovercraft, like Neoteric’s Rescue models or Griffon Hoverwork’s Search and Rescue ACV, for rescue missions in places where terrain is challenging. Plus, there’s the recreational market for individuals. 

Neoteric Hovercraft glides over ice.

After more than six decades, Fitzgerald remains optimistic about the future ACV market. “If you look at the total transportation system that we have,” he notes, “there’s a little spot in there where nothing works very well—thin ice, mud, fast flowing water, and shallow water.” ACVs could take off in these types of environments. Marshy areas, places where terrain is unstable, land that is prone to flooding, and regions that experience stretches of extreme heat, which can destabilize road surfaces, could also benefit from ACVs, he says. “A hovercraft will do things in those places that nothing else will do quite as well.” Hovercraft might also be a promising mode of transportation on a warmer planet with more frequent natural catastrophes: The “little spot” where ACVs alone excel may grow, says Fitzgerald, even if it’s just for rescue missions.

While a handful of companies manufacture hovercraft today, Neoteric Hovercraft may be the only US-based company to have remained in business continuously since the ACV’s heyday in the 1950s and ’60s. Fitzgerald says his ability to adapt to market demand, while staying true to his mission of manufacturing personal-sized hovercraft, has kept his business alive. “I’ve been trying lots of different models to find out what the market wants,” he says. “Right now, our biggest market is individuals.”

There’s no need to wait if you’ve been holding out for a Radebaugh-style wheelless car that can fly like a plane, float like a boat, or drive like a car across any surface—wet, dry, marshy, or frozen. You can buy ACVs out on the market that cost roughly the same as a car. But you’ll still want to save city or highway driving for a gas-powered or electric car.

The post The century-old dream of traveling by hovercraft is still alive appeared first on Popular Science.

Why hasn’t Henry Ford’s ideal power grid become a reality? https://www.popsci.com/environment/henry-ford-how-power-will-set-men-free/ Wed, 27 Jul 2022 14:00:00 +0000 https://www.popsci.com/?p=458253
A century ago Henry Ford called for sustainable living and an end to coal. Popular Science

The industrialist's dream of agricultural-industrial micro-grids did not turn out the way he imagined.


Scientists knew that carbon emissions could alter the climate long before global warming and rising seas began to afflict our planet. Alarms were sounded by many, among them industrialist Henry Ford. In a July 1922 essay for Popular Science, “How Power Will Set Men Free,” Ford was already promoting an alternative electric-power vision for America. His advocacy for clean power and an end to coal would kick off a debate that has simmered for more than a hundred years. Some of his power predictions and proposals were not quite on the mark, but his vision is worth assessing.

Ford believed cities were centers of opportunity and that the electricity powering them shouldn’t come from coal. “Coal,” he wrote, “is just about the most inefficient and expensive fuel there is.” So, in what may be one of the earliest industrial-age appeals for sustainable living, Ford proposed an alternative: agricultural-industrial communities powered by locally sourced clean energy, or mini-grids. 

Ford’s legacy as a writer is checkered: at best prophetic, at worst toxic. Along with several books—including a few about his own accomplishments and ideas—the Ford Motor Company founder published some of the most widely circulated anti-Semitic screeds. He apologized in 1927, but by then much damage was done. It is unfortunate that in 1922, the propagandist and the industrial futurist Popular Science sought to feature were the same man.

In the 1920s, at a time when less than half of American homes had been equipped with electricity and electrical utilities were a jumble of small, inefficient distribution systems, Ford sensed the coming consolidation of centralized electrical grids. “Centralization of power has caused centralization of industry. But this cannot be expanded indefinitely,” Ford wrote. Instead, he envisioned an America filled with agricultural-industrial communities sprinkled along river basins “like jewels on a string,” deriving their power from water and their food from surrounding local farms. 

“What Ford described,” notes Paulina Jaramillo, codirector of the Green Design Institute at Carnegie Mellon University, “became the suburbs we developed in the middle of the last century, except the suburbs developed close to cities, not farms, and have massive negative externalities.” This is otherwise known as sprawl—paving and building over open spaces like forests and fields, and driving up carbon emissions with more road transportation.

Even though suburbia didn’t unfold as successfully as Ford had envisioned (as a carmaker, he benefited nonetheless), his vision was not new in 1922. Ebenezer Howard, a London-based stenographer for Parliament and self-described urban planner, had been promoting a similar idea for decades. In his book Garden Cities of To-morrow (1898), Howard described cities that would be confined in size, offer ample open space and parks, and include access to agriculture. In the US, the City Beautiful movement mirrored Howard’s ideas, thriving for thirty years before Ford’s Popular Science editorial.

Ford’s hydropower vision was also not new. As early as the 1880s, small hydroelectric power plants were being installed to light up Midwest factories. Ford recognized that all cities, large or small, would need electric power to prosper. “Power,” he noted, “is the key to tomorrow.” He believed that clean power generated locally would be preferable to burning coal in a central power plant and ferrying the volts over large swaths of countryside.

Jaramillo agrees that renewable-energy “mini grids” like those described by Ford can “improve resilience and reliability” of power. But she’s more sanguine about America’s centralized grid system. “It’s a marvel of engineering,” she notes, calling it “probably one of the coolest engineering projects of the 20th century.” That’s because it can “instantaneously match demand and supply of electricity, with very limited storage.” In fact, in 2000, the National Academy of Engineering cited the US electric grid as the greatest engineering achievement of the 20th century. Jaramillo concedes, however, that centralized grids face considerable challenges from the effects of climate change, which will escalate demand while impacting supply and distribution. She envisions reinforcing central grids with renewable-energy “grid-connected mini grids” that can transact with a centralized grid but also operate independently.

In 1922, it would have been impossible for Ford to anticipate just how much power America would eventually require. “America’s rivers,” Ford believed, “offer enough power to turn every wheel, heat every room, and light every building and street in America.” In 2022 terms, he was off by about a factor of ten. According to the US Energy Information Administration, US hydroelectricity maxed out in the 1990s and now generates less than 10 percent of our total energy, despite the more than 1,400 hydropower plants parked along the nation’s riverbanks. In contrast, coal still accounts for more electricity generation in the US than all renewable energy sources combined, albeit just barely (22 versus 20 percent). According to a 2021 report by the nonprofit Environment America Research and Policy Center and Frontier Group, while hydropower output has remained stagnant, wind, solar, and geothermal grew from half a percent of US output in 2001 to 12 percent two decades later and are poised to challenge coal’s number-two rank in the next few years (natural gas is number one).

Johanna Mathieu, associate professor of electrical engineering and computer science at the University of Michigan and a member of the Michigan Power and Energy Lab, sees Ford’s vision as “a piece of the puzzle.” Like Jaramillo, she envisions interconnected, renewable-powered micro-grids that can benefit from energy production anywhere in the system but can be isolated and operated independently when the system is experiencing problems. Mathieu’s work centers on demand-side technologies, or what she calls “grid edges.” If homes and appliances are made aware of their power network, they can cooperate with one another to optimize energy use and “balance the mismatch between supply and demand.” For instance, her team is working on a demonstration project in Texas that remotely coordinates the air conditioning of 100 houses, using sensorized thermostats, to keep power demand balanced. “We’re controlling people’s air conditioners,” she explains, “switching them on and off at slightly different times, but making sure that houses are still within the existing temperature range that they would normally operate in, so nobody should notice.” Coordinated across enough homes, the energy such a scheme can shift amounts to a virtual power plant.
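To make the idea concrete, here is a minimal simulation sketch of that kind of coordination—not Mathieu’s actual system; the thermal drift rates, comfort band, and compressor cap are invented placeholders. A coordinator limits how many compressors run at once, prioritizing the warmest houses, while each home’s thermostat can override the coordinator to stay inside its normal comfort band:

import random

N_HOUSES, SETPOINT, BAND = 100, 24.0, 0.5   # comfort band: 23.5-24.5 °C
MAX_ON = 40                                 # cap on simultaneous compressors

temps = [SETPOINT + random.uniform(-BAND, BAND) for _ in range(N_HOUSES)]
ac_on = [False] * N_HOUSES

for minute in range(60):
    # Houses at the edges of the band override the coordinator.
    must_on  = {i for i, t in enumerate(temps) if t >= SETPOINT + BAND}
    must_off = {i for i, t in enumerate(temps) if t <= SETPOINT - BAND}
    # Fill the remaining compressor slots with the warmest eligible houses.
    slots = MAX_ON - len(must_on)
    eligible = sorted((i for i in range(N_HOUSES) if i not in must_on | must_off),
                      key=lambda i: temps[i], reverse=True)
    chosen = must_on | set(eligible[:max(slots, 0)])
    for i in range(N_HOUSES):
        ac_on[i] = i in chosen
        temps[i] += -0.05 if ac_on[i] else 0.03  # toy thermal drift per minute
    if minute % 15 == 0:
        print(f"minute {minute:2d}: {sum(ac_on):3d} compressors on, "
              f"temps {min(temps):.2f}-{max(temps):.2f} °C")

The cap is what flattens the aggregate load; the override is what keeps every house within the range it would normally hold, which is why occupants shouldn’t notice.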

This energy-intensity solution could be called a “soft path” to mitigating carbon emissions. In a landmark 1976 article for the Council on Foreign Relations called “Energy Strategy: The Road Not Taken?,” physicist Amory Lovins argued that the US would soon have to choose between two energy paths with profound implications for Earth’s future. The “hard path” would double down on business as usual, supplying energy with “hard technologies”—more coal-fired plants and large-scale, capital-intensive, and environmentally hazardous solutions. The “soft path” would focus on stretching how far a unit of energy could go in powering homes and businesses. To increase supply, the soft path would transition to sources and technologies that are “flexible, resilient, sustainable, and benign.”

Lovins did not mince words. He made clear that if the hard path were chosen, it would “make the doubling of atmospheric carbon dioxide concentration early in the next century virtually unavoidable, with the prospect then or soon thereafter of substantial and perhaps irreversible changes in global climate.”

[Related: “Climate change is blowing our predictions out of the water, says the IPCC”]

By countless accounts, we are reckoning with those changes now. Since that 1976 article, America has been on Lovins’s hard path, relying mainly on centralized coal and gas for power. Forced by climate change to rapidly replace fossil fuels with renewable energy and to harden our electric infrastructure, can we transform fast enough to become as sustainable as Ford envisioned a century ago?

Mathieu believes the technology already exists to at least transition fully to renewables. She cites regulatory, policy, and legal obstacles as some of the biggest barriers. “But if we chose to do it,” she adds, “I think we could.”

The post Why hasn’t Henry Ford’s ideal power grid become a reality? appeared first on Popular Science.

Will baseball ever replace umpires with robots? https://www.popsci.com/technology/history-of-robotic-baseball-umpires/ Wed, 22 Jun 2022 13:00:00 +0000 https://www.popsci.com/?p=451396
'New Inventions: Novel Devices Provide Thrills for Players and Spectators, And Give Aid in Practice' appeared in the June 1939 issue of Popular Science. Popular Science

The sport has long experimented with robotic umpires to take the guesswork out of calls.


From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

While the quest for baseball’s inventor reads like a whodunnit, less murky is baseball’s sport genealogy: It is descended from rounders (and to some extent cricket) and became a professional sport in the mid-19th century. Among the various rules that distinguish baseball from rounders is the role of the umpire. In baseball, the home-plate umpire calls every pitch in the game—ball or strike. In other sports, the umpire only makes calls as needed—out-of-bounds, fouls, close plays—but the home-plate ump is mandated by the rules of the game to call every pitch. Because there are no lines painted around a strike zone, the home-plate ump’s play-calling is so conspicuous, and sometimes controversial, that efforts have been underway for more than a century to improve its accuracy through automation.

In a June 1939 roundup of new sports inventions, Popular Science included an “electrical umpire,” a device that used light beams to detect a ball passing through the strike zone and take “the guesswork out of calling ‘balls’ and ‘strikes.’” The 1939 version of a robotic home-plate umpire may have been among the first to use “electric eyes,” but it wasn’t the first machine to be used on a baseball diamond. A July 1916 Popular Science story described a lower-tech automated home-plate umpire designed to eliminate the guesswork in baseball training camps, little leagues, and carnivals. The 1916 device had a strike zone-sized opening cut into a sheet of canvas and was backstopped by a bowling-alley-style ball-return register.

“The whole premise of officiating is the balance of art and science,” says Brenda Hilton, a senior director of officiating for the Big Ten Conference and founder of Officially Human, an organization dedicated to improving the treatment of sports officials, especially at the high school and lower level. “Do people really want to play or watch when there are robots [officiating]?” We’re not likely to see C-3PO dressed up in stripes anytime soon, if ever, but Hilton’s question applies equally to the less eye-grabbing automation that’s already here as well as what’s on the horizon. 

Using technology to improve the performance of sports officials is not new. Instant replay has been around since 1963, when CBS TV director Tony Verna introduced it during that year’s annual Army-Navy college football showdown. The NFL began experimenting with instant replay as early as 1976 but took another decade to fully implement it; the NHL followed in 1991, then the NBA in 2002. In 2008, Major League Baseball became the last of the four major American sports leagues to warm up to instant replay. But soon, it may become the first to flip the relationship between official and machine, allowing the technology to make the initial call.

In 2022, Major League Baseball debuted “robo-umps,” an automated ball-strike system (ABS), in its Triple-A minor league, the last stop before the majors. In the new officiating arrangement, which is designed to be collaborative, the home-plate umpire still posts up behind the catcher but is joined by a black box equipped with pitch-tracking radar. In addition to the standard protective gear, the human ump’s accessories include a smartphone and an earpiece to receive transmissions from the ABS. Instead of making the call, the ump merely announces what the system “sees,” giving voice to the ABS and intervening only when there’s an obvious error, like a pitch that hops across the plate. The goal of the ABS is to call pitches more accurately and provide a consistent strike zone, one that pitchers and hitters can rely on from one game to the next and one season to the next. Beyond its use in minor league trials and training camps, MLB has not announced any future rollouts.
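Stripped to its core, an automated ball-strike call is a geometry test: did the tracked ball pass through the strike zone as it crossed the front of the plate? Here is a minimal sketch in Python—the zone heights are invented placeholders (a real system scales them to each batter’s stance and accounts for the ball’s radius), though the plate’s 17-inch width is regulation:

def call_pitch(x_m, z_m, half_width=0.216, zone_bottom=0.46, zone_top=1.05):
    """Classify one pitch from its tracked position at the front of home plate.

    x_m : horizontal offset from the plate's centerline, in meters
    z_m : height above the ground, in meters
    half_width : half of the 17-inch (0.432 m) plate width
    zone_bottom, zone_top : illustrative fixed bounds; a real system fits
    these to the individual batter rather than using constants.
    """
    in_zone = abs(x_m) <= half_width and zone_bottom <= z_m <= zone_top
    return "strike" if in_zone else "ball"

# The verdict is relayed to the umpire's earpiece; the human announces it
# and can override an obvious tracking error.
print(call_pitch(0.10, 0.80))  # strike
print(call_pitch(0.30, 0.80))  # ball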

[Related: Major League Baseball is nearing the era of the robot umpire]

But even if the decades-old vision of automated home-plate umpires may finally be here, it wouldn’t change the emotional investment of the players, coaches, and fans. After all, the umpire is more than just an arbiter of rules in a baseball game. “Who would the fans yell at?” Hilton asks. She’s only partly joking. 

In all sports, umpires and referees hold a special place in the hearts and minds of players, coaches, and fans. Feelings toward these key figures on the field and court are mostly negative, a trend that has been on the rise. According to a 2019 survey conducted by Officially Human, 59 percent of officials who work games at the high school level and below don’t feel respected, and 60 percent ranked verbal abuse as the top reason they quit. A similar survey conducted by the National Association of Sports Officials in 2017, which included professional sports officials, reached similar conclusions: 48 percent of male officials have at times feared for their safety. The problem has become so acute at the high school level that the National Federation of State High School Associations estimates that “50,000 individuals have discontinued their service as high school officials,” according to its website, citing the unsportsmanlike behavior of players, coaches, parents, and fans as one of the primary reasons.

By adding technology, would it be possible to cool down heated emotions, reduce acrimony, or elevate respect for officials? Hilton thinks that with too much technology, “games may become unwatchable.” She admits her bias for human officials, but adds, “I think that fans would become more disengaged at the pro level if they went all electronic.” In a recent Wall Street Journal editorial, sports journalist James Hirsch seemed to agree, writing that instant replay “robs games of their drama.” 

Every sport is, after all, part performing art—a production of humans on a stage, with all their emotions, inconsistencies, delights, disappointments, thrills, and surprises. Officials play integral parts in every performance, sometimes by assuming attention-grabbing roles—making critical calls that change outcomes—but mostly by playing mundane parts to keep the show on track: throwing the ball up, dropping the puck, calling out of bounds, and offering a steady presence when tempers flare on the field. 

[Related: Radium was once cast as an elixir of youth. Are today’s ideas any better?]

Still, technology seems to have carved out a relatively permanent role in those performances. In a 2021 Morning Consult survey, 60 percent of sports fans believe that instant replay should be used “as much as possible to ensure the accuracy of calls” while another 30 percent believe it should be used on a limited basis “to maintain the flow of a game.” The remaining fans didn’t know or had no opinion. None said no to instant replay. 

Sports fans are already seeing their wish for more technology come true. For instance, the April 2022 debut of the US Football League featured drones offering viewers more camera angles and replays. In 2021, the NFL added Hawk-Eye’s Synchronized Multi-Angle Replay Technology, or SMART, to its arsenal of instant-replay cameras. Hawk-Eye is best known for its role in tennis but is also used as goal-line technology in international soccer.

Yet, with all that extra technology, what’s become clear is that human officials are pretty darn good at their jobs, especially at the professional level. According to CBS Sports, in the 2020 NFL season there were 40,032 plays, of which only 364 were reviewed—less than 1 percent. Of the reviewed plays, about half were reversed, a little higher than in previous seasons. Viewed through the officials’ lens, human referees were right 99.5 percent of the time.

Judging by the pace of instant-replay adoption, not to mention that “electric umpires” have been an option since at least 1939, it’s not likely that Major League Baseball will implement robo-umps in the majors anytime soon. But that won’t stop sports innovators from developing new systems or tech enthusiasts from advocating for automation. “There’s a great balance somewhere,” Hilton says. “We just have to figure out what that balance is.”

The post Will baseball ever replace umpires with robots? appeared first on Popular Science.

Radium was once cast as an elixir of youth. Are today’s ideas any better? https://www.popsci.com/science/fountain-of-youth-real/ Tue, 07 Jun 2022 11:00:00 +0000 https://www.popsci.com/?p=448403
'Will radium restore youth?' appeared in the June 1923 issue of Popular Science. Popular Science

In 1923, Popular Science reported that people were drinking radium-infused water in an attempt to stay young. How far have we come to a real (and non-radioactive) 'cure' for aging?


From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

From the time Marie Curie and her husband Pierre discovered radium in 1898, it was understood that the new element was no ordinary metal. When the Curies finally isolated pure radium from pitchblende (a mineral ore) in 1902, they determined that the substance was a million times more radioactive than uranium. At the time, X-rays were already being used in medicine to image bones and even treat cancer tumors, a procedure first attempted in 1899 by Tage Sjogren, a Swedish doctor. Given radium’s extraordinary radioactivity and unnatural blue glow, the element was soon touted as a cure for everything including cancer, blindness, and baldness, even though radioactivity had only been used to treat malignant tumors. As Popular Science reported in June 1923, it was even believed that a daily glassful of radium-infused water would restore youth and extend life, making it the latest in a long line of miraculous elixirs.

By May 1925 The New York Times was among the first to report cancer cases linked to radium. Two years later, five terminally ill women, who became known as the Radium Girls, sued the United States Radium Corporation where they had worked, hand-painting various objects with the company’s poisonous pigment. As more evidence emerged of radium’s carcinogenic effects, its cure-all reputation quickly faded, although it would take another half-century before the last of the luminous-paint processing plants was shut down. Radium is still used today in nuclear medicine to treat cancer patients, and in industrial radiography to X-ray building materials for structural defects—but its baseless status as a life-extending elixir was short-lived. 

And yet, radium’s downfall did not end the true quest for immortality: Our yearning for eternal youth continues to inspire a staggering range of scientifically dubious products and services. 

Since the early days of civilization, when Sumerians etched one of the first accounts of a mortal longing for eternal life into the cuneiform tablets of the Epic of Gilgamesh, humans have sought a miracle cure to defy aging and defer death. Five thousand years ago in ancient Egypt, priests practiced corpse preservation so a person’s spirit could live on in its mummified host. Fortunately, anti-aging biotech has advanced beyond mummification and medieval quests for the fountain of youth, philosopher’s stone, and holy grail, as well as the perverse practices of sipping metal-based elixirs, bathing in the blood of virgins, and, as recently as the early 20th century, downing radium-infused water. But what hasn’t changed is that the pursuit of eternal youth has largely been sponsored by humankind’s wealthiest citizens, from Chinese emperors to Silicon Valley entrepreneurs.

“We’ve all long recognized that aging is the greatest risk factor for the overwhelming majority of chronic diseases, whether it be Alzheimer’s disease, cancer, osteoporosis, cardiovascular diseases, or diabetes,” says Nathan LeBrasseur, co-director of The Paul F. Glenn Center for Biology of Aging Research at the Mayo Clinic in Minnesota. “But we’ve really kind of said, well, there’s nothing we can do about senescence [cellular aging], so let’s move on to more prevalent risk factors that we think we can modify, like blood pressure or high lipids.” In the last few decades, however, remarkable breakthroughs in aging research have kindled interest and opened the funding spigots. Fortunately, the latest efforts have been grounded in more established science—and scientific methods—than was available in radium’s heyday. 

In the late 19th century, just as scientists began zeroing in on germs with microscopes, evolutionary biologist August Weismann delivered a lecture on cellular aging, or senescence. “The Duration of Life” (1881) detailed his theory that cells had replication limits, which explained why the ability to heal diminished with age. It would take 80 years to confirm Weismann’s theory. In 1961, biologists Leonard Hayflick and Paul Moorhead observed and documented the finite lifespan of human cells. Another three decades later, in 1993, Cynthia Kenyon, a geneticist and biochemistry professor at the University of California, San Francisco, discovered how a specific genetic mutation in worms could double their lifespans. Kenyon’s discovery gave new direction and hope to the search for eternal youth, and wealthy tech entrepreneurs were eager to fund the latest quest: figuring out how to halt aging at the cellular level. (Kenyon is now vice president of Calico Research Labs, an Alphabet subsidiary.)

“We’ve made such remarkable progress in understanding the fundamental biology of aging,” says LeBrasseur. “We’re at a new era in science and medicine, of not just asking the question, ‘what is it about aging that makes us at risk for all these conditions?’ But also ‘is there something we can do about it? Can we intervene?’”

In modern aging research labs, like LeBrasseur’s, the focus is to tease apart the molecular mechanisms of senescence and develop tools and techniques to identify and measure changes in cells. The ultimate goal is to discover how to halt or reverse the changes at a cellular level.

But the focus on the molecular mechanisms of aging is not new. In his 1940 book Organisers and Genes, theoretical biologist Conrad Waddington offered a metaphor for a cell’s life cycle—how it grows from an embryonic state into something specific. In Waddington’s epigenetic landscape, a cell starts out in its unformed state at the top of a mountain with the potential to roll downhill in any direction. After encountering a series of forks, the cell lands in a valley, which represents the tissue it becomes, like a skin cell or a neuron. According to Waddington, who first proposed the theory of epigenetics, these external mechanisms of inheritance—above and beyond standard genetics, such as chemical or environmental factors—lead the cell to roll one way or another when it encounters a fork. Once the cell lands in its valley, he held, it remains there until it dies—so, once a skin cell, always a skin cell. Waddington viewed cellular aging as a one-way journey, which turns out to be not so accurate.

“We know now that even cells of different types keep changing as they age,” says Morgan Levine, who until recently led her own aging lab at the Yale School of Medicine, but is now a founding principal investigator at Altos Labs, a lavishly funded startup. “The [Waddington] landscape keeps going. And the new exciting thing is reprogramming, which shows us that you can push the ball back the other way.”

Researchers like Levine continue to discover new epigenetic mechanisms that can be used to not only determine a cell’s age (epigenetic or biological clock) but also challenge Waddington’s premise that a cell’s life is one way. Cellular reprogramming is an idea first attempted in the 1980s and later advanced by Nobel Prize recipient Shinya Yamanaka, who discovered how to revert mature, specialized cells back to their embryonic, or pluripotent, state, enabling them to start fresh and regrow, for instance, into new tissue like liver cells or teeth.

“I like to think of the epigenome as the operating system of a cell,” Levine explains. “So more or less all the cells in your body have the same DNA or genome. But what makes the skin cell different from a brain cell is the epigenome. It tells a cell which part of the DNA it should use that’s specific to it.” In sum, all cells start out as embryonic or stem cells, but what determines a cell’s end state is the epigenome.

“There’s been a ton of work done with cells in a dish,” Levine adds, including taking skin cells from patients with Alzheimer’s disease, converting them back to stem cells, and then into neurons. For some cells, “you don’t always have to go back to the embryonic stem cell, you can just convert directly to a different cell type,” Levine says. But she also notes that what works in a dish is vastly different from what works in living specimens. While scientists have experimented with reprogramming cells in vivo in lab animals with limited success, the ramifications are not well understood.  “The problem is when you push the cells back too far [in their life cycle], they don’t know what they’re supposed to be,” says Levine. “And then they turn into all sorts of nasty things like teratoma tumors.” Still, she’s hopeful that many of the problems with reprogramming may be sorted out in the next decade. Levine doesn’t envision people drinking cellular-reprogramming cocktails to stave off aging—at least not in the foreseeable future—but she does see early-adopter applications for high-risk patients who, let’s say, can regrow their organs instead of requiring transplants.

While the quest for immortality is still funded largely by the richest of humans, it has morphed from the pursuit of mythical objects, miraculous elements, and mystical rituals to big business, raising billions to fund exploratory research. Besides Calico and Altos Labs (funded by Russian-born billionaire Yuri Milner and others), there’s Life Biosciences, AgeX Therapeutics, Turn Biotechnologies, Unity Biotechnology, BioAge Labs, and many more, all founded in the last decade. While there’s considerable hype for these experimental technologies, any actual products and services will have to be approved by regulatory agencies like the Food and Drug Administration, which did not exist when radium was being promoted as a cure-all in the US.

While we’re working on landing long-term moon shots like editing genomes with CRISPR and reprogramming epigenomes to halt or reverse aging, LeBrasseur sees near-term possibilities in repurposing existing drugs to prop up senescent cells. When a cell gets old and damaged, it has one of three choices: to succumb, in which case it gets flushed from the system; to repair itself because the damage is not so bad; or to stop replicating and hang around as a zombie cell. “Not only do [zombie cells] not function properly,” explains LeBrasseur, “but they secrete a host of very toxic molecules” known as senescence associated secretory phenotype, or SASP. Those toxic molecules trigger inflammation, the precursor to many diseases. 

It turns out there are drugs, originally targeted at other diseases, that are already in anti-aging trials because they’ve shown potential to impact cell biology at a fundamental level, effectively staving off senescence. Although rapamycin was originally designed to suppress the immune system in organ transplant patients, and metformin to assist diabetes patients, both have shown anti-aging promise. “When you start looking at data from an epidemiological lens, you recognize that these individuals [like diabetes patients taking metformin] often have less cardiovascular disease,” notes LeBrasseur. “They also have lower incidence of cancer, and there’s some evidence that they may even have lower incidence of Alzheimer’s disease.” Even statins (for cardiovascular disease) and SGLT2 inhibitors (another diabetes drug) are being explored for a possible role in anti-aging. Of course, senescence is not all bad. It plays an important role, for example, as a protective mechanism against the development of malignant tumors—so tampering with it could have its downsides. “Biology is so smart that we’ve got to stay humble, right?” says LeBrasseur.

Among other things, the Radium Girls taught us to avoid the hype and promise of new and unproven technologies before the pros and cons are well understood. We’ve already waited millennia for a miracle elixir, making some horrific choices along the way, including drinking radioactive water as recently as a century ago. The 21st century offers its own share of anti-aging quackery, including unregulated cosmetics, questionable surgical procedures, and unproven dietary supplements. While we may be closer than we’ve ever been in human history to real solutions for the downsides of aging, there are still significant hurdles to overcome before we can reliably restore youth. It will take years or possibly decades of research, followed by extensive clinical trials, before today’s anti-aging research pays dividends—and even then it’s not likely to come in the form of a cure-all cocktail capable of bestowing immortality. In the meantime, LeBrasseur’s advice is simple for those who can afford it: “You don’t have to wait for a miracle cure. Lifestyle choices like physical activity, nutritional habits, and sleep play a powerful role on our trajectories of aging. You can be very proactive today about how well you age.” Unfortunately, not everyone has the means to follow LeBrasseur’s medical wisdom. But the wealthiest among us—including those funding immortality’s quest—most definitely do.

The post Radium was once cast as an elixir of youth. Are today’s ideas any better? appeared first on Popular Science.

From the archives: The discovery of DNA’s structure explained how life ‘knows’ what to do https://www.popsci.com/science/dna-discovery/ Tue, 31 May 2022 11:00:00 +0000 https://www.popsci.com/?p=445633
“DNA–It calls the signals for life” by Wallace Cloud appeared in the May 1963 issue of Popular Science. Popular Science

In 1963, Popular Science reported on the Nobel Prize-winning discovery, and the woman who was left out of the accolades.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

When Popular Science editor Wallace Cloud covered the 1962 Nobel Prize honoring the discovery of DNA’s structure, James Watson, one of the winners, told Cloud that “the discovery was not the work of an institute full of technicians, but the product of four minds.” Yet the Nobel Foundation awarded only three scientists for the discovery: James Watson, Francis Crick, and Maurice Wilkins.

Since 1869, scientists had known about DNA, but its structure remained elusive until 1953. Understanding its shape would help explain how the life-generating molecule worked. It was Rosalind Franklin, working alongside Maurice Wilkins at King’s College, who captured the X-ray images of the molecule that Watson and Crick would later decode and describe in their Nobel-winning paper. Watson told Cloud in an interview for his May 1963 Popular Science story that Franklin “should have shared” the Nobel Prize.

In DNA discovery lore, it was Photograph 51—taken in May 1952—that revealed so much about DNA’s helical structure. Five decades later, award-winning writer and biographer Brenda Maddox detailed Franklin’s astonishing contributions to DNA research in Rosalind Franklin: The Dark Lady of DNA. And American playwright Anna Ziegler wrote Photograph 51, a play performed in London’s West End in 2015, to chronicle the gender bias exposed by Franklin’s Nobel Prize case.

Of the 975 laureates selected since 1895—when Alfred Nobel, a Swedish chemist best known for inventing dynamite, willed most of his fortune to an annual prize in the fields of physics, chemistry, medicine, literature, and peace (economics was added in 1968)—only 58 have been women. It doesn’t take a Nobel Prize to see that the stats don’t add up. In Franklin’s case, the Nobel Foundation says it no longer awards prizes posthumously (Franklin died in 1958). It’s been nearly seven decades since DNA’s double helix was decoded, and six decades since the Nobel Foundation awarded three scientists for the work of four. The stats still don’t add up.

“DNA–It calls the signals for life” (Wallace Cloud, May 1963)

How three men got the Nobel Prize for solving a jigsaw puzzle: assembling the pieces of a molecule that made you what you are—and keeps you ticking

Last December an American biologist and two English physicists received formal recognition, in the shape of a Nobel Prize, for a discovery made 10 years ago—a discovery that started a chain reaction in biology.

They determined the structure of a molecule that provides answers to questions scientists have been asking for over a century:

  • How does a heart muscle “know” how to beat?
  • How does a brain cell “know” how to play its role in thinking and feeling?
  • How do the cells of the body “know” how to grow, to reproduce, to heal wounds, to fight off disease?
  • How do infectious bacteria “know” what diseases to cause?
  • How do single fertilized egg cells, from which most of nature’s creatures begin, “know” how to become plants, animals, people?
  • If one such cell is to multiply and form a human being, how does it “know” how to produce a potential Einstein or a Marilyn Monroe?

The stuff that genes are made of

Sounds like a lot to expect of a molecule—even one with a jaw-breaking name like deoxyribonucleic acid (known more familiarly as DNA). But it’s scientific fact that DNA is what genes are made of. DNA molecules supply the basic instructions that direct the life processes of all living things (except a few viruses). The DNA molecule contains information in a chemical code—the code of life.

The effects of discovery of the structure of DNA have been called “a revolution far greater in its potential significance than the atomic or hydrogen bomb.” Professor Arne Tiselius, President of the Nobel Foundation, has said that it “will lead to methods of tampering with life, of creating new diseases, of controlling minds, of influencing heredity—even, perhaps, in certain desired directions.”

I asked the American member of the Nobel Prize trio, Dr. James D. Watson, about these speculations in his laboratory at Harvard. It was a few weeks before he flew to Stockholm to receive the award along with Dr. Francis H. C. Crick of Cambridge University and Dr. Maurice H. F. Wilkins of King’s College, London.

The boyish 34-year-old Nobelman, who did the prize-winning research in England when he was only 25 (he entered college at 15, had been a Quiz Kid before that, in the days of radio), refused to endorse the wilder predictions about the future of DNA research. He said, “The average scientist busy with research looks ahead anywhere from an hour to two years, not more.”

Conceding that discovery of the structure of DNA was as important as the working out of atomic structure that led to the atom bomb, he added, “It will have a very profound effect, slowly, on medicine. Doctors will stop doing silly things. Our knowledge of DNA won’t cure disease, but it gives you a new approach—tells you how to look at a disease.”

Dr. Watson went on to explain just what he and his co-workers discovered during those days of inspired brainwork in England, back in 1953, and how they did it.

The discovery was not the work of an institute full of technicians, he said, but the product of four minds: He and Crick did the theoretical work, interpreting cryptic X-ray diffraction photos made by Wilkins, who had as collaborator an English woman scientist, Dr. Rosalind Franklin. She died in 1958. She “should have shared” the Nobel Prize, said Dr. Watson.

Picking up the thread

DNA was not a newly discovered substance. It had been isolated in 1869, and by 1944 geneticists were sure it was the substance of the genes—the sites of hereditary information in the chromosomes. Then they started asking, “How does it work?” That’s the question Watson and his co-Nobelists answered.

They knew DNA as one of the most complex of the “giant molecules” known to man. It was believed to have a long, chainlike structure consisting of repeating groups of atoms, with side groups sticking out at regular intervals.

The shape of the DNA molecule was important. In the cell, many of the larger molecules work together like machine parts, and their mechanical properties are as important as their chemical activity. However, even the electron microscope, through which it is possible to see some of the biggest giant molecules, shows DNA only as a thread, without detail.

One way of “looking” at molecules is to take them apart by chemical treatments that make small molecules out of big ones. In the case of DNA, the pieces—six kinds of submolecular units—had been identified. Now it was necessary to figure out how the jigsaw puzzle fitted together.

Another way is to use X rays, but in a special manner. A technique called X-ray diffraction lets physicists take a peculiar kind of look inside certain kinds of molecules—those that form crystals.

DNA extracted from cells and purified is a jelly-like material. Not much resemblance to a crystal, you might think. But when it’s pulled like taffy and dried under the right tension, it forms fibers that do have a complicated crystalline structure.

One of the Nobel Prize winners, Dr. Wilkins, is a physicist who worked in this country on the Manhattan Project. After World War II, back in England, he got interested in biological problems and became a biophysicist. During the early 1950s he perfected a method of making X-ray diffraction photos of DNA fibers.

Such photos are taken by shooting a very narrow beam of X rays through the sample. Some of the X rays are bent by interaction with atoms. The emerging X-ray waves interfere with each other to form a pattern that registers on the film.

X-ray diffraction photos do not show the outlines of the molecules they represent. They are in “reciprocal space”: small distances on a photograph stand for large spaces in the molecule, and vice versa. The pictures must be interpreted by mathematical analysis; and the more complex the molecule, the more difficult that is.
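
Cloud’s “reciprocal space” is, in modern terms, the Bragg relation nλ = 2d sin θ, which the article doesn’t spell out: the smaller the repeat distance d inside the fiber, the wider the angle θ at which the bent X rays land on the film. A minimal Python sketch with illustrative numbers:

```python
import math

def bragg_spacing(wavelength_nm: float, theta_deg: float, order: int = 1) -> float:
    """Solve n*wavelength = 2*d*sin(theta) for the repeat distance d."""
    return order * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))

# Copper K-alpha X rays (~0.154 nm). Note the inversion: the *wider* angle
# on the film corresponds to the *smaller* repeat distance in the molecule.
print(bragg_spacing(0.154, 1.3))    # ~3.4 nm: one full helical turn, small angle
print(bragg_spacing(0.154, 13.1))   # ~0.34 nm: one rung spacing, wide angle
```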

Drs. Crick and Watson began to work on methods of interpreting the X-ray diffraction photos of DNA. They met at Cambridge, where Watson had gone to do research a couple of years after getting a Ph.D. from Indiana University.

Working backwards

Crick had worked out a theory for predicting what X-ray pictures of various molecular models would look like. That is, the pictures were so hard to interpret they had to work backwards: devise a model, then determine mathematically what its X-ray diffraction equivalent should be. Then the prediction was compared with actual distances and angles on the X-ray photos.
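
In modern terms, that is a propose-predict-compare loop. The sketch below is schematic only; predict_pattern and every value in it are invented placeholders, not real crystallography:

```python
def fit_structure(candidate_models, observed, predict_pattern, tolerance=0.05):
    """Try each model; keep the first whose predicted pattern matches the photo."""
    for model in candidate_models:
        predicted = predict_pattern(model)
        error = max(abs(p - o) for p, o in zip(predicted, observed))
        if error <= tolerance:       # spacings and angles agree with the film
            return model
    return None                      # no fit: back to the drawing board

best = fit_structure(
    candidate_models=["single helix", "triple helix", "double helix"],
    observed=[0.34, 3.4],            # toy "spacings" read off a photograph
    predict_pattern=lambda m: [0.34, 3.4] if m == "double helix" else [0.2, 2.0],
)
print(best)                          # "double helix"
```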

The two experimenters shared with Wilkins the idea that a twisted, helical molecular structure might fit the X-ray data (it had been discovered that such twists exist in other molecules produced by the cell). They built a model of rods, clamps, and sheet-metal cutouts (representing the various known pieces of the jigsaw puzzle), and evaluated it mathematically.

This first model didn’t prove out, and they temporarily dropped the problem, going on to other research. Some months later, in February, 1953, they learned of a structure proposed for DNA by Linus Pauling, Caltech’s Nobel-Prize-winning chemist. From their previous work, they knew that Pauling had to be wrong. This stimulated them to try another model, incorporating new information about the exact shapes of some of the subunits of DNA.

A month later they had a model that fitted the X-ray data closely. From it, they worked out the profound “Watson-Crick hypothesis,” which explains how the DNA molecule does its work in the cell. That hypothesis has been tested through ingenious experiments in numerous laboratories, and is accepted as gospel in the new world of molecular biology.

The key to life

The DNA molecule stands revealed as a double helix shaped roughly like a twisted ladder. 

The two legs of the ladder are identical, but the rungs are not, and this is the key to the molecule’s ability to store information. The order of the four different subunits that make up the rungs is the code of life.

The way the subunits link across the rungs is the key to DNA’s ability to transmit information. Each rung actually consists of two units, but the pairing of the units follows definite rules; the molecule can “unzip,” and each half serves as a template for rebuilding the missing half, producing two new molecules identical to the original one.
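
The “definite rules” are the now-familiar base pairings, adenine with thymine and guanine with cytosine, which the article never names. A minimal Python sketch of the unzip-and-rebuild cycle:

```python
# The fixed pairing rules (A-T, G-C) mean one strand fully determines the other.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def rebuild_missing_half(strand: str) -> str:
    """Return the complementary strand implied by the base-pairing rules."""
    return "".join(PAIRING[base] for base in strand)

original = "ATGCGT"
template_copy = rebuild_missing_half(original)        # "TACGCA"
# Copying the copy reproduces the original -- the "unzip and rebuild" cycle.
assert rebuild_missing_half(template_copy) == original
```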

The Watson-Crick hypothesis has made possible a new view of the “molecular basis of life”: In the cell—really a miniature chemical factory—DNA molecules contain the instructions that tell the molecular machinery of the factory what new molecules to build. The product molecules in turn determine the function of the cell, whether it’s a blood cell, a nerve cell, a sperm cell, or (if not part of a many-celled organism) perhaps a harmful bacterium.

In this way, the information stored in DNA molecules specifies an entire community of cells, such as those that add up to a human being—the color of his hair and eyes, his basic aptitudes, his built-in sensitivity or resistance to disease.

Programing a man

An individual DNA molecule is about 10,000 subunits long (that is, there are that many rungs on the ladder), and the list of instructions necessary to specify a human being is about 10 billion DNA units long. If the DNA molecules containing that message were placed end to end, they would make a strand 10 feet long, but only one twelve-millionth of an inch thick. Actually the strands are bundled in the microscopic bodies called chromosomes, in the nucleus of each cell, which hold the machinery of heredity.
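
Those figures are easy to sanity-check in a few lines of Python, assuming the modern value of roughly 0.34 nanometers of ladder per rung, a number the article does not give:

```python
# Sanity check on the article's arithmetic. The rise per rung (base pair) is
# the modern figure of ~0.34 nm; the article states only the totals.
RISE_PER_RUNG_NM = 0.34
rungs = 10_000_000_000            # "about 10 billion DNA units," per the article

length_m = rungs * RISE_PER_RUNG_NM * 1e-9
print(length_m * 3.28084)         # ~11 feet -- close to the article's "10 feet"

# Strand diameter ~2 nm, i.e. roughly one twelve-millionth of an inch:
print(1 / (2e-9 / 0.0254))        # ~1.27e7, matching "one twelve-millionth"
```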

The specifications must be passed on from generation to generation. This takes place during cell division, when the chromosomes divide. Preparatory to cell division, the DNA molecules in the chromosomes have unzipped and have been copied by the machinery of the cell.

Work in the cell, controlled by DNA, is important not only to healthy life, but also to disease. Viruses, for example, take over cells and turn them into virus factories by interfering with the normal flow of instructions and substituting new instructions. Hereditary diseases are the result of “errors” that have crept into the coded instructions during copying of DNA molecules. Such changes also transform normal cells into cancer cells, which have “forgotten” their usual roles and “learned” new functions.

Those facts explain why DNA has created such excitement among biologists. If a way can be found to send man-made chemical messages into cells and alter the instructions stored there by DNA molecules, almost anything is possible.

But that isn’t likely to come about this year or next. First the code must be deciphered. That’s where most of the research on DNA is concentrated today.

Another unsolved problem, perhaps even more mysterious, is how cells “decide” to use particular instructions stored in their DNA archives. Discoveries on this frontier will explain how cells respond to outside stimuli—and how a single fertilized cell can multiply selectively to produce the many different kinds of specialized cells that make up a human being.

The cover of the May 1963 Popular Science was very auto-focused.

Some text has been edited to match contemporary standards and style.

The post From the archives: The discovery of DNA’s structure explained how life ‘knows’ what to do appeared first on Popular Science.


]]>
From the archives: When the US first caught TV fever https://www.popsci.com/technology/television-invention/ Mon, 30 May 2022 12:30:00 +0000 https://www.popsci.com/?p=445590
Images from the February 1947 issue of Popular Science.
“Television on the job” by George H. Waltz, Jr. ran in the February 1947 issue of Popular Science. Popular Science

In a 1947 issue of Popular Science, we examine the dramatic promises of early television technology.

The post From the archives: When the US first caught TV fever appeared first on Popular Science.

]]>
Images from the February 1947 issue of Popular Science.
“Television on the job” by George H. Waltz, Jr. ran in the February 1947 issue of Popular Science. Popular Science


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

The Radio Corporation of America, or RCA—with its Victrola logo and iconic Nipper the dog mascot—was at the center of many technology disputes during the 20th century. None was as fiery, however, as the dispute over television’s invention. While Philo Farnsworth, a farm boy from Utah, was officially awarded the first television patent in 1930, Vladimir Zworykin, who fled Russia during the country’s revolution, filed years before him, in 1923.

In February 1947, Popular Science Associate Editor George H. Waltz, Jr. interviewed Zworykin—then director of RCA’s vast NJ-based laboratories—for “Television on the Job.” By then, a court battle had given Farnsworth invention credit, but Zworykin’s RCA-funded delivery system (the National Broadcasting Company, or NBC) sent TV into American homes, licensing Farnsworth’s design.

When Waltz’s story ran in February 1947, TV was still a novelty. RCA had introduced TV to America eight years earlier at the 1939 New York World’s Fair, but WWII paused its rollout. By the late ‘40s, only a few dozen cities offered programming, most with only one station, one channel, and just a few evening shows. By the end of 1947, however, the World Series was televised for the first time, and the monthly TV production rate had quadrupled.

Waltz captures the TV fever that had begun to grip the nation: “Television…means much more to us than an amusing accompaniment for radio’s sound. Its workaday uses are even more dramatic than its role as an entertainer.” He described exploring the ocean depths by “installing a television camera in a thick-walled metal bathysphere;” getting “close-up views of what goes on within chemical reaction chambers;” and equipping assembly lines with TV control rooms. Zworykin even envisioned broadcasting “the moon and stars” (he did live to see the lunar landing televised).

“Television on the job” (George H. Waltz, Jr., February 1947)

It extends human vision beneath seas, into furnaces and throughout factories.

Television is adding overalls to its dress clothes. Its sleeves are rolled—it is ready to go to work!

To most of us, television has been a promise of armchair entertainment—a chance to have choice seats at boxing bouts, football games, news events and stage plays without budging from the budget or the living room. That phase of television is here, but television’s future goes far beyond the mere prospects of animated quiz shows and soap operas you can see.

Television, like radio, is a versatile tool. A relatively small percentage of the radio waves that flash around the earth today carry music and comedy to our loudspeakers. Most of them have more important missions. Radio helps us go places and do business. Without it, large-scale scheduled air travel would be impossible, sea travel would be slowed, crime prevention hampered, news coverage cut down, and international business and diplomacy limited.

Television likewise means much more to us than an amusing accompaniment for radio’s sound. Its workaday uses are even more dramatic than its role as an entertainer.

I found that out when I got a firsthand look into television’s future at the large modern laboratories of the Radio Corporation of America at Princeton, N. J. There I put questions to Dr. Vladimir K. Zworykin, director of the laboratories’ program of electronic research—one of the men who helped raise television from its flickering beginnings to its present status.

Getting Dr. Zworykin to talk about television was not a hard assignment. He thinks it, dreams it, lives it, and talks about it with parental love.

“Television,” he explained, after he had shown me his laboratory, “is an extension of our sight. It gives us a simple means of getting eyewitness views of things happening in places too small, too distant, or far too dangerous for the average person to observe. Properly applied, television can show us many things that we have never seen before. It can open up whole new frontiers of research and knowledge.

“Undersea exploration is an excellent example. Few divers can descend more than 200 feet. Television, however, can put our eyes there without risk to our bodies. By installing a television camera in a thick-walled metal bathysphere, lowered from a survey ship, the deepest ocean floor can be explored safely and for hours at a time by skilled observers seated comfortably in front of a direct-wire television viewer on deck—or on dry land, for that matter, if the television signals from the camera are radio-relayed from ship to shore.”

As Dr. Zworykin enlarged on his idea I realized that the construction of such a television bathysphere would present no great problems. It could be similar in design to the diving ball that Dr. William Beebe used in his undersea observations. With thicker walls to withstand greater pressures, it would otherwise be simpler, since a television camera, unlike a man, requires no oxygen and would be unaffected by the near-zero temperatures 600 feet under.

Since scenes have been televised from the dim light of a candle, illumination would not be difficult. The modern television camera using the new Image Orthicon tube, another Zworykin-guided development, is as sensitive to light as the human eye, so floodlights for underwater television exploration would have to be no brighter than those required for human observation. The bathysphere could be lowered by cable, while remotely controlled motors built into a supporting gimbal could turn and tilt its “eye” to scan the surroundings. The bathysphere could be used to aid under-water salvage, guide the placing of drilling gear for undersea oil wells, assist in submarine rescues, and, perhaps, even test the myth of lost Atlantis. The depths that could be plumbed would be limited only by the strength of the sphere’s metal shell.

Similarly, according to Dr. Zworykin, television cameras can give us close-up views of what goes on within chemical reaction chambers, inside fiery furnaces and behind the thick lead walls that surround atomic-fission experiments. It provides us with a third eye that is unaffected by lethal fumes, heat or radiations.

What actually goes on inside smelting furnaces and glass furnaces is still pretty much anybody’s guess. The heat is so great that temperatures must be measured from a distance with optical pyrometers, and quick glimpses through jet-black goggles are the only observations possible. Any closer view would sear the flesh, blind the eyes.

Television cameras at strategic spots inside the furnaces could flash pictures of the fiery mass to a viewer in the office of the plant engineer. He could watch the process from beginning to end with no more bother than switching the viewer from one camera to another. He could literally “walk around” inside the furnace. The glow from the molten metal or glass would provide more than enough illumination, and liquid-cooled jackets would protect the cameras.

Dr. Zworykin also envisions television as a super-supervisor in the large factory of tomorrow. Television cameras set up in the various departments of a manufacturing plant would allow one man in a central room to watch, control and safeguard the entire plant’s activities. Rows of television viewers would show him exactly what was happening at nerve centers of the factory. His master control room would be an industrial equivalent of the CIC (Combat Intelligence Center) rooms that coordinated our fighting forces along the different fronts during the war. Such a system would speed production and safeguard life and property.

A similar setup on a smaller scale could be used to control the flow of automobile assembly lines. At present, it requires the services of a corps of men to supervise the 25 miles or more of subassembly and main assembly lines that snake their way through most big automobile plants. Television cameras set up at the feeder lines and along the length of the main assembly line and wired to viewers in the main supervisor’s office could bring the entire problem under his eyes.

Television may well change our whole concept of educational techniques, Dr. Zworykin believes. This is particularly true in medicine, where a student’s view of an operation consists of what he can see from his seat high in the operating-room amphitheater. Television, however, can give him a surgeon’s-eye view of the whole proceedings. A television camera mounted in the cluster of lights over the operating table and wired to screens in classrooms would not only give each student a close-up of the most delicate operation but would allow hundreds of students, instead of a few, to watch the demonstration. If put on the air, an operation could be witnessed by students in medical schools all over the country wherever television was available.

Long-distance diagnosis is another medical possibility. With the aid of television, a doctor and his patient could take full advantage of the knowledge and skill of a specialist a thousand miles away. Public health doctors could make actual television visits to health clinics in outlying districts. Special health lectures could be delivered simultaneously to widely scattered groups.

There is no reason why students some day will not get first-hand televised looks at the moon and stars through the giant Palomar telescope, watch important experiments in progress at the world’s great research centers, sit in on the actual proceedings at international conferences, or “attend” any of the firsts in science, exploration and the arts. Famous lecturers and educators could be seen and heard simultaneously in schools all over the country.

Television as a teaching aid was dramatically demonstrated in New York City during the war when first aid and fire-bomb-fighting methods were explained to the city’s volunteer air-raid wardens via the television camera. Viewers set up at air-raid posts throughout the city made it possible for a single group of civilian-defense experts to demonstrate air-raid procedures to more wardens than ever could have been jammed into the city’s largest auditorium. And what is more to the point, every warden had a close-up of the demonstration.

I asked Dr. Zworykin if he thought it would be possible to equip news reporters with lightweight television cameras that would allow them to broadcast on-the-spot views of accidents, fires, train wrecks and similar news events. As an answer he showed me the compact, lightweight television camera that has been developed for use in a guided rocket. Weighing only 34 pounds, and no larger than a suitcase, it may well be the forerunner of the newscaster’s “walkie-lookie.” It would have to be changed only slightly. Its compact transmitter and power supply, stowed in the reporter’s car, would transmit the scene being televised to a main broadcasting station. There a picture editor, seated before a bank of viewers showing the individual pickups from perhaps a dozen reporters on their beats, could select the events he desired and rebroadcast them to the station’s television public.

Several department stores are experimenting with direct-wire television as a means of displaying merchandise to customers. Fashion shows, displays and special skits to demonstrate kitchen and garden equipment are televised and piped to viewers placed in the store’s windows and at eye-catching spots around the store. In a sales test run by one large Eastern department store a poll of the customers showed that nine out of 10 felt television was an aid to their shopping.

Television billboards are the latest advertising wrinkle. The plan, conceived by a Boston, Mass., outdoor advertising firm, calls for a network of large outdoor screens to display television sales programs broadcast by a central station. Set up on roof tops and on the sides of buildings, the television billboards will offer a variety of entertainment interspersed with commercials.

A New York bank is considering installing a direct-wire television system to speed up and simplify the identification of customers. A viewer at each teller’s counter connected to a camera at the identification-card files will allow him to verify signatures and bank balances without leaving his window. A similar network for the nation’s police forces would speed identification of criminals by photos and fingerprints.

New developments still in the laboratory—such things as three-dimensional and full-color pictures—will extend television uses even further, Dr. Zworykin believes. Full-color television alone, for example, will greatly simplify the accurate matching of colors in the paint, dye and textile industries.

In the meantime, television as we know it today can go far to help industry solve its problems.

The February 1947 cover of Popular Science imagines the exciting depths where television would one day take us.

Some text has been edited to match contemporary standards and style.

The post From the archives: When the US first caught TV fever appeared first on Popular Science.


]]>
From the archives: This talking gadget from the 1920s measured water levels https://www.popsci.com/technology/talking-water-sensor/ Fri, 27 May 2022 11:00:00 +0000 https://www.popsci.com/?p=444134
Images from the November 1922 issue of Popular Science.
“Talking machine phones height of water in reservoir” appeared in the November 1922 issue of Popular Science. Popular Science

In 1922, Popular Science got a peek at our sensor-filled future with a Rube Goldberg-esque machine.

The post From the archives: This talking gadget from the 1920s measured water levels appeared first on Popular Science.

]]>
Images from the November 1922 issue of Popular Science.
“Talking machine phones height of water in reservoir” appeared in the November 1922 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Water level changes: float bobs, pulls cord, lifts tone-arm, moves needle, needle hovers over disk; caller calls, switch-operator connects, electric relay answers, soundbox lowers, phonograph spins, needle contacts disk, recorded voice announces numbers etched into precisely-positioned grooves. The result? A remote reading of the water level in a reservoir.

Rube Goldberg would be proud! 

Then again, when Popular Science published “Talking Machine Phones Height of Water in Reservoir” in November 1922, Rube Goldberg was just making a name for himself, and this contraption was a novel device on the forefront of telemetry—and a small window into a future sensor-filled world. Although the short piece doesn’t identify prospective customers, presumably water utilities would save time and cost calling the local reservoir for a water reading versus dispatching a technician. Or, perhaps an idle Vanderbilt or Rockefeller might find it amusing to keep a distant eye on the water levels of their country estate swimming pools.  

A century later, sensor networks have come a long way. Smart litter boxes, for example, monitor your furry feline’s output; canine lovers can check in on their barking companions; ingestible sensors will tell you if pops took his meds; slip a snore-detector under your partner’s pillow to settle the snoring debate once and for all; save your nose by monitoring your infant’s bowel movements; save your nose twice by monitoring your own movements; take the guesswork out of grocery shopping with a fridge cam; a WiFi water sensor will sense leaks before they get out of hand; or, if they do, dispatch your robo-mop from the office.

“Talking machine phones height of water in reservoir” (November 1922)

By combining the telephone and phonograph, an English firm has perfected a novel device that automatically announces in either words or code signals the height of water in a distant pond or reservoir.

The recorder can be “run up” or switched into any existing telephone or telegraph circuit when information about the height of water is sought. As installed, the new device consists of a phonograph mechanism with a phone transmitter substituted for the sound box. An electric motor drives the record table, and a relay, acting through levers, stops and starts the machine and lifts the needle from the record.

Float controls recording needle

The recording disk contains 200 concentric grooves, each groove a vocal record of a certain height of the water. By the movement of a float that rests on the water, the tone arm, sound box, and recording needle are moved laterally into position with the disk in such a way as to give the correct reading when the needle is brought into contact with the disk.
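
In modern terms, the float performs a 200-level quantization of the water height, and the disk is a lookup table keyed by groove. A hypothetical sketch of that mapping; the reservoir depth and exact wording are invented, and only the groove count and vocabulary come from the article:

```python
# The float-and-tone-arm linkage, restated as a lookup: water height is
# quantized into one of the disk's 200 grooves, each holding one "speech."
GROOVES = 200
FULL_HEIGHT_FT = 50.0   # assumed reservoir depth (not given in the article)

def groove_for(height_ft: float) -> int:
    """Quantize the float's height into one of the disk's concentric grooves."""
    index = int(height_ft / FULL_HEIGHT_FT * (GROOVES - 1))
    return max(0, min(GROOVES - 1, index))

DIGIT_WORDS = "nought one two three four five six seven eight nine".split()

def announce(height_ft: float) -> str:
    """Return the 'speech' stored in the selected groove, digit by digit."""
    groove = groove_for(height_ft)
    if groove == 0:
        return "empty"
    return " ".join(DIGIT_WORDS[int(d)] for d in str(groove))

print(announce(18.3))   # "seven two" -- groove 72, read out one digit at a time
```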

To effect this contact, the sound box, with needle, is automatically lowered when the disk mechanism is in rotation, and raised again above the disk when the mechanism stops.

The instrument is connected in the usual way with the nearest telephone exchange, and is given a regular subscriber’s number. When the inquirer seeks information about the height of the water, he asks Central for this number. As soon as the instrument phone rings, the needle immediately drops to the record, which makes three revolutions, and a voice announces over the telephone line the exact height of the water. The short “speeches” on the record range from “empty” to “one, double nought,” enunciating each digit of a figure, such as “seven two” and “seven two half.” The mere ringing of the phone sets the mechanism in operation, delivers the spoken information, and closes the recorder.

In the code signal type of mechanism the grooves on the record contain various combinations of dots to represent the changing height of the water.

Some text has been edited to match contemporary standards and style.

The post From the archives: This talking gadget from the 1920s measured water levels appeared first on Popular Science.


]]>
From the archives: Jacques Cousteau shows off his underwater film technology https://www.popsci.com/technology/jacques-cousteau-underwater-film/ Thu, 26 May 2022 11:00:00 +0000 https://www.popsci.com/?p=444096
Images from the February 1969 issue of Popular Science.
“How we film under the sea: Amazing one-man subs bring you eye-dazzling pictures from the deep” by Jacques Cousteau appeared in the February 1969 issue of Popular Science. Popular Science

In the February 1969 issue of Popular Science, Jacques Cousteau wrote about his extremely maneuverable, tiny subs.

The post From the archives: Jacques Cousteau shows off his underwater film technology appeared first on Popular Science.

]]>
Images from the February 1969 issue of Popular Science.
“How we film under the sea: Amazing one-man subs bring you eye-dazzling pictures from the deep” by Jacques Cousteau appeared in the February 1969 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

When Jacques Cousteau penned a story for Popular Science in February 1969, NASA was months away from crossing 240,000 miles of space to land a man on the moon, but marine technology was barely capable of sustained underwater exploration. To many, Cousteau, a French naval officer during WWII, is remembered for his nearly decade-long TV series, The Undersea World of Jacques Cousteau, which broadcast into millions of homes the wonders of ocean life and exposed the devastation of human activity. But Cousteau was also an inventor, whose contribution to marine-exploration technology equaled his devotion to marine conservation. The latter profited from the former, beginning with his invention of scuba (self-contained underwater breathing apparatus) in 1943. His Diving Saucer, which debuted in 1959, set the standard for nimble underwater exploration subs. 

In his 1969 Popular Science story, Cousteau proudly describes his many vessels and sensors—all customized, including his beloved ship Calypso. The story centers on Sea Fleas, “tiny subs” fitted with “jet propulsion” and “side nozzles” for “extreme maneuverability.” Cousteau’s narrative reveals not only his love of invention and the sea, but also sexist social conventions. “Practice makes [Sea Fleas] easy to handle,” he writes, “though at first I found them a bit like women—seductive and unpredictable.”

Despite considerable technological advances since the 1960s, even today Earthlings have mapped more surface area on Mars and the moon than Earth’s ocean floor, which is remarkable considering that the ocean is one vast, contiguous surface feature that dominates earthly life and livelihood. Led by Cousteau’s extraordinary example, however, the siren call of Earth’s oceans continues to inspire explorers and documentarians like Sylvia Earle, James Cameron, and Victor Vescovo. Today, we can dive even to the deepest reaches of our planet, but can we save them?

“How we film under the sea: Amazing one-man subs bring you eye-dazzling pictures from the deep” (Jacques Cousteau, February 1969)

How do you convey to the people of the world the magnificent vistas that lie beneath the ocean? I have a life-long love of it, and of adventure; and a few years ago we were given the opportunity to communicate this love, this source of inspiration, to the public via the medium of television.

By now our programs [“The Undersea World of Jacques Cousteau,” ABC-TV], produced by myself and Alan Landsburg, Metromedia Producers, have been seen by millions. Aided by my best team of divers and other underwater experts we have tried to show the viewer the strange and wonderful forms of life that exist in the ocean and the enormous natural riches it contains. We have tried to communicate some of the excitement we feel at being among the first to explore man’s last great frontier on earth.

Tools for exploring

Not the least of my reasons for embarking on this great adventure is the fact that fascinating, if expensive, new tools are now available to help us capture some of the wonders of the ocean on film.

Among the main ones are our pair of tiny one-man submarines, which we have dubbed the “Sea Fleas.” Before explaining how they work, a few other items: Our color-movie cameras (two are carried by each submarine) were engineered by us and contain special optics. They are usually operated as handheld units by our scuba divers.

The scuba gear was also designed by us. Each outfit contains a helmet radio transmitter for the diver to communicate with our ship, the Calypso, when he surfaces. Other features include a sonar transmitter for underwater communication between individual divers and between divers and the ship. The outfits have built-in lights, compasses, an emergency signal that can be heard a mile away, and emergency signal rockets. Last but not least is a shark billy to push the sharks away when they are too many.

The Calypso herself is worthy of mention. A former minesweeper which we first outfitted for oceanography in 1950, she includes such things as a built-in underwater observation station in her bow, closed-circuit TV monitors, laboratories, and living facilities.

The Sea Fleas

The tiny subs, which take anywhere from 10 to 100 percent of the film for our “specials,” are a story in themselves. The real breakthrough here is that they are small and easy to handle. They do not need a large, specialized vessel as a tender. They are a unique new tool for oceanography, for they can be deployed out into the ocean like a school of fish—with all the maneuverability and observational capabilities of small underwater creatures.

The basic features of the Sea Fleas were proved in my 1959 Diving Saucer—which, by the way, has been the ancestor of all modern underwater vehicles. The most important of these features are jet propulsion with two side nozzles; extreme maneuverability through 360-degree rotation of these nozzles together or separately; static buoyancy finely adjustable both ways; safety devices such as an inflatable hatch “tower” for exit in heavy seas; mercury ballast to instantly tilt the sub; extreme streamlining to avoid entanglements; and hydraulic controls.

The Sea Fleas proved themselves magnificently during their more than 100 hours of operation in some 50 dives. Practice makes them easy to handle, though at first I found them a bit like women—seductive and unpredictable. This last was, of course, due to their extreme miniaturization.

Inside the subs

Crawling into the sub’s rear hatch, the pilot lies prone in the three-by-6½-foot hull, looking out through the lower front porthole. His instruments include a barometer, a CO2 meter, clock, gyrocompass, two F (fathom) meters, tape recorder, radio for surface communication, pinger (for pinpointing the sub’s location), echo sounder, and underwater telephone.

A single control stick and rationalized controls make it easy for the operator to be both pilot and observer. The control stick takes the sub up, down, forward, back, or sideways. When it is moved from port to starboard, it controls the rudder. Back-and-forth movement activates the mercury trim system. The stick also holds switches for the propulsion pump motor, camera, and lights. As you can see, our pilot easily becomes a camera man, working in perfect harmony with the sub to record anything of interest that he sees in the ocean depths.

Cameras, lights, action 

Energy for the Sea Fleas comes from 62 two-volt lead-acid cells in four battery boxes beneath the outside fiberglass covers. These are open to sea pressure and are simply topped with oil. In addition to powering the two-hp. propulsion motor and the instruments, the batteries provide light for sailing and for photography.

Looking like giant eyes, three 750-watt iodine lamps mounted in the front of the sub give enough light for color pictures. Any two lights are used at a time, illuminating the ocean depths for either of the two movie cameras mounted below them. They make good pictures possible at ranges up to 20 feet in clear water. Two other lights are also used: a 150-watt searchlight for sailing and a 60-watt lamp to light the ocean bottom.

Diving the subs

With the pilot in the sub and the hatch tightly closed, the small two-ton vessel is swung over the stern of the Calypso by a winch. A scuba diver rides down with the sub to free the line when it is in the water. A 55-pound weight carries the Sea Flea to the bottom and is then dropped. To pinpoint buoyancy, the pilot can then admit up to 44 pounds of water into a ballast tank or drop, individually, a number of the small two-pound weights carried for this purpose.

The mercury trim system instantly adjusts the sub’s longitudinal angle. This works by shifting 143 pounds of mercury between the front of the sub and the back. To surface, a “standard” 44-pound weight is dropped. In an emergency, this can be supplemented by dropping the mercury and another 110-pound safety weight. The Sea Fleas carry a tank of oxygen adequate for 20 hours underwater. A special compound absorbs the pilot’s exhaled carbon dioxide. The barometer provides a convenient means of monitoring pressure. As oxygen content drops, it is replenished from the supply tank.
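
The dive profile amounts to bookkeeping of droppable weights against buoyancy. Below is a minimal sketch using the figures Cousteau gives; the assumption that the bare hull is trimmed exactly neutral is mine:

```python
# Dive-phase weight ledger for a Sea Flea, using the weights in the article.
# Sign convention: positive pounds = net downward force (the sub sinks).
class SeaFlea:
    def __init__(self):
        self.net_lb = 55.0           # neutral hull (assumed) + 55-lb descent weight

    def reach_bottom(self):
        self.net_lb -= 55            # drop the 55-lb descent weight: ~neutral

    def trim(self, water_lb=0.0, small_weights_dropped=0):
        self.net_lb += min(water_lb, 44)           # flood up to 44 lb of ballast
        self.net_lb -= 2 * small_weights_dropped   # or shed 2-lb trim weights

    def surface(self, emergency=False):
        self.net_lb -= 44            # drop the "standard" 44-lb weight
        if emergency:
            self.net_lb -= 143 + 110 # jettison mercury plus the safety weight

flea = SeaFlea()
flea.reach_bottom()
flea.trim(water_lb=6, small_weights_dropped=2)   # settle gently, then lighten
flea.surface()
print(flea.net_lb)      # -42.0: solidly buoyant and heading up
```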

The Sea Fleas are not limited to photography. They can mount a mechanical arm and claw for bringing back biological and geological samples from the bottom. Small as they are, my tiny subs are helping to test principles we will use in future submersibles. Built at a cost of $750,000 by Sud Aviation and in operation for over a year, they are the forerunners of go-anywhere underwater vehicles that will use fuel cells for unlimited range and mobility.

During my active life I have lived with the sea and in the sea. I have accumulated a great many feelings and impressions, things that I would like to communicate to make you understand why our work has been worthwhile, and why we must continue our research and exploration. It is my hope that, before I am relegated to a desk as the elderly director of an institute, I may continue to do so.

The February 1969 cover of Popular Science featuring underwater adventures.

Some text has been edited to match contemporary standards and style.

The post From the archives: Jacques Cousteau shows off his underwater film technology appeared first on Popular Science.


]]>
From the archives: How a medical ‘outsider’ discovered insulin https://www.popsci.com/science/insulin-discovery/ Wed, 25 May 2022 16:00:00 +0000 https://www.popsci.com/?p=444081
Images from the September 1923 issue of Popular Science Monthly.
“Insulin—a miracle of science” by Donald Harris appeared in the September 1923 issue of Popular Science Monthly. Popular Science

In September 1923, Popular Science profiled Frederick Grant Banting, a young Canadian doctor who discovered insulin and helped millions.

The post From the archives: How a medical ‘outsider’ discovered insulin appeared first on Popular Science.

]]>
Images from the September 1923 issue of Popular Science Monthly.
“Insulin—a miracle of science” by Donald Harris appeared in the September 1923 issue of Popular Science Monthly. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

At the time insulin was developed into a life-saving serum, the world was still recovering from The Great War’s 40 million casualties, silent movies were the rage, Ford’s Model T topped auto sales, and 22 of 100,000 New Yorkers were dying of diabetes. The disease kills the pancreas’s insulin-producing islet cells, or Islands of Langerhans. 

Also in 1920s America, medicine—the kind taught in universities and practiced in cloistered institutions—was the almost exclusive province of wealthy white men. In September 1923, when Popular Science hailed the discovery of insulin as a modern medical miracle, it’s no surprise the magazine (already a half-century old) chose to spotlight the farm-boy pedigree of insulin’s discoverer, Frederick Grant Banting, a young Canadian doctor who was considered an outsider in research circles. “‘No one had ever heard of him,’” Donald Harris wrote for Popular Science, quoting a New York City M.D. “‘This young doctor didn’t know much about diabetes. Quite by chance he discovered how to get insulin and use it as a cure.’” Not only was Banting an outsider, but his early methods were also viewed critically. He made progress extracting insulin from animal entrails in a nonconforming lab, set up at the home of a college friend in Toronto.

A century after insulin was first administered in trials, diabetes remains a killer, ranked 8th in 2020 by the CDC, taking lives at a faster clip than before its discovery. That’s because diabetes, specifically Type 2, has soared in the US, and not everyone can afford life-saving treatments. Even in 1923, doctors understood that insulin was only a stopgap and that a cure would still be needed. In what now seems like an omen directed at the 21st century, Harris wrote, “the medical profession has issued a warning that [insulin] is not to be regarded as a magic or instant cure. In fact, it is not a ‘cure’ at all, since it does not destroy the causes of the disease.”

Even with artificial pancreases, coaxing other organs to grow insulin, and monoclonal antibodies, a cure remains elusive. Thanks to the efforts of another nonconforming lab—one that employs highly restricted human embryonic stem cells—a diabetes cure may be closer than ever, a century after Banting’s insulin-treatment discovery.

“Insulin—a miracle of science” (Donald Harris, September 1923)

How a young laboratory assistant won world fame by discovering serum that offers relief to millions of diabetes sufferers

From six hospitals in the United States a few weeks ago came some news that electrified the scientific world. A serum derived from the entrails of animals, given to the hospitals for clinical test of its efficacy as a treatment for diabetes, had proved so extraordinarily successful in administration to many hundred patients, that the physicians who conducted the tests asserted without qualification that it appeared to be a sure method of controlling the disease. 

What “insulin” means

The new serum is insulin, a name derived from the Latin word meaning “island.” This name was applied because the particular groups of intestinal cells from which the serum is extracted are known in medicine as the “Islands of Langerhans.”

Since the first announcement of the successful tests of insulin, eminent medical men have been almost a unit in declaring that through its general use probably will be sounded the death knell of diabetes, a disease which until now has resisted the best efforts of medical science. Dr. Simon Flexner, director of the Rockefeller Institute for Medical Research, has said that insulin promises to prove “one of the great medical contributions to the world.”

Dr. A. I. Ringer, in charge of the test of insulin at the Montefiore Hospital in New York, stated unequivocally that “insulin is undoubtedly one of the greatest discoveries of the age,” and that, now that it has been given to the world, “no person should die of diabetes.”

Dr. Nellis B. Foster, writing in the “New York Medical Journal,” declared, “I think it is safe to say that could one start with insulin before operation, one could be reasonably sure that the patient would not die of diabetes.”

Other medical authorities expressed their endorsement of the new serum with equal enthusiasm, and John D. Rockefeller, Jr., only a few weeks ago contributed $150,000 to permit 15 hospitals in the United States to introduce the use of insulin in their clinics.

Probably the most amazing and dramatic feature of this remarkable discovery is the personality of its discoverer. The man who gave insulin to mankind was no famous medical authority, nor was he even a recognized scientist, trained in the intricacies of research, fortified by exhaustive knowledge of medical lore, and possessing extraordinary equipment in laboratory and materials. Instead, he was an obscure Canadian doctor of 31, less than six years out of medical college, a farmer’s son, who had accepted with pride a humble position as a laboratory assistant in a Canadian university when he returned wounded from war service only three years ago.

Moreover, the most valuable part of the discoverer’s work was accomplished in the incomplete home laboratory of a young Toronto doctor, a school friend, who permitted him the use of his home and equipment merely because he, the owner, was going away on a vacation and had no use for them.

The discoverer of insulin is Dr. Frederick Grant Banting. Probably there is no better illustration of the general attitude of the medical profession toward him and his discovery than the recent comment of Doctor Flexner:

Where experts failed

“No one had ever heard of him. In fact, there was no reason why anyone should have heard of him.

“This young doctor didn’t know much about diabetes. Quite by chance he discovered how to get insulin and use it as a cure. At Toronto he proved the efficacy of his treatment. We experienced physicians who had so much material and so much scientific background to help us find a cure for diabetes, failed. We feel like kicking ourselves.

“The world is enormously richer today as a result of Doctor Banting’s discovery of insulin. It seems to me that mankind never again will be in the grip of this disease as it has been for so long. There is still a bit of danger in its use, but some day we shall all know just how to administer it. Then all the world, every hamlet of it, will appreciate the benefits.”

In speaking of “chance” as a factor in Doctor Banting’s discovery, Doctor Flexner had no intention of belittling either the discovery or the discoverer. The truth is that in what Doctor Banting has accomplished, chance played a prominent part, and no one is more ready to admit the fact than he.

Diabetes is caused by the failure of the Islands of Langerhans properly to perform their functions. In the normal man these “islands” secrete insulin—the same insulin which now is taken from animals to supply a means of fighting diabetes.

Carbohydrates—foods of a starchy sort—are converted into sugar, which is absorbed by the intestines. Part of the sugar is carried to the liver, where it is stored as glycogen, or animal starch. The remainder is carried by the blood to the muscles and other tissues, where some of it is oxidized and some stored as glycogen.

Insulin secreted by the normal man passes directly into the blood, there combining chemically with the sugar substances formed from food to supply the body with elements necessary to the health. In diabetes, the insulin fails to perform its chemical action on the sugar substances. This causes them to circulate in large quantities through the blood and to be lost in excretions. The body, in consequence, is deprived of an important source of energy. Among the symptoms of the disease are voracious appetite and abnormal presence of sugar in the blood and certain bodily excretions.

What causes diabetes

The medical profession long had known that the removal from an animal of the Islands of Langerhans resulted in symptoms of diabetes. Also, marked destructive changes in the “islands” were noted in the majority of patients who suffered from diabetes. The conclusion, of course, was that a derangement of secretions from the “islands” was the cause of the disease. Investigators had arrived at the belief that extracts of the secretions from the “islands” obtained from animals might supply a serum, which, by supplying to diabetics the insulin which nature was failing to produce, would form an effective treatment for the disease.

Langerhans, the German physician for whom the “islands” are named, and others had expressed this opinion in treatises, but attempts to obtain pure insulin proved futile, since it was destroyed invariably by the powerful digestive ferments present in the extracts which were made.

Dr. Banting begins research work

In November, 1920, Doctor Banting, having chanced upon Langerhans’s work on the subject of diabetes, became interested in the possibility of developing the serum, and began experimenting at the laboratories of Western University, where he had been a laboratory assistant for a few months. He discovered this work to be so engrossing that he applied for a two months’ leave of absence and set up a laboratory at the home of Dr. F. W. Hipwell in Toronto. Doctor Hipwell was a school and college friend, who was leaving the city for a vacation. The two months’ leave was extended to three, and at the end of that time Doctor Banting resigned from the university, for his experiments were progressing with encouraging success.

In attempts to extract pure insulin from the intestinal tracts of animals, previous experimenters had shown that by tying up the ducts from whence came the digestive juices, degeneration occurred much more rapidly in the juices than in the Islands of Langerhans. After many months of work, Doctor Banting conceived the idea that if an extract were prepared from the intestinal tissue remaining, some time after the ducts had been tied, it should contain insulin because there would not be enough of the digestive ferments to destroy it.

In 1921 his experiments in this line proved successful; he obtained the serum he sought in a comparatively pure state and devised methods of refining it further, removing from it substances that rendered it unsuitable for repeated injection in man.

By this time his experiments had reached a stage that led the authorities of the University of Toronto to permit him to pursue his work in the famous Connaught Laboratories. It was from there, after several months’ intensive work to determine the effect of insulin on normal and diabetic animals, that the announcement was made that the Banting serum was ready to be offered to the medical profession for clinical test. In his work at the university, Doctor Banting was assisted by Dr. J. J. R. Macleod, Dr. C. H. Best, and others.

Successful tests in United States

The results of the tests of insulin conducted in six hospitals in the United States, have been entirely successful. Previous to the introduction of insulin, the accepted treatment for diabetes was dietetic—limiting the quantities of starches and sugar taken in food. This method of treatment was unsatisfactory, since the inadequate diet resulted in great loss of strength and energy, and since lapses from the severity of the prescribed diet resulted in recurrence of the diabetic symptoms.

Using insulin, physicians now are able to permit their patients a strength-sustaining diet during treatment. The serum, which usually is injected in the arm, restores to the body its normal power of transforming starches, sugar, fats, and similar food into the chemical constituents necessary for health.

Many of the patients whom the clinical directors pronounced cured of diabetes by insulin had been in a diabetic coma from which only a handful of sufferers ever had emerged previously. Five who had been in this last stage of the disease were treated and discharged at the Montefiore Hospital alone.

Robert Lansing is aided by insulin

Prominent among those whom insulin has helped is Robert Lansing, former Secretary of State, who had been suffering from diabetes for years. Recently he stated that, after six weeks’ treatment with insulin, he was well on the road to recovery.

Just how important Doctor Banting’s discovery is to the health of the nation is shown by mortality statistics recently published by the United States Census Bureau. These reveal that for 20 years the number of deaths from diabetes has been increasing steadily in the United States with a really startling increase since 1919.

In New York State the rate of mortality from diabetes is highest—22 in 100,000. New Jersey, Pennsylvania and Ohio also show very high rates, while in the West and South, deaths from this disease are comparatively few. The variations are due, not to differences in climate, it has been explained, but to the recognized varying susceptibility to diabetes of different classes of the population. [Editor’s note: Parts of the following phrasing have been edited for sensitivity.] Thus, older persons are more prone to contract the disease than the young; white persons are more susceptible than non-whites; women are more susceptible than men. Among the white nationalities, Irish and Jewish populations show an especial susceptibility, and the death rate from the disease consequently is large in states where these two groups make up a large portion of the population. Estimates as to the number of diabetics in this country vary between 500,000 and 2,000,000.

Despite the undoubted success of insulin, the medical profession has issued a warning that it is not to be regarded as a magic or instant cure. In fact, it is not a “cure” at all, since it does not destroy the causes of the disease. It is a remedy, merely supplying to the body an element that disease has removed. Injections of insulin every day for long periods are necessary to successful treatment. Stopping the treatment, it is said, causes the disease to reappear.

A high degree of skill is necessary in the administration of insulin. The quantity to be injected varies according to the proportion of sugar in the patient’s blood, and an overdose of insulin, medical authorities say, may result in serious complications.

Meanwhile Doctor Banting, lifted suddenly from obscurity to worldwide fame, remains, so his intimates say, the same unassuming, serious-minded, diffident young man he was when he returned from the war, wounded and wearing the Military Cross, to begin his professional life in Toronto. The people of Alliston, Ont., where live his father and mother, each past the threescore-and-ten mark, and his brothers and sisters, take immense pride in the fact that he has become the town’s most noted son. He still speaks of Alliston as “home.”

Began life as a farm boy

Before leaving Alliston 12 years ago to enter the University of Toronto, Doctor Banting was a farm boy, performing chores around his father’s homestead like hundreds of other boys in the agricultural sections of Canada. His teachers say that he made no particular mark in his studies, although he was studious and persevering. Up to the time he left there to study medicine, most of the people in Alliston believed he intended to enter the ministry. Immediately after graduating from medical school, he entered the Canadian army, becoming a battalion physician with the rank of captain. He was wounded at Cambrai and invalided to England, where he remained until 1920.

Referring to this incident of his career in the army, his mother recently furnished an illuminating sidelight on his character. “He made a promise that he would write to me every Sunday when he went away to college,” she recalled. “He never has failed to keep that promise. When his right arm was useless from wounds, he learned to write with his left hand so that I’d continue to get my letters.”

The Canadian Government recently granted Doctor Banting an annuity of $7500 for life in recognition of his discovery of insulin, and the Ontario Legislature has appropriated $10,000 a year to create a department of research in the University of Toronto. This will be known as the Banting-Best Chair of Research, and Doctor Banting has been appointed its first incumbent at a salary of $6000 a year.

Today the eyes of the scientific world are turned toward Canada, eager to glimpse the new activities of the young physician who has become a world figure at an age at which most professional men are struggling for a foothold.

Diabetes photo
September 1923 cover of Popular Science, featuring very fast cars… for their time.

Some text has been edited to match contemporary standards and style.

The post From the archives: How a medical ‘outsider’ discovered insulin appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What it would take for cars to actually fly https://www.popsci.com/aviation/history-of-flying-cars/ Tue, 24 May 2022 18:00:00 +0000 https://www.popsci.com/?p=445575
a purple and black and white stylized image of a historic foldable flying car with newspaper clippings in the background
"Airplane-auto folds up to fit in garage" appeared in the September 1926 issue of Popular Science. Popular Science

Since the 1800s, inventors have struggled to design a hybrid craft that could traverse both earth and sky—but flying cars might soon get a new lift.

The post What it would take for cars to actually fly appeared first on Popular Science.

]]>
a purple and black and white stylized image of a historic foldable flying car with newspaper clippings in the background
"Airplane-auto folds up to fit in garage" appeared in the September 1926 issue of Popular Science. Popular Science

From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

Decades before Orville and Wilbur Wright propellered into the air, the dream of flying cars (or carriages) got an unexpected lift. In 1856, French sea captain Jean-Marie Le Bris sailed through the skies in a horse-drawn glider fashioned after an albatross. The aptly named L’Albatros artificiel, or the Artificial Albatross, carried Le Bris 300 feet off the ground—an impressive height for the mid-19th century, when the first steam-powered automobiles were only just beginning to dot roadways. With this seminal, albeit short-lived, flight, a flying-car archetype was born. Subsequent iterations of soaring carriages followed with varying levels of success, such as aviation-industry titan Glenn Curtiss’s 1917 self-propelling Autoplane. Though a much-improved model, the car-craft managed hang-time only marginally better than Le Bris’s.

Then in the 1920s, Sherman Fairchild, founder of Fairchild Industries, whose father George W. Fairchild was the cofounder and first president of IBM, came up with a more practical concept. This concept actually worked! As Popular Science reported in September 1926, Sherman Fairchild designed and built one of the first flying cars, with wings that “fold like a beetle’s.” At the time, Popular Science predicted that Fairchild’s airplane-auto, which he used to tote his golfing pals around Long Island’s Gold Coast, would become “a serious competitor for space in the family garage.” Despite a century of noteworthy attempts to mainstream flying cars, however, the obstacles have proved too great. But promising developments in aviation technology and engineering suggest that this may soon change.

a bird-like flying craft attached to a wagon with a man in a top hat sitting in the cockpit
Le Bris in his L’Albatros artificiel. Public Domain

“When the airplane was invented, people came up with all kinds of amazing ideas about what airplanes were going to do,” says aviation historian Janet Bednarek, a history professor at the University of Dayton in Ohio. Society had lofty hopes for the flying crafts: “They were going to create world peace. They were going to improve human beings, and bring about greater racial and gender equality,” she says. Of course, the idea of fusing already popular automobiles with novel airplanes was even more appealing. “It’s the most persistent part of [the aviation ideal] that doesn’t die,” Bednarek notes. If fiction is any gauge (Chitty Chitty Bang Bang, The Jetsons, Back to the Future), flying cars have captured our collective imagination; they often represent individual freedom—the ability to go wherever, whenever, even through time.

The concept of flying cars, or at least an airplane in every garage, had been so firmly fixed in the national consciousness that in 1934, Eugene Vidal, newly-minted director of the US Bureau of Air Commerce, launched a contest for an affordable family airplane, dubbed flivver aircraft (flivver being slang for cheap cars, made popular by Ford’s Model T). The contest prompted entries ranging from auto companies to independent inventors like Waldo Waterman, who used the frame of a Studebaker to build his contest-winning Aerobile, or Arrowplane. But even with Uncle Sam’s backing, flying car sales never took off.

A decade later, as World War II came to a close, many trained pilots returned home, and the notion of an airplane in every garage got another lift. Inventor Robert Fulton (an alleged descendant of the famed steam-engine pioneer with the same name) introduced the Airphibian—an airplane that could be modified into an auto simply by detaching the wings and propeller. That same decade, the first backpack helicopter, or hoppicopter, was built by Horace Pentecost, which Popular Science covered in July 1945. However, these designs didn’t gain traction, resulting in another disappointing decade for flying car enthusiasts.

a bright red plane with tannish gold trim. the plane's back end and wings can be separated from the front of the cockpit, which is alternatively a car
In 1950, Fulton’s Airphibian became the first roadable aircraft to receive a type certificate from the Civil Aviation Administration. Smithsonian National Air and Space Museum, National Air and Space Museum

Since the advent of aviation, flying cars have never progressed from prototype to reality. “Having a car attached to an airplane is hard to make efficient,” says Ella Atkins, an aerospace engineering professor and director of the University of Michigan’s Autonomous Aerospace Systems Lab. “The car is not going to notice much difference from having the parts of an airplane, but the plane is going to really be impacted by the presence of a car.”

It’s more than just impractical engineering that has grounded the flying car dream. After all, as far back as 1926, Fairchild’s foldable design seemed to have struck a crude auto-airplane compromise. But he couldn’t solve the complex matter of operating the flying craft itself. “Pilots need a lot more training and practice to become proficient than 16-year-olds who get their drivers’ licenses,” Atkins says.

Plus, flying a rusty jalopy is much riskier than driving one, even for a trained pilot. Had Vidal succeeded in coaxing manufacturers to build and sell affordable family aircraft in 1934, his grand vision of an airplane in every garage likely would have failed when the cost of upkeep began breaking family budgets. “Airplanes are very maintenance intensive,” Bednarek says. “There are a lot more costs associated with owning an airplane than with owning a car.”

In the case of aircraft, maintenance is not just about keeping the airborne passengers safe; the world below is vulnerable, too. “I’ve been in a car that actually had its [backseat] battery drop through the floor,” Atkins says. If this happened in a flying car, pulling over on the side of the road would not be an option for a pilot, she adds. Batteries dropping from the sky would send shockwaves through any community, and yet pelting neighborhoods with projectile parts might not even be flying cars’ worst fallout. “It’s about the noise. It’s about the annoyance,” says Atkins, who adds that these craft could also potentially create a significant amount of congestion. “We can’t just stop in bumper to bumper air traffic,” she adds. “Even if we have hover-capable aircraft, we are burning a tremendous amount of fuel or electricity just to stay in the air.”

[Related: From the archives: A grand tribute and eulogy for Zeppelins]

What’s more, any congestion would only ratchet up existing emissions from aircraft. Aviation already pumps out 2.5 percent of the planet’s greenhouse gases without the added snarl and stink of flying-car traffic jams. Not surprisingly, the 21st century offers its own special twist: climate change. If aviation were a country, it would rank sixth in the world for CO2 emissions. “Aviation is in the crosshairs as a huge emitter,” says Bednarek, who considers climate change an existential threat to aviation in general, let alone flying cars.

Despite so many obstacles, the transportation landscape might finally be ready for flying cars—and it’s mostly thanks to deep-pocketed investors. A collection of companies like Terrafugia, Klein Vision, Pal-V, and Aeromobil have announced plans to soon offer true hybrid flying cars, equally capable of cruising down the freeway and soaring through the skies. Bell Nexus and Joby Aviation (which in 2020 acquired Uber Elevate, the ridesharing company’s aerial initiative), have their sights set on all-electric, vertical take-off and landing (eVTOL) air taxis, set to debut in 2023. “There are a couple of planned communities being designed in Northern California,” says Atkins, “with a model of actually having solar panels to charge eVTOLs for a commuter service to go in and out of the Bay area each day.”

a modern hybrid car airplane model in yellow and white in an indoor hanger
A new flying car design by AeroMobil. AeroMobil

But even as flying cars seem on the cusp of a breakthrough, a whole new class of vehicles is quickly cluttering the skies: autonomous drones that are increasingly being used for package delivery, surveillance, mapping, news, and entertainment. “The biggest obstacle to all this,” Atkins explains, “is transitioning away from voice-based air traffic control to data link.” By data link, she means enabling aircraft to communicate directly with one another, with little or no human intervention.

Before flying cars, air taxis, or drones can take to the skies in numbers, air traffic control will need a serious upgrade. Atkins envisions a mainly autonomous solution—an Urban Air Mobility, or UAM, air traffic control system. A UAM would enable aircraft to communicate directly with one another (no humans in the loop), as well as with a central command center and community-based centers, which, when combined, would be capable of handling thousands of simultaneous flights over a metropolitan area.
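In software terms, the shift Atkins describes is from spoken position reports to structured, machine-readable broadcasts. The sketch below is purely illustrative (the message fields, vehicle names, and 500-meter separation figure are invented for the example, not drawn from any real UAM protocol): each craft periodically broadcasts its state, and any receiver can flag a pair of broadcasts that violate a separation minimum.

```python
# A toy illustration of a "data link" state broadcast. Every field and
# threshold here is hypothetical; this is not any real aviation protocol.

from dataclasses import dataclass
import math

@dataclass
class StateBroadcast:
    craft_id: str
    x_m: float        # east position, meters
    y_m: float        # north position, meters
    alt_m: float      # altitude, meters
    heading_deg: float

MIN_SEPARATION_M = 500  # hypothetical required spacing

def too_close(a: StateBroadcast, b: StateBroadcast) -> bool:
    """Flag a pair of broadcasts that violate the separation minimum."""
    dist = math.dist((a.x_m, a.y_m, a.alt_m), (b.x_m, b.y_m, b.alt_m))
    return dist < MIN_SEPARATION_M

a = StateBroadcast("taxi-1", 0, 0, 300, 90)
b = StateBroadcast("drone-7", 200, 300, 350, 270)
print(too_close(a, b))   # True: this pair needs an automated resolution
```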

Bednarek is not so sanguine about the coming whirlwind of air traffic, including flying cars. “I think people would actually be rather repulsed by the environmental impact,” she says, citing the visual, noise, and carbon pollution. “I’m not entirely convinced that we should get there, even if we could.” She concedes, though, that flying cars remain “probably the most persistent dream of those who are enthusiastic about flight.”

Correction: It was Sherman Fairchild’s father George W. Fairchild who cofounded IBM, not Sherman Fairchild as originally stated.

The post What it would take for cars to actually fly appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: A forecast on artificial intelligence, from the 1980s and beyond https://www.popsci.com/technology/ai-history-eighties/ Tue, 24 May 2022 11:00:00 +0000 https://www.popsci.com/?p=443886
Images from the February 1989 issue of Popular Science from an article on "brain-style" computers.
“Brain-style computers” by Naomi J. Freundlich appeared in the February 1989 issue of Popular Science. Popular Science

In the February 1989 issue of Popular Science, we dove deep in the reemerging projects developing 'brain-style' computers and their futures in the next two decades.

The post From the archives: A forecast on artificial intelligence, from the 1980s and beyond appeared first on Popular Science.

]]>
Images from the February 1989 issue of Popular Science from an article on "brain-style" computers.
“Brain-style computers” by Naomi J. Freundlich appeared in the February 1989 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Social psychologist Frank Rosenblatt had such a passion for brain mechanics that he built a computer model fashioned after a human brain’s neural network, and trained it to recognize simple patterns. He called his IBM 704-based model Perceptron. A New York Times headline called it an “Embryo of Computer Designed to Read and Grow Wiser.” Popular Science called Perceptrons “Machines that learn.” At the time, Rosenblatt claimed “it would be possible to build brains that could reproduce themselves on an assembly line and which would be conscious of their existence.” The year was 1958.

Many assailed Rosenblatt’s approach to artificial intelligence as being computationally impractical and hopelessly simplistic. A critical 1969 book by Turing Award winner Marvin Minsky marked the onset of a period dubbed the AI winter, when little funding was devoted to such research—a short revival in the early ‘80s notwithstanding. 

In a 1989 Popular Science piece, “Brain-Style Computers,” science and medical writer Naomi Freundlich was among the first journalists to anticipate the thaw of that long winter, which lingered into the ‘90s. Even before Geoffrey Hinton, considered one of the founders of modern deep learning techniques, published his seminal 1992 explainer in Scientific American, Freundlich’s reporting offered one of the most comprehensive insights into what was about to unfold in AI in the next two decades. 

“The resurgence of more-sophisticated neural networks,” wrote Freundlich, “was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws.” Of course, the missing ingredient in 1989 was data—the vast troves of information, labeled and unlabeled, that today’s deep-learning neural networks inhale to train themselves. It was the rapid expansion of the internet, starting in the late 1990s, that made big data possible and, coupled with the other ingredients noted by Freundlich, unleashed AI—nearly half a century after Rosenblatt’s Perceptron debut.

“Brain-style computers” (Naomi J. Freundlich, February 1989)

I walked into the semi-circular lecture hall at Columbia University and searched for a seat within the crowded tiered gallery. An excited buzz petered off to a few coughs and rustling paper as a young man wearing circular wire-rimmed glasses walked toward the lectern carrying a portable stereo tape player under his arm. Dressed in a tweed jacket and corduroys, he looked like an Ivy League student about to play us some of his favorite rock tunes. But instead, when he pushed the “on” button, a string of garbled baby talk (more specifically, baby-computer talk) came flooding out. At first unintelligible, really just bursts of sounds, the child-robot voice repeated the string over and over until it became ten distinct words.

“This is a recording of a computer that taught itself to pronounce English text overnight,” said Terrence Sejnowski, a biophysicist at Johns Hopkins University. A jubilant crowd broke into animated applause. Sejnowski had just demonstrated a “learning” computer, one of the first of a radically new kind of artificial-intelligence machine. 

Called neural networks, these computers are loosely modeled after the interconnected web of neurons, or nerve cells, in the brain. They represent a dramatic change in the way scientists are thinking about artificial intelligence: a leaning toward a more literal interpretation of how the brain functions. The reason: Although some of today’s computers are extremely powerful processors that can crunch numbers at phenomenal speeds, they fail at tasks a child does with ease: recognizing faces, learning to speak and walk, or reading printed text. According to one expert, the visual system of one human being can do more image processing than all the supercomputers in the world put together. These kinds of tasks require an enormous number of rules and instructions embodying every possible variable. Neural networks do not require this kind of programming; rather, like humans, they seem to learn by experience.

For the military, this means target-recognition systems, self-navigating tanks, and even smart missiles that chase targets. For the business world, neural networks promise handwriting- and face-recognition systems and computer loan officers and bond traders. And for the manufacturing sector, quality-control vision systems and robot control are just two goals.

Interest in neural networks has grown exponentially. A recent meeting in San Diego brought 2,000 participants. More than 100 companies are working on neural networks, including several small start-ups that have begun marketing neural-network software and peripherals. Some computer giants, such as IBM, AT&T, Texas Instruments, Nippon Electric Co., and Fujitsu, are also going full ahead with research. And the Defense Advanced Research Projects Agency (or DARPA) released a study last year that recommended neural-network funding of $400 million over eight years. It would be one of the largest programs ever undertaken by the agency. 

Ever since the early days of computer science, the brain has been a model for emerging machines. But compared with the brain, today’s computers are little more than glorified calculators. The reason: A computer has a single processor operating on programmed instructions. Each task is divided into many tiny steps that are performed quickly, one at a time. This pipeline approach leaves computers vulnerable to a condition commonly found on California freeways: One stalled car (one unsolvable step) can back up traffic indefinitely. The brain, in contrast, is made up of billions of neurons, or nerve cells, each connected to thousands of others. A specific task enlists the activity of whole fields of neurons; the communication pathways among them lead to solutions.

The excitement over neural networks is not new and neither are the “brain makers.” Warren S. McCulloch, a psychiatrist at the Universities of Illinois and Chicago, and his student Walter H. Pitts began studying neurons as logic devices in the early 1940s. They wrote an article outlining how neurons communicate with each other electrochemically: A neuron receives inputs from surrounding cells. If the sum of the inputs is positive and above a certain preset threshold, the neuron will fire. Suppose, for example, that a neuron has a threshold of two and has two connections, A and B. The neuron will be on only if both A and B are on. This is called a logical “and” operation. Another logic operation called the “inclusive or” is achieved by setting the threshold at one: If either A or B is on, the neuron is on. If both A and B are on, then the neuron is also on. 
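In present-day code, the McCulloch-Pitts unit just described takes only a few lines. The sketch below (a minimal Python illustration; the unit weights are illustrative choices, while the thresholds of two and one come from the text) reproduces both examples: the logical “and” and the “inclusive or.”

```python
# A runnable sketch of the threshold unit described above.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def logic_and(a, b):
    # Threshold of two with two unit-weight connections: both must be on.
    return neuron([a, b], [1, 1], threshold=2)

def logic_or(a, b):
    # Threshold of one: either input (or both) turns the neuron on.
    return neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b}  and={logic_and(a, b)}  or={logic_or(a, b)}")
```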

In 1958 Cornell University psychologist Frank Rosenblatt used hundreds of these artificial “neurons” to develop a two-layer pattern-learning network called the perceptron. The key to Rosenblatt’s system was that it learned. In the brain, learning occurs predominantly by modification of the connections between neurons. Simply put, if two neurons are active at once and they’re connected, then the synapses (connections) between them will get stronger. This learning rule is called Hebb’s rule and was the basis for learning in the perceptron. Using Hebb’s rule, the network appears to “learn by experience” because connections that are used often are reinforced. The electronic analog of a synapse is a resistor, and in the perceptron, resistors controlled the amount of current that flowed between transistor circuits.
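A toy sketch of Hebb’s rule as just described, assuming the simple product form of the update (the learning rate, input pattern, and repetition count are illustrative choices, not details of the perceptron hardware): connections between jointly active units grow stronger with repeated use.

```python
# Hebbian update: strengthen a connection when its input line and the
# output unit are active together.

def hebb_update(weights, inputs, output, rate=0.1):
    """Strengthen each connection in proportion to joint activity."""
    return [w + rate * x * output for w, x in zip(weights, inputs)]

weights = [0.0, 0.0, 0.0]
pattern = [1, 0, 1]              # two active inputs, one silent
for _ in range(5):               # connections that are used often grow
    weights = hebb_update(weights, pattern, output=1)
print(weights)                   # [0.5, 0.0, 0.5]: unused link unchanged
```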

Other simple networks were also built at this time. Bernard Widrow, an electrical engineer at Stanford University, developed a machine called Adaline (for adaptive linear neurons) that could translate speech, play blackjack, and predict weather for the San Francisco area better than any weatherman. The neural network field was an active one until 1969. 

In that year the Massachusetts Institute of Technology’s Marvin Minsky and Seymour Papert—major forces in the rule-based AI field—wrote a book called Perceptrons that attacked the perceptron design as being “too simple to be serious.” The main problem: The perceptron was a two-layer system (input led directly into output) and learning was limited. “What Rosenblatt and others wanted to do basically was to solve difficult problems with a knee-jerk reflex,” says Sejnowski.

The other problem was that perceptrons were limited in the logic operations they could execute, and therefore they could only solve clearly definable problems: deciding between an L and a T, for example. The reason: Perceptrons could not handle the third logic operation, called the “exclusive or.” This operation requires that the logic unit turn on if either A or B is on, but not if they both are.
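The exclusive-or limitation is easy to verify by brute force. The sketch below scans a small grid of weights and thresholds for a single threshold unit matching a given truth table; it finds “and” immediately but comes up empty for “exclusive or.” The grid bounds are arbitrary, though the conclusion holds for any weights, since no single line can separate XOR’s on cases from its off cases.

```python
# Brute-force search for one threshold unit matching a truth table.

from itertools import product

def fires(a, b, w1, w2, t):
    return 1 if w1 * a + w2 * b >= t else 0

def find_unit(table):
    grid = [0.5 * k for k in range(-8, 9)]        # -4.0 to 4.0 in halves
    for w1, w2, t in product(grid, repeat=3):
        if all(fires(a, b, w1, w2, t) == table[(a, b)]
               for a, b in product((0, 1), repeat=2)):
            return w1, w2, t
    return None

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print("AND unit:", find_unit(AND))   # a solution turns up at once
print("XOR unit:", find_unit(XOR))   # None: no single unit can do it
```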

According to Tom Schwartz, a neural-network consultant in Mountain View, Calif., technology constraints limited the success of perceptrons. “The idea of a multilayer perceptron was proposed by Rosenblatt, but without a good multilayer learning law you were limited in what you could do with neural nets.” Minsky’s book, combined with the perceptron’s failure to achieve developers’ expectations, squelched the neural-network boom. Computer scientists charged ahead with traditional artificial intelligence, such as expert systems. 

Underground connections

During the “dark ages,” as some call the 15 years between the publication of Minsky’s Perceptrons and the recent revival of neural networks, some die-hard “connectionists” (neural-network adherents) prevailed. One of them was physicist John J. Hopfield, who splits his time between the California Institute of Technology and AT&T Bell Laboratories. A paper he wrote in 1982 described mathematically how neurons could act collectively to process and store information, comparing a problem’s solution in a neural network with achieving the lowest energy state in physics. As an example, Hopfield demonstrated how a network could solve the “traveling salesman” problem (finding the shortest route through a group of cities), a problem that had long eluded conventional computers. This paper is credited with reinvigorating the neural network field. “It took a lot of guts to publish that paper in 1982,” says Schwartz. “Hopfield should be known as the fellow who brought neural nets back from the dead.”
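A hedged sketch of the collective behavior described above: units update one at a time, each update can only lower a global “energy,” and the network settles into a stored low-energy pattern. The Hebbian weight prescription and the tiny sizes below are textbook conventions chosen for illustration, not details from the 1982 paper.

```python
# A small Hopfield-style memory: symmetric weights, asynchronous updates.

import random
random.seed(1)

def store(pattern):
    """Hebbian weights for one +/-1 pattern (zero self-connections)."""
    n = len(pattern)
    return [[(pattern[i] * pattern[j]) / n if i != j else 0.0
             for j in range(n)] for i in range(n)]

def settle(state, w, steps=100):
    """Update randomly chosen units; no update can raise the energy."""
    n = len(state)
    for _ in range(steps):
        i = random.randrange(n)
        field = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if field >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
w = store(stored)
noisy = stored[:]
noisy[0] = -noisy[0]               # corrupt one unit
print(settle(noisy, w) == stored)  # True: the memory is recovered
```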

The resurgence of more-sophisticated neural networks was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws. The most important of these learning laws is something called back-propagation, illustrated dramatically by Sejnowski’s NetTalk, which I heard at Columbia.

With NetTalk and subsequent neural networks, a third layer, called the hidden layer, is added to the two-layer network. This hidden layer is analogous to the brain’s interneurons, which map out pathways between the sensory and motor neurons. NetTalk is a neural-network simulation with 300 processing units (representing neurons) and over 10,000 connections arranged in three layers. For the demonstration I heard, the initial training input was a 500-word text of a first-grader’s conversation. The output layer consisted of units that encoded the 55 possible phonemes (discrete speech sounds) in the English language. The output units can drive a digital speech synthesizer that produces sounds from a string of phonemes. When NetTalk saw the letter N (in the word “can,” for example) it randomly (and erroneously) activated a set of hidden-layer units that signaled the output “ah.” This output was then compared with a model (a correct letter-to-phoneme translation) to calculate the error mathematically. The learning rule, which is actually a mathematical formula, corrects this error by “apportioning the blame”: reducing the strengths of the connections between the hidden layer that corresponds to N and the output that corresponds to “ah.” “At the beginning of NetTalk all the connection strengths are random, so the output that the network produces is random,” says Sejnowski. “Very quickly as we change the weights to minimize error, the network starts picking up on the regular pattern. It distinguishes consonants and vowels, and can make finer distinctions according to particular ways of pronouncing individual letters.”
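In code, the blame-apportioning step reads roughly as follows. This is a deliberately tiny stand-in for NetTalk (a handful of sigmoid units rather than 300, with made-up layer sizes, learning rate, and training example): the output is compared with the correct answer, a share of the blame is passed back to each hidden unit, and every connection strength is nudged to reduce the error.

```python
# Minimal three-layer network with a backpropagation-style update.

import math, random
random.seed(0)

n_in, n_hid, n_out = 4, 3, 2
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, rate=0.5):
    # Forward pass: input -> hidden layer -> output.
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    # Output error, then the share of blame for each hidden unit.
    d_out = [(t - yi) * yi * (1 - yi) for t, yi in zip(target, y)]
    d_hid = [hi * (1 - hi) * sum(d_out[k] * w2[k][j] for k in range(n_out))
             for j, hi in enumerate(h)]
    # Adjust each connection strength in proportion to its blame.
    for k in range(n_out):
        for j in range(n_hid):
            w2[k][j] += rate * d_out[k] * h[j]
    for j in range(n_hid):
        for i in range(n_in):
            w1[j][i] += rate * d_hid[j] * x[i]
    return sum((t - yi) ** 2 for t, yi in zip(target, y))

for _ in range(500):
    err = train_step([1, 0, 1, 0], [1, 0])
print(f"squared error after training: {err:.6f}")   # shrinks toward zero
```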

Trained on 1,000 words, within a week NetTalk developed a 20,000-word dictionary. “The important point is that the network was not only able to memorize the training words, but it generalized. It was able to predict new words it had never seen before,” says Sejnowski. “It’s similar to how humans would generalize while reading ‘Jabberwocky.’”

Generalizing is an important goal for neural networks. To illustrate this, Hopfield described a munition identification problem he worked on two summers ago in Fort Monmouth, N.J. “Let’s say a battalion needs to identify an unexploded munition before it can be disarmed,” he says. “Unfortunately there are 50,000 different kinds of hardware it might be. A traditional computer would make the identification using a treelike decision process,” says Hopfield. “The first decision could be based on the length of the munition.” But there’s one problem: “It turns out the munition’s nose is buried in the sand, and obviously a soldier can’t go out and measure how long it is. Although you’ve got lots of information, there are always going to be pieces that you are not allowed to get. As a result you can’t go through a treelike structure and make an identification.”

Hopfield sees this kind of problem as approachable from a neural-network point of view. “With a neural net you could know ten out of thirty pieces of information about the munition and get an answer.”

Besides generalizing, another important feature of neural networks is that they “degrade gracefully.” The human brain is in a constant state of degradation: one night spent drinking alcohol consumes thousands of brain cells. But because whole fields of neurons contribute to every task, the loss of a few is not noticeable. The same is true with neural networks. David Rumelhart, a psychologist and neural-network researcher at Stanford University, explains: “The behavior of the network is not determined by one little localized part, but in fact by the interactions of all the units in the network. If you delete one of the units, it’s not terribly important. Deleting one of the components in a conventional computer will typically bring computation to a halt.”

Simulating networks

Although neural networks can be built from wires and transistors, according to Schwartz, “Ninety-nine percent of what people talk about in neural nets are really software simulations of neural nets run on conventional processors.” Simulating a neural network means mathematically defining the nodes (processors) and weights (adaptive coefficients) assigned to it. “The processing that each element does is determined by a mathematical formula that defines the element’s output signal as a function of whatever input signals have just arrived and the adaptive coefficients present in the local memory,” explains Robert Hecht-Nielsen, president of Hecht-Nielsen Neurocomputer Corp. 

Some companies, such as Hecht-Nielsen Neurocomputer in San Diego, Synaptics Inc. in San Jose, Calif., and most recently Nippon Electric Co., are selling specially wired boards that link to conventional computers. The neural network is simulated on the board and then integrated via software to an IBM PC-type machine.

Other companies are providing commercial software simulations of neural networks. One of the most successful is Nestor, Inc., a Providence, R.I.-based company that developed a software package that allows users to simulate circuits on desktop computers. So far several job-specific neural networks have been developed. They include: a signature-verification system; a network that reads handwritten numbers on checks; one that helps screen mortgage loans; a network that identifies abnormal heart rates; and another that can recognize 11 different aircraft, regardless of the observation angle.

Several military contractors, including Bendix Aerospace, TRW, and the University of Pennsylvania, are also going ahead with neural networks for signal processing: training networks to identify enemy vehicles by their radar or sonar patterns, for example.

Still, there are some groups concentrating on neural network chips. At Bell Laboratories a group headed by solid-state physicist Larry Jackel constructed an experimental neural-net chip that has 75,000 transistors and an array of 54 simple processors connected by a network of resistors. The chip is about the size of a dime. Also developed at Bell Labs is a chip containing 14,400 artificial neurons made of light-sensitive amorphous silicon and deposited as a thin film on glass. When a slide is projected on the film several times, the image gets stored in the network. If the network is then shown just a small part of the image, it will reconstruct the original picture. 

Finally, at Synaptics, Caltech’s Carver Mead is designing analog chips modeled after the human retina and cochlea.

According to Scott E. Fahlman, a senior research scientist at Carnegie Mellon University in Pittsburgh, Pa., “building a chip for just one network can take two or three years.” The problem is that the process of laying out all the interconnected wires requires advanced techniques. Simulating networks on digital machines allows researchers to search for the best architecture before committing to hardware. 

Cheap imitation

“There are at least fifty different types of networks being explored in research or being developed for applications,” says Hecht-Nielsen. “The differences are mainly in the learning laws implemented and the topology [detailed mapping] of the connections.” Most of these networks are called “feed-forward” networks: information is passed forward in the layered network from inputs to hidden units and finally outputs.

John Hopfield is not sure this is the best architecture for neural nets. “In neurobiology there is an immense amount of feedback. You have connections coming back through the layers or interconnections within the layers. That makes the system much more powerful from a computational point of view.” 

That kind of criticism brings up the question of how closely neural networks need to model the brain. Fahlman says that neural-network researchers and neurobiologists are “loosely coupled.” “Neurobiologists can tell me that the right number of elements to think about is tens of billions. They can tell me that the right kind of interconnection is one thousand or ten thousand to each neuron. And they can tell me that there doesn’t seem to be a lot of flow backward through a neuron,” he says. But unfortunately, he adds, “they can’t provide information about exactly what’s going on in the synapse of the neuron.” 

Neural networks, according to the DARPA study, are a long way off from achieving the connectivity of the human brain; at this point a cockroach looks like a genius. DARPA projects that in five years the electronic “neurons” of a neural network could approach the complexity of a bee’s nervous system. That kind of complexity would allow applications like stealth aircraft detection, battlefield surveillance, and target recognition using several sensor types. “Bees are pretty smart compared with smart weapons,” commented Craig I. Fields, deputy director of research for the agency. “Bees can evade. Bees can choose routes and choose targets.” 

AI photo
The cover of the February 1989 issue of Popular Science featured a deadly new fighter plane and news in glues.

Some text has been edited to match contemporary standards and style.

The post From the archives: A forecast on artificial intelligence, from the 1980s and beyond appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: The promising new world of solar power—in the 1950s https://www.popsci.com/energy/solar-power-history/ Mon, 23 May 2022 13:00:00 +0000 https://www.popsci.com/?p=443855
Images from the March 1954 issue of Popular Science.
“Sun furnace goes to work” by Alden P. Armagnac appeared in the March 1954 issue of Popular Science. Popular Science

In the March 1954 issue of Popular Science, we explored the auspicious and suspicious new ways of harnessing the sun's energy.

The post From the archives: The promising new world of solar power—in the 1950s appeared first on Popular Science.

]]>
Images from the March 1954 issue of Popular Science.
“Sun furnace goes to work” by Alden P. Armagnac appeared in the March 1954 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Tapping the power of concentrated sunlight dates to antiquity. During the Siege of Syracuse in 212 BCE, legendary Greek mathematician Archimedes invented a solar death ray. Its polished bronze shields reflected sunlight onto approaching Roman ships to set them aflame—a feat that has been replicated in modern times.

By the time Popular Science published Alden P. Armagnac’s “Sun Furnace Goes to Work” in March 1954, the prolific writer and editor was well into his four-plus decades of covering the science beat. No stranger to telling sweeping stories with flair, Armagnac, a chemical engineer from Columbia University, sought to make science entertaining. He neglected to mention Archimedes in his tour of solar technology, but he did cite Lavoisier, a renowned 18th-century French chemist who used sunlit mirrors to melt metals.

Armagnac’s sun-drenched feature covers an array of emerging solar technologies, including a furnace in which “steel melts and drips like sealing wax over a flame;” photovoltaics, whose electric yield was so wretched in 1954 that a skeptic said, “Scientists must understand matter and energy much better before we can count on charging our automobile batteries from thermoelectric or photoelectric generators on the garage roof;” photosynthesis, used to grow a high-yield, single-cell algae food crop, about which Armagnac doesn’t mince words, “if our palate isn’t progressive enough to fancy such ultramodern fare, we can feed it to cattle or poultry, and be rewarded with old-fashioned steaks or fricassees;” and finally, sunlight-in-a-bottle, or capturing rays in test tubes to tap its catalytic characteristics: “Sunlight is known to produce various chemical reactions,” wrote Armagnac, “as when it turns blank film into a portrait of Aunt Ella.” Guess he wasn’t a fan of drinking UV light to cure disease.

“Sun furnace goes to work” (Alden P. Armagnac, March 1954)

A man-made inferno tries out materials for jet and rocket engines—and shows one way to capture free solar power.

Atop a 6,000-foot mountain near San Diego, Calif., they’re harnessing the sun to help build airplanes.

A solar furnace newly installed there focuses the sun’s rays, with a 10-foot-diameter mirror of polished aluminum, upon a spot smaller than a dime. It surpasses by far the temperature of the hottest blowtorch or electric furnace.

Researchers of the Consolidated Vultee Aircraft Corporation apply the sun furnace’s terrific heat to materials under trial for jet and rocket engines and for guided missiles. Aim of their experiments is to develop substances more resistant to heat and thermal shock than any yet known—stuff that won’t soften and flow, say, when a long-range missile screams back to the earth from dizzy altitudes.

That the possibilities are promising is shown by recent discovery of two super-refractories, hafnium carbide and tantalum carbide, with fantastically high melting points—7,530 and 7,020 degrees F., respectively. The first looks like the record for any substance known. For comparison, iron melts at a mere 2,800 degrees, and tungsten tops the list of metals at 6,100 degrees; while graphite, long the supreme heat-resisting material, turns from solid into vapor at about 6,600 degrees.

The California experimenters’ solar furnace, essentially an enormous burning glass, provides the most practicable way to explore this newly opening extreme-high-temperature realm. When sky conditions are ideal, it yields an estimated maximum of 8,500 degrees F. At the focus of the great mirror, this heat is concentrated in a spot 5/16 of an inch in diameter.
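The quoted geometry invites a quick back-of-the-envelope check. Treating the furnace as an ideal concentrator (no losses and perfect optics, both simplifying assumptions), the concentration factor is simply the ratio of mirror area to focal-spot area:

```python
# Ideal concentration factor from the figures quoted in the article.

MIRROR_DIAMETER_IN = 10 * 12   # 10-foot mirror, in inches
SPOT_DIAMETER_IN = 5 / 16      # focal spot from the article

concentration = (MIRROR_DIAMETER_IN / SPOT_DIAMETER_IN) ** 2
print(f"ideal concentration: about {concentration:,.0f} suns")
# roughly 150,000-fold, an upper bound ignoring losses, and enough
# to explain why firebrick burns through with ease
```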

Metal melts like butter on a stove

Its intensity burns a hole in firebrick with ease. Steel melts and drips like sealing wax over a flame, when a rod is held with its tip at the focal point. A movable cylindrical sunshade controls the temperature of thousands of degrees to within a degree or two, a triumph of precision. Equipped with an astronomical drive, the big mirror turns automatically to follow the sun, permitting experiments of hours’ duration.

Best of all, pure samples of materials can be subjected to the searing heat without contamination by foreign substances, like carbon in electric furnaces. And there are no electric and magnetic fields, nor fumes, to disturb reactions or hinder spectroscopic observation.

Science’s pioneers led way

In going to work for industry, the solar furnace has exchanged academic robes for overalls. For its advantages long were appreciated only by savants of pure science. Lavoisier and other great chemists of the past melted metals with solar furnaces, which made up in size whatever their lenses or mirrors lacked in optical perfection. Then the idea seems to have been forgotten, until recent years.

Abroad, French experiments that began a few years ago with a 78-inch searchlight mirror (PSM Aug. ’50, p. 122) have now led to what is probably the world’s largest solar furnace. Using a 40-foot-diameter composite mirror, a mosaic of small panes of window glass, this semi-industrial installation in the Pyrenees went into operation in 1952.

In this country, first practical use of a solar furnace appears to date back only a little earlier, to a little-known project of World War II. A 120-inch sun furnace was built for the AC Spark Plug Division of General Motors at Flint, Mich., with the cooperation of the Aluminum Company of America. Originally 16 reflecting sectors of quarter-inch sheet aluminum gave it a saucer-sized hot spot of up to 2,000-degree temperature and five-inch diameter. After the war, when it became surplus, it was moved to Rockhurst College in Kansas City, Mo., and used in scientific studies by its designer, Dr. Willi Conn. Having reshaped the mirror to obtain a smaller hot spot and much higher temperatures, he perfected the technique of controlling and measuring the extreme heat accurately.

New owner, final version

This is the sun furnace that Consolidated Vultee has now purchased, modified further to suit its new tasks, and put to work. Incidentally, moving the furnace southward in latitude from Kansas City to San Diego required a new mounting for the big mirror—it turns in its gimbal ring, like an astronomical telescope, on a “polar axis” parallel to that on which the earth turns.

To scientists looking into the future, solar furnaces illustrate just one of many ways to harness the sun. The tantalizing fact is that a full horsepower of solar energy, free for the taking, falls at midday on each square yard of the earth. What progress experimenters are making toward capturing it was recently reviewed by Dr. George R. Harrison, dean of the MIT School of Science.
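Taking that square-yard figure at face value, a quick calculation shows what the San Diego furnace’s own mirror intercepts at midday (unit conversion only; no collection efficiency is assumed):

```python
# One horsepower per square yard, applied to a 10-foot mirror.

import math

HP_PER_SQ_YD = 1.0
radius_yd = 5 / 3                         # 5-foot radius in yards
area_sq_yd = math.pi * radius_yd ** 2     # about 8.7 square yards
power_hp = HP_PER_SQ_YD * area_sq_yd

print(f"mirror area: {area_sq_yd:.1f} sq yd")
print(f"midday solar power intercepted: {power_hp:.1f} hp "
      f"(about {power_hp * 746:.0f} watts)")
```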

India has solar cooker

Devices using the sun’s heat directly, as the solar furnace does, represent the simplest approach and the most successful to date. India has perfected a solar cooker, which a government agency now sells for $14, using a circular mirror of yard-square area. Solar stills on life rafts make drinking water from sea water. Experimental solar houses, heated mostly or entirely by the sun, show promise, and will look much more interesting if the necessary investment—still higher than for an ordinary heating plant—can be reduced. For heating homes and hot water, mirrors or lenses aren’t needed. Temperatures up to 400 degrees F. can be obtained in flat-plate collectors, essentially glass-covered boxes lined with black-painted metal, through which water or air circulates to be heated. Steam engines can be run on solar power, as Dr. Charles G. Abbot of the Smithsonian Institution has demonstrated in his pioneering experiments. So far, though, it’s an expensive way—an Abbot boiler with a mirror large enough to produce two horsepower would probably cost about $1,000. Heat engines that could run efficiently at lower temperature than conventional types, dispensing with the mirrors’ cost and complications, would be another story. That’s something for inventors to work on.

Sunshine into electricity?

A miniature “sun motor” exhibited not long ago by Charles F. Kettering, General Motors research ace, demonstrates the future possibility of turning sunshine right into electricity. Enough current to spin it is generated when a candle flame heats a tube bristling with twisted wires, or when a lamp’s beam is directed on a bank of photovoltaic cells.

Twist together the ends of pieces of copper and silver wire, heat one of the two junctions, and current will be generated. This is the effect that a thermocouple applies to operate the temperature-indicating meter of an electrical pyrometer. Voltage and current can be multiplied to substantial figures, by connecting many thermocouples and heating one set of junctions. So it’s easy, in imagination, to picture a solar power station where acres of thermocouples turn the sun’s rays into free kilowatts. The catch is their notoriously low efficiency as energy converters, although improved thermocouple alloys have lately raised the figure considerably.
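As a rough illustration of that thermopile arithmetic, the sketch below strings many junctions in series. The Seebeck coefficient used (about 40 microvolts per degree Celsius, typical of common metal pairs), the junction count, and the temperature difference are all assumed values for the example, not figures from the article.

```python
# Thermopile arithmetic: junction voltages add in series.

SEEBECK_V_PER_DEG_C = 40e-6  # assumed coefficient for one junction pair
junctions = 1000             # "connecting many thermocouples"
delta_t_deg_c = 300          # hot junctions 300 degrees above cold ones

voltage = junctions * SEEBECK_V_PER_DEG_C * delta_t_deg_c
print(f"series voltage: {voltage:.0f} V")   # about 12 V from 1,000 pairs
```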

Unlike photocells that merely control current from an external source, photovoltaic cells actually generate current when light falls upon them. Camera fans’ light meters employ photovoltaic cells. While a selenium or copper-oxide cell of this type may convert as much as 12 percent of light of some wavelengths into electricity, its low overall efficiency compares with that of a thermocouple. Eventually the figure may be bettered but, as Dr. Harrison comments wryly, “Scientists must understand matter and energy much better before we can count on charging our automobile batteries from thermoelectric or photoelectric generators on the garage roof.”

Improving on nature’s way

There remains the chemical route to harnessing solar energy—tried and proved by nature, long before man came along to puzzle over the problem. All but five percent of the energy we use, including the coal we burn and the food we eat, has at some time been stored by photosynthesis in plants, which captured it from the sun. Can we do better than plant wheat, or corn, or sugar cane, on farms and plantations, and let nature take its course? Some think so.

It may have been a preview of the future when a crop consisting of 100 pounds of dried, microscopic plants was harvested at Cambridge, Mass., not long ago, for the Carnegie Institution of Washington.

The odd product consisted of myriads of single-celled algae of a kind called Chlorella. They had been grown in water containing suitable salts, circulating with carbon-dioxide gas in a plastic tube, while sunshine poured in through the tube’s walls.

Experts predict an acre’s yield of food could be multiplied manyfold by growing Chlorella in tanks, instead of planting the soil at all. The dried or frozen product, we’re assured, forms a nourishing paste with “a delicate grassy flavor.” And if our palate isn’t progressive enough to fancy such ultramodern fare, we can feed it to cattle or poultry, and be rewarded with old-fashioned steaks or fricassees in abundance.

Cerium salts get into the act

Finally, the chemists are talking about bypassing biological processes entirely and capturing sunshine right in flasks and test tubes! Sunlight is known to produce various chemical reactions, as when it turns blank film into a portrait of Aunt Ella. More promising for delivering useful quantities of energy is a photochemical reaction exhibited by salts of cerium dissolved in water, which Prof. Lawrence J. Heidt of MIT has been investigating.

The cerium salts’ ions can take on two forms, called cerous and ceric. When sunlight acts on the solution, they change from one form to the other, and then back again. This might seem to get nobody anywhere. But, in the process, some of the water decomposes into elementary hydrogen and oxygen. The overall result is that, with the cerium salts playing the role of catalyst, sunlight breaks down water into gases that can be burned for fuel.
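Schematically, the net result described here (with the cerium ions merely cycling between their cerous and ceric forms) is ordinary water splitting. The balanced equation below is standard stoichiometry rather than anything specified in the article:

```latex
% Net photochemical reaction, cerium salts acting only as catalyst:
2\,\mathrm{H_2O}
  \;\xrightarrow{\ h\nu,\ \mathrm{Ce^{3+}/Ce^{4+}}\ }\;
  2\,\mathrm{H_2} + \mathrm{O_2}
```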

We’re still a long way from drawing the plans for a solar-powered gasworks. One percent would be a generous estimate of this reaction’s efficiency in converting solar to chemical energy. But it’s a breakthrough on a new front, one more promising approach for future experimenters to explore.

Renewables photo
The cover for the March 1954 issue of Popular Science, depicting speedy new subway trains and a tiny, DIY tractor.

Some text has been edited to match contemporary standards and style.

The post From the archives: The promising new world of solar power—in the 1950s appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: Rube Goldberg machines are serious business https://www.popsci.com/technology/rube-goldberg-inventions/ Fri, 20 May 2022 11:00:00 +0000 https://www.popsci.com/?p=443767
Images from the June 1923 issue of Popular Science featuring Rube Goldberg.
“Why I am an inventor (Do I hear a laugh?)” by Rube Goldberg appeared in the June 1923 issue of Popular Science. Popular Science

In the June 1923 issue of Popular Science, Rube Goldberg himself writes that he hopes to 'invent something useful.'

The post From the archives: Rube Goldberg machines are serious business appeared first on Popular Science.

]]>
Images from the June 1923 issue of Popular Science featuring Rube Goldberg.
“Why I am an inventor (Do I hear a laugh?)” by Rube Goldberg appeared in the June 1923 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Before the name Rube Goldberg became synonymous with comically over-engineered chain-reaction contraptions that satirize technology, the eponym belonged to a humble engineer turned cartoonist. Born in 1883, Reuben Lucius Goldberg lived long enough to watch the world transform from horse-and-buggies to lunar landers. “We all want to invent something useful,” Goldberg claimed in a humorous but thoughtful piece for Popular Science in June 1923

In 1923, Goldberg, who’d earned a degree at UC Berkeley’s College of Mining and had taken jobs as a sewer designer and a sportswriter, was already a famous and well-compensated New York cartoonist. “My knowledge of science and mechanics is largely responsible for my progress as a cartoonist,” he wrote in Popular Science, feeling compelled to defend his work and affirm his love of engineering and invention. 

In his own words in 1923—and without the benefit of knowing that he would have another nearly half century of cartooning before him—Goldberg lamented, “I still have hopes of inventing something useful.” In hindsight, however, it becomes clear that Goldberg’s keen lens—which he used to comment on everything from nuclear weapons to World War II—was his most useful invention. At a time when technology often seems to have run amok, precariously tilting the scales of society and politics, we could use that perspective. 

“Why I am an inventor (Do I hear a laugh?)”  (Rube Goldberg, June 1923)

“I lampoon inventions because I love ’em,” says famous cartoonist. His great ambition is to invent something useful.

With me, science and invention are a serious business. I have made them a serious business because I recognize the fundamental human interest in such subjects.

Every man should have some knowledge of science and mechanics. 

It is as helpful as a knowledge of law in business. To be able to turn a small screw on a typewriter may save many valuable minutes in a busy office. Ability to remedy a simple matter of ventilation may speed up the work of an establishment. Then the knowledge of some scientific principle may enable a man to put a new and useful product on the market—may make him rich.

He’s a mechanic

My knowledge of science and mechanics is largely responsible for my progress as a cartoonist. When I was studying mining engineering at the University of California, I took up analytical mechanics. I was introduced to a machine, invented by one of the professors, used to determine the weight of the Earth.

This machine amused me, as it did every other student in the class, and I began to draw pictures of machines of my own that I thought were useless. These fantastic drawings were the beginnings of my career as a cartoonist.

Practically every American man likes to work with tools. I have this leaning toward mechanics and I have taken it into my work. The response has surprised me. It has proved to me that we are living in an age of science and mechanism.

One of my useless inventions was a mechanical music turner. Every bashful man who has had to stand up before visitors and turn music for his wife or sweetheart will sympathize with my attempt to do away with this embarrassing nuisance. My idea was to have a foot pedal connecting with an arm for turning the pages. Of course it has not been put on the market. But maybe, some time, it will be.

The average man dislikes to carry an umbrella. Many throw them away as soon as it stops raining. Once I conceived the idea of inventing a folding umbrella that could be put in the pocket when not in use. It has never been perfected, but I still think it is a good idea.

Do I hear a loud laugh?

I have taken my place also among the thousands of Americans who have dreamed of a non-skid device for automobiles. My idea was to have a fifth wheel equipped with chains that could be dropped to the pavement beneath the car. I was surprised to find that two others had had the same idea before me.

The thousands of men working on inventions in the country today get a lot of enjoyment seeing fantastic drawings of mechanical things. Why? Because they see the humorous side of many of their own ideas. And I’m not convinced that I do not offer usable ideas now and then. Even the man who has not tried his hand at invention generally has a home workshop. What is the first thing he shows a visitor? Usually it is some little contrivance he has rigged up. He is proud of it because it shows he has some knowledge of mechanics. And he is always ready to laugh at one of my crazy mechanical cartoons.

When a child breaks a toy, it is up to the father to fix it, or lose his reputation. A cartoon on the subject is good for a laugh in nearly every American home.

And what man hasn’t had the idea of inventing something to automatically stoke, shake, and clean his furnace? Some elaborate contrivance for doing so, pictured in a cartoon, is sure to tickle him. Usually he is sport enough to laugh at his pet theories.

Crazy as some of my mechanical cartoons are, most of them are mechanically possible. The same is true of nearly every invention.

I still have hopes of inventing something useful. Perhaps I may yet come across the big idea in working out some of my foolish cartoons. The field is wide and strange things happen.

From the archives: Rube Goldberg machines are serious business
This June 1923 cover of Popular Science Monthly depicts a nautical, cinematic adventure.

Some text has been edited to match contemporary standards and style.

The post From the archives: Rube Goldberg machines are serious business appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: A grand tribute and eulogy for Zeppelins https://www.popsci.com/technology/zeppelins-hindenburg/ Thu, 19 May 2022 11:00:00 +0000 https://www.popsci.com/?p=443741
Images from the May 1962 article in Popular Science about Zeppelins.
“The biggest birds that ever flew” by A. A. Hoehling and Martin Mann appeared in the May 1962 issue of Popular Science. Popular Science

In the May 1962 issue of Popular Science, we explored this luxurious trend of aviation and its possible end with the Hindenburg disaster.

The post From the archives: A grand tribute and eulogy for Zeppelins appeared first on Popular Science.

]]>
Images from the May 1962 article in Popular Science about Zeppelins.
“The biggest birds that ever flew” by A. A. Hoehling and Martin Mann appeared in the May 1962 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Reading the harrowing tales of death and destruction in Popular Science’s May 1962 story, “The Biggest Birds That Ever Flew,” it’s hard to imagine why so many passengers traveled so willingly aboard hydrogen-filled dirigibles. Part tribute, part eulogy, the 1962 story—penned by military historian A. A. Hoehling and Popular Science editor Martin Mann, and published on the 25th anniversary of the Hindenburg disaster—even opens with an epitaph: “The sky has never seen anything to match the giant Zeppelins. Like luxurious airborne hotels—with promenades, staterooms, dining salons, showers—they swiftly flew passengers over oceans. Yet tragedy flew with them until final disaster, 25 years ago this month, sealed their doom.”

In their heyday, the great Zeppelins were the way to travel, making transatlantic crossings in mere days and carrying passengers in luxurious accommodations. There was no shortage of imagination about how airships might be put to use, even as floating laboratories. But the Hindenburg disaster and the destruction of German airship facilities during WWII ended their reign. Or did they?

To a world obsessed with speed, resurrecting airships might seem counterintuitive. But 85 years after the Hindenburg went down in flames, the vessels may have a part to play in a greener air-travel future. A series of companies have plans to enter the market, leveraging non-flammable helium and sporting design advances like mooring-free landing gear and incredible fuel efficiency. Aviation giants like Lockheed Martin have built prototypes but appear to be waiting for relative newcomers like UK-based Hybrid Air Vehicles and Sergey Brin’s LTA Research to kickstart the big-bird market. Besides being greener than airplanes and freighters, airships don’t require runways or harbors, liberating them to come and go from remote or disaster-stricken regions. There’s also the luxury travel aspect: Beginning as soon as 2024, OceanSky Cruises plans to offer roundtrip voyages to the North Pole from Norway’s Svalbard islands and a separate air expedition above the African continent.

“The biggest birds that ever flew” (A. A. Hoehling* and Martin Mann, May 1962)

The sky has never seen anything to match the giant Zeppelins. Like luxurious airborne hotels—with promenades, staterooms, dining salons, showers—they swiftly flew passengers over oceans. Yet tragedy flew with them until final disaster, 25 years ago this month, sealed their doom. 

They were unbelievably long, as much as a sixth of a mile. Their shadows darkened several city blocks. They held gas enough to heat a small town for months.

In the caverns of their compartments they carried, with space to spare, dozens of passengers and colorful loads of bulky cargo: circus animals, sports cars, even airplanes. Voyagers paced their promenade decks, stretched out in smoking lounges, even sang in shower baths.

The Zeppelins looked like whales and handled like submarines. But the sky was home.

They were the biggest birds that ever flew. There had been nothing remotely like them before they came. There has been nothing remotely like them since the last died in flaming public death. That happened just 25 years ago this month. Yet already one of the boldest achievements of aviation science is nearly forgotten.

On Thursday, May 6, 1937, the great gray bird droned over the eastern coast of the United States, inbound at the start of her second season of regular transatlantic service. The day was warm and stormy. She was already 10 hours late. And now she had to stooge over the Jersey beaches, awaiting the forecasted clearing of the weather. This was the largest and most extravagant aircraft ever flown. Her builders had labeled her LZ-129—the 129th Luftschiff (airship) Zeppelin—and christened her Hindenburg (after the World War I field marshal who was conned by Hitler into surrendering control of Germany).

In the staterooms, impatient passengers tidied up their valises.

At the Lakehurst, N. J., landing field waited a corps of reporters, photographers, even a special radio-broadcasting crew. Supervising ground operations was the U.S. Navy’s foremost lighter-than-air expert, Cmdr. Charles E. Rosendahl.

Now, in the twilight shortly after 7 p.m., the Hindenburg ponderously nuzzled up to the mooring mast.

In 1937, this was the way to travel—the quickest, most comfortable transatlantic crossing possible. The fastest ocean liners took nearly twice as long. Commercial airplane flights were still two years in the future.

The Hindenburg had departed Frankfurt on Monday, May 3, to the customary fanfare of glowing press notices. Aboard climbed the passengers, surrendering their matches and cigarette lighters as they entered: Mrs. Marie Kleeman bound for a visit with her daughter in Massachusetts, Joseph Spah, an acrobat returning from European engagements, Poetess Margaret Mather flying home to New Jersey, and 33 others. The crew was headed by the veteran Luftschifführer Kapitän Max Pruss.

There was no foreboding of historic tragedy as the command “Up ship!” resounded. This was a gay adventure. If you were a Very Important Passenger, you could count on a tour of the fantastic ship. It was an opportunity not to be missed, for the Hindenburg was a masterpiece of engineering.

The sheer size made your jaw drop. This Zeppelin was enormous: 185 feet across the middle and 804 feet in length. From stern to bow she extended more than three city blocks. If stood on end, she would have reached the 67th floor of the Empire State Building, and towered over the Washington Monument.

The inside of this monstrous football was equally impressive. You walked to the nose along the Kiellaufgang—a narrow aluminum catwalk atop the keel girder. There was no railing; except for a few guidelines, only a maze of cross-bracing wires and the thin fabric of the hull separated you from the Atlantic Ocean 600 feet below.

From the nose, you looked back on the elaborate blue-painted skeleton—“It seems like a cathedral,” one captain had rhapsodized. The lateral support for the fabric skin was 50 aluminum rings (not truly round, but 36-sided polygons), graduated in size from the fat middle to the pointed bow and stern. Holding the rings were 35 flat girders running lengthwise, and an interlocking cobweb of steel wires. It took 5,500,000 rivets just to fasten the rings to the girders.

The Hindenburg was fatter across her midsection than previous Zeppelins—the Shenandoah had snapped in two, indicating the need for strength amidships. But the heftiest framework supported the bow, for it hooked onto the mooring mast and had to hold, no matter how gusty the conditions on the ground.

This giant craft did not fly like a bird or an airplane. It floated in the air. The buoyancy came from 16 separate gas cells—tremendous bags that were shaped like gigantic pairs of pants. From below you saw only the floppy “pants legs.” These gas cells pushed up against the “ceiling” of the airship (a rope net kept the cells from chafing against the hull).

Your tour guide would avoid mentioning it, but those gas cells contained 7,000,000 cubic feet of hydrogen, the lightest gas known—and also the most powerfully explosive. U. S. airships used helium, not quite so buoyant but not at all inflammable. Germany had no helium. Already the black clouds of World War II loomed, and Americans were in no mood to supply rare strategic material to a future enemy.

The Hindenburg’s designers understood the danger. Chimneylike Gasschachte (shafts) vented any seeping hydrogen to the outside of the hull. You caught sight of riggers, wearing buttonless asbestos suits and felt-soled shoes to avoid any chance of static sparks, inspecting those shafts. They also checked the gas cells—they walked right through them along the Mittellaufgang, the hull-bracing axial catwalk that pierced the cells by way of little canvas tunnels.

Walking aft past the officers’ quarters, you came to the Führergondel, the control car. Window-walled, roomy, and impressive, it resembled the bridge of a ship.

Right away, you noticed that it took two men to steer a Zeppelin. The rudderman, facing forward, kept her on course with his giant wheel. The elevatorman faced sideways, watching an inclinometer and altimeter to keep her at the charted altitude. The up-and-down steersman had an unusual and valuable instrument: a crude forerunner of today’s radar altimeter. It was a compressed-air whistle. By timing the beep-beep echoes bounced back from the surface below, he could tell exactly how high he was.
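
The arithmetic behind that whistle is plain echo ranging, easy to sketch in a few lines of modern Python (an illustration of ours, not period equipment; the speed-of-sound figure is an assumed sea-level value):

```python
# Echo ranging, as the Hindenburg's compressed-air whistle used it:
# the beep travels down to the surface and back, so height is half
# the round-trip time multiplied by the speed of sound.

SPEED_OF_SOUND_M_S = 340.0  # assumed sea-level value; varies with temperature

def altitude_from_echo(round_trip_seconds: float) -> float:
    """Height above the surface, in meters, from a timed echo."""
    return SPEED_OF_SOUND_M_S * round_trip_seconds / 2.0

# Cruising about 600 feet (183 m) up, as described above, the ship
# would hear its beep return after roughly a second:
print(altitude_from_echo(1.08))  # 183.6 m, about 600 feet
```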

A precise measure of altitude was vital in dirigibles, for they cruised ridiculously low by airplane standards—usually the height above the surface was less than the ship’s length. This was a hazard: A vagary of wind might slam the tail down to disaster (the Akron apparently crashed just that way). However, high altitudes were uneconomical: Too much gas had to be expelled to come down again.

Beyond the Führergondel you came to passenger country. It was spectacular—an amazing replica of first-class ocean-liner accommodations, extending all the way across the width of the ship and one-third the depth up from the keel.

There were two decks. The main deck had promenades on either side lined with wide, slanting windows and overlooked by a lounge and the dining salon (hot biscuits, baked fresh in the galley, were a specialty).

Off the foyer on A deck was a narrow corridor leading to the 25 Fahrgasträume. Each stateroom had two bunks, a stool, folding shelf, fold-up plastic washbasin, mirror, and electric light.

You could even smoke aboard this airship. The bar was sealed off by double doors, which the steward unlocked when you rang the bell. Here the air pressure was maintained slightly above that in the rest of the ship so that no stray hydrogen could possibly leak inside. The smokers lit up with electric lighters (matches were verboten anywhere aboard).

If you wanted to take a shower (imagine that aboard a jet airliner!), you went down to the Badezimmer on B deck. It gave a trickle of water until an automatic shutoff unmistakably told you “time’s up.” Water was too heavy to be carried in lavish supply—they augmented tank storage by collecting rain and dew that ran off the Hindenburg’s four-acre back.

Everywhere, ingenious touches economized on weight. Each extra pound meant 13 more cubic feet of hydrogen. You could lift any of the chairs with a finger of one hand. You needed two hands to raise the piano, which was made of aluminum. The partitions—even stateroom walls—were canvas; it was like living in a many-roomed tent.
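
That 13-cubic-feet figure can be sanity-checked against standard sea-level gas densities (the density values here are our assumptions, not from the article):

```latex
% Net lift per cubic foot = weight of displaced air minus weight of hydrogen:
0.0765\ \tfrac{\text{lb}}{\text{ft}^3} - 0.0053\ \tfrac{\text{lb}}{\text{ft}^3}
\;\approx\; 0.071\ \tfrac{\text{lb}}{\text{ft}^3},
\qquad\text{so}\qquad
\frac{1\ \text{lb}}{0.071\ \tfrac{\text{lb}}{\text{ft}^3}} \;\approx\; 14\ \text{ft}^3.
```

That lands within a cubic foot of the article’s figure; the exact number depends on air temperature and how full the gas cells are.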

Beyond the passenger quarters stretched two-thirds of the giant craft. You walked past three crew foc’sles, one major and 14 lesser freight rooms, two dozen lockers for ship’s gear, 15 water-ballast tanks, 42 tanks storing 64 tons of diesel fuel.

Each of the four engines—1,100-hp. Mercedes-Benz diesels driving 20-foot four-bladed wooden propellers—was carried with its operator in a Motorgondel, a little car hanging outside the hull. You climbed into it by a narrow ladder leading down from the lower catwalk. Inside, the roar was deafening—the telephone connection to the control car was useless, and instructions had to be signaled over an engine telegraph like those in steamships.

The Hindenburg could make 84 knots top, and cruise at 77 knots—not too far behind commercial airplanes of the day. She also had something no airplane ever has—a spare engine stowed in a freight compartment.

At the very stern, inside the huge under fin, was a retractable tail wheel, similar to one under the control car. At Lakehurst the tail wheel rested on a flat car that rolled around a circular track, allowing the airship to turn with the wind when she was tethered to her mooring mast. Huge, complex, and beautiful, the Hindenburg was the supreme creation of the Zeppelin builder’s art. Safe, too. Her designer, Dr. Ludwig Duerr, had boasted that she was as fireproof as man knew how to make any vehicle of transport.

If anyone knew how to build and fly airships, it was the Teutons from Friedrichshafen. Count Ferdinand von Zeppelin had built the first practical dirigible in 1900 (these things were giants from the start—the LZ-1 stretched 420 feet). Ten years later he was hauling passengers in the world’s first commercial air transport. By the time World War I shut down the Deutsche Luftschiffahrt A.G., it had established a most respectable record: 34,228 passengers, 144,000 miles, no deaths, no injuries.

The Germans flew 72 Zeppelins during World War I and sent them on 311 bombing raids. The bomb casualties in England alone came to 1,882 people, not counting a very substantial number hurt by falling shells from the Britons’ own ack-ack. The biggest of these warcraft, the 700-foot L-72, was poised to cross the Atlantic and strike New York, but peace came just in time.

The victorious Allies, impressed by this record, took over the Luftschiffabteilung’s Zeppelins, and rushed to build more of their own. A decade and a half of disaster followed.

In 1921, the ZR-2, built for the U.S. Navy by the Royal Airship Works in England, broke its back and burned, killing 62.

In 1923, the Dixmude (the old L-72, seized and renamed by the French) disappeared on a flight to Africa. The only trace ever found was the body of her captain, Commander du Plessis de Grenedan, pulled out of the Mediterranean by fishermen.

In 1925 the Shenandoah, an American-made copy of the German L-49, broke up in a squall over Ohio, killing 14.

In 1930 the R-101, pride of Britain, exploded against a hillside at Beauvais, France, killing 47 (including the Secretary of State for Air, the Director of Civil Aviation, and most of the Empire’s airship experts).

In 1933 the U.S. Navy’s Akron, which could launch airplanes like an airborne aircraft carrier, plunged into the Atlantic off Barnegat, N. J., killing 73.

In 1935 the Macon, sister ship to the Akron, broke her stern and fell into the Pacific, killing two.

That did it for everybody except the Germans. Back in Friedrichshafen things had gone swimmingly.

In the autumn of 1928 the Graf (Count) Zeppelin—the LZ-127, 774 feet long, weighing 66 tons, able to haul a payload of 20 passengers and 13 tons cargo—inaugurated commercial service. She followed a southern track to America, averaging not quite 60 miles an hour: 6,000 miles from Friedrichshafen to Lakehurst in four days and 16 hours.

The New York Times gave nearly 10 pages to the story.

The following year the Graf flew around the world. In 1930, service to South America began. By 1936, she had transported 13,000 passengers on 575 trouble-free flights.

Yet the crews became unbelievably careless. They smuggled contraband. They even sneaked cigarettes on catwalks, hiding behind bags billowing with touchy hydrogen.

On one journey from South America, crewmen secreted monkeys in the hull. The monkeys escaped and swung, chattering and scolding, from girder to girder until the ship landed. Another time, tropical fruit, tucked high in the framework, dripped sticky juice on all who passed below. Cameras and radios, a special hazard because they might contain spark-causing batteries, were conveniently concealed in the folds of the floppy gas cells.

Nonetheless, the Graf’s phenomenally charmed life held (she and the U.S. Navy’s Los Angeles, also German-built, were eventually dismantled). The Graf was west of the Canary Islands, homeward bound from South America, as the Hindenburg prepared to moor that thunderstormy afternoon 25 years ago.

At Lakehurst, Lt. Raymond F. Tyler and Chief “Bull” Tobin—both lighter-than-air pros—directed the ground crew. They had rolled out the 75-foot tripod mast and deployed the line handlers.

Theirs was a delicate task. It was up to Kapitän Pruss to “weigh off” his Hindenburg: get it nearly level and aerostatically balanced by valving off or adding gas into the various sections, depending on whether the ship needed to be heavier or lighter. But even after a perfect weigh-off, it took more than 200 strong men to haul the balky colossus down from the sky. Troops from Camp Dix had been drafted to help 138 civilian and 92 Navy linesmen. The least gust of wind could—and often did—send the airship bounding like a kangaroo hundreds of feet skyward. On other occasions rope handlers had been lifted before they could let go, then dropped to their doom.

The Hindenburg swept in over the south fence at a brisk 73 knots, 590 feet high.

“What a sight it is!” exulted Herb Morrison, the Chicago radio commentator who was making an eyewitness recording on the field. “The sun is striking the windows of the observation deck and sparkling like glittering jewels on black velvet….”

Kapitän Pruss crossed the field and turned to come in, valving gas from forward cells, dumping water ballast from the stern, shifting crewmen for an exact balance.

At 7:21 p.m., the first handling rope hit the ground.

In the passenger compartment, photographer Otto Clemens leaned out a window and worked his Leica to record the action below. He did not know it until his film was developed days later, but his negative showed flame reflected in rain puddles on the ground.

A bystander, Cage Mace, recalled later, “A shower of sparks shot up from the top of the bag and to the rear, followed instantly by a column of yellowish flame…”

Above, passengers tumbled, one atop the other, a mass of shrieking, crying people.

Joseph Spah, the acrobat, knocked out a window, climbed through, and dangled outside by one hand. When the ship started falling, he dropped—hard enough to bounce.

Miss Mather was pulled out of the crumpling, flaming cabin by ground crewmen.

Frau Kleeman just walked down the debarkation stairs.

In half a minute, 35 people were killed or fatally hurt.

Even today, 25 years later, your back chills when you listen to the recording of newscaster Morrison’s sobs: “…Get this, Charlie, get this. Charlie… It is burning. Oh, the humanity and all the passengers!”

More than humanity perished that warm May evening. It was the end of an era. The great airships had become a part of history.

Official investigations arrived at the “least improbable” conclusion: Static electricity had ignited leaking hydrogen. This verdict was not very convincing then, and is less so now. New evidence points to sabotage by a crew member allied with the Communist anti-Nazi underground (it’s a complex story detailed in the book Who Destroyed the Hindenburg? by A. A. Hoehling, Little, Brown & Co., Boston).

But one more Zeppelin flew: the LZ-130. She cruised the English Channel, ferreting out British radars before World War II, but was ignominiously scrapped for her aluminum.

If you visit Friedrichshafen now, you can see the ruins of the Luftschiffbau, leveled by bomb attacks. Weeds wave above rubble—jagged headstones of the Zeppelin’s own burying ground.

Until his death in 1960, Max Pruss had campaigned for a new airship company. He came close to winning approval for a 150-passenger Zeppelin even bigger than the Hindenburg. In the United States, Prof. Francis Morse of Boston University has blueprinted an atomic-engined dirigible—without much encouragement from anyone who might build it.

The plain facts of transportation explain why. A jet airliner can fly the Atlantic in six hours instead of 60. It can carry three times as many passengers each trip as the Hindenburg did. It costs only a fraction as much to build.

The biggest birds that ever flew are gone—extinct as dinosaurs and pterodactyls, and no more likely to return.

The May 1962 cover of Popular Science featuring new cars, new jets, and “picture tubes.”

*Author of Who Destroyed the Hindenburg?

Some text has been edited to match contemporary standards and style.

The post From the archives: A grand tribute and eulogy for Zeppelins appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: Inside the tantalizing quest to sense gravity waves https://www.popsci.com/science/gravity-waves-search/ Wed, 18 May 2022 11:00:00 +0000 https://www.popsci.com/?p=443702
Images from April 1981 issue of Popular Science.
“The tantalizing quest for gravity waves” by Arthur Fisher appeared in the April 1981 issue of Popular Science. Popular Science

In the April 1981 issue of Popular Science, we explored the many initiatives and techniques used in the exciting hunt for sensing gravity waves, then out of reach.

The post From the archives: Inside the tantalizing quest to sense gravity waves appeared first on Popular Science.

]]>
Images from April 1981 issue of Popular Science.
“The tantalizing quest for gravity waves” by Arthur Fisher appeared in the April 1981 issue of Popular Science. Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

When two black holes collided 1.3 billion years ago, the impact released three suns’ worth of energy into the fabric of spacetime. On a Monday in 2015, at a remote facility in Hanford, Wash., researchers detected that ancient cosmic impact as its effects swung past Earth. They mapped its gravity wave, whose length was almost incomprehensibly small (1/10,000th the diameter of a proton), to audio and heard a whoop. That tiny soundtrack was more than 100 years in the making.

Physicists had been seeking ways to detect gravity waves—ripples in spacetime caused by massive events—ever since Einstein predicted their existence in 1916. In an April 1981 article, Popular Science’s editor, Arthur Fisher, described the hunt for gravity waves, calling it “one of the most exciting in the whole history of science.” The Laser Interferometer Gravitational-Wave Observatory, or LIGO, was responsible for sensing the 2015 wave, but, as Fisher explains, in 1981 it was just one among many competing initiatives, each pursuing a different measurement technique. 

Rainer Weiss, MIT physics professor (now emeritus), and Kip Thorne (Caltech) were among the many scientists Fisher met and interviewed. Weiss devised the laser interferometer’s design in the 1970s and later teamed up with Thorne and Barry Barish to build LIGO (all three earned the 2017 Nobel Prize in Physics for their efforts). Ever since that first cosmic whoop in 2015, LIGO has detected 90 different gravitational wave events.

In his story, Fisher describes the far-out wilds of space responsible for shaking space-time, including starquakes, gamma-ray bursts, and ticking neutron stars (pulsars). But it was Weiss, shortly after his device detected its first gravity wave in 2015, who captured space’s turbulence best: “monstrous things like stars, moving at the velocity of light, smashing into each other and making the geometry of space-time turn into some sort of washing machine.” 

“The tantalizing quest for gravity waves” (Arthur Fisher, April 1981) 

When scientists finally detect a form of energy they have never seen, they will open a new era in astronomy.

In the vast reaches of the cosmos, cataclysms are a commonplace: Something momentous is always happening. Perhaps the blazing death of an exhausted sun, or the collision of two black holes, or a warble deep inside a neutron star. Such an event spews out a torrent of radiation bearing huge amounts of energy. The energy rushes through space, blankets our solar system, sweeps through the Earth… and no one notices.

But there is a small band of experimenters, perhaps 20 groups worldwide, scattered from California to Canton, determined that some day they will notice. Pushed to the edge of contemporary technology and beyond, battling the apparent limits of natural law itself, they are developing what will be the most sensitive antennas ever built. And eventually, they are sure, they will detect these maddeningly intangible phenomena—gravity waves.

Even though gravity waves (more formally called gravitational radiation) have never been directly detected, virtually the entire scientific community is convinced they exist. This assurance stems, in part, from the bedrock on which gravity-wave notions are founded: Albert Einstein’s theory of general relativity, which, though still being tested, remains untoppled [PS, Dec. ‘79]. Says Caltech astrophysicist Kip Thorne, “I don’t know of any respectable expert in gravitational theory who has any doubt that gravity waves exist. The only way we could be mistaken would be if Einstein’s general relativity theory were wrong and if all the competing theories were also wrong, because they also predict gravity waves.”

In 1916, Einstein predicted that when matter accelerated in a suitable way, the moving mass would launch ripples in the invisible mesh of space-time, tugging momentarily at each point in the universal sea as they passed by. The ripples—gravity waves—would carry energy and travel at the speed of light. 

In many ways, this prediction was analogous to one made by James Clerk Maxwell, the brilliant British physicist who died in the year of Einstein’s birth—1879. Maxwell stated that the acceleration of an electric charge would produce electromagnetic radiation—a whole gamut of waves, including light, that would all travel at the same constant velocity. His ideas were ridiculed by many of his contemporaries. But a mere decade after his death, he was vindicated when Heinrich Hertz both generated and detected radio waves in the laboratory.

Why, then, more than 60 years after Einstein’s bold forecast, has no one seen a gravity wave? Why, despite incredible obstacles, are physicists still seeking them in a kind of modern quest for the Holy Grail, one of the most exciting in the whole history of science?

To find out, I visited experimenters who are building gravity-wave detectors and theoreticians whose esoteric calculations guide them. In the process, I learned about the problems, and how the attempts to solve them are already producing useful spinoffs. And I learned about the ultimate payoff if the quest is successful: a new and potent tool for penetrating, for the first time, what one physicist has called “the most overwhelming events in the universe.”

A kiss blown across the Pacific

The fundamental problem in gravity-wave detection is that gravity as a force is feeble in the extreme, some 40 orders of magnitude weaker than the electromagnetic force. (That’s 10⁴⁰, or a 1 followed by 40 zeros.)

Partly for this reason, and partly because of other properties of gravity waves, they interact with matter very weakly, making their passage almost imperceptible. And unlike the dipole radiation of electromagnetism, gravitational radiation is quadrupole.

If a gravity wave generated, for example, by a supernova in our galaxy passed through the page you are now reading, the quadrupole effect would first make the length expand and the width contract (or vice versa), and then the reverse. But the amount of energy deposited in the page would be so infinitesimal that the change in dimension would be less than the diameter of a proton. Trying to detect a gravity wave, then, is like standing in the surf at Big Sur and listening for a kiss blown across the Pacific. As for generating detectable waves on Earth, a la Hertz, theoreticians long ago dismissed the possibility. “Sure, you make gravity waves every time you wave your fist,” says Rainer Weiss, a professor of physics at MIT. “But anything you will ever be able to detect must be made by massive bodies moving very fast. That means events in space.”

Astrophysicists have worked up whole catalogs of such events, each associated with gravity waves of different energy, different characteristic frequencies, and different probabilities of occurrence. They include the supposed continuous background gravitational radiation of the “big bang” that began the universe [PS, Dec. ‘80], and periodic events like the regular pulses of radiation emitted by pulsars and binary systems consisting of superdense objects. And then there are the singular events: the births of black holes in globular clusters, galactic nuclei, and quasars; neutron-star quakes; and supernovas.

Probably the prime candidate for detection is what William Fairbank, professor of physics at Stanford University, calls “the most dramatic event in the history of the universe”—a supernova. As a star such as our sun ages, it converts parts of its mass into nuclear energy, perhaps one percent in five billion years. “The only reason a large star like the sun doesn’t collapse,” explains Fairbank, “is because the very high temperature in its core generates enough pressure to withstand gravitational forces. But as it cools from burning its fuel, the gravitational forces begin to overcome the electrical forces that keep its particles apart. It collapses faster and faster, and if it’s a supernova, the star’s outer shell blasts off. In the last thousandth of a second, it collapses to a neutron star, and if the original star exceeded three solar masses, maybe to a black hole.”

One way of characterizing the energy of a gravity wave is the strain it induces in any matter it impinges on. If the mass has a dimension of a given length, then the strain equals the change in that length (produced by the gravity wave) divided by the length. Gravity waves have very, very tiny strains. A supernova occurring in our galaxy might produce a strain on Earth that would shrink or elongate a 100-cm-long detector only one one-hundredth the diameter of an atomic nucleus. (That is 10⁻¹⁵ cm, and physicists would label the strain as 10⁻¹⁷.) To the credit of tireless experimenters, there are detectors capable of sensing that iota of a minim of a scruple.
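
In symbols, the strain (call it h) is just that fractional change in length; plugging in Fisher’s own numbers for a 100-cm bar, our restatement of the arithmetic reads:

```latex
h \;=\; \frac{\Delta L}{L}
\;=\; \frac{10^{-15}\ \text{cm}}{10^{2}\ \text{cm}}
\;=\; 10^{-17}.
```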

But there is a catch: Based on observations of other galaxies, a supernova can be expected to occur in the dense center of any given galaxy roughly about once in 30 years. That is a depressingly long interval. Over and over again, the scientists I spoke to despaired of doing meaningful work if it had to depend on such a rara avis. Professor David Douglass of the University of Rochester told me: “To build an experiment to detect an event once every 30 years—maybe—is not a very satisfying occupation. It’s hardly a very good Ph.D. project for a graduate assistant; it’s not even a good career project—you might be unlucky.”

Gravity waves: powerful astronomical tools?

What if we don’t confine ourselves to events in our own galaxy, but look farther afield? Instead of the “hopelessly rare” (in the words of one researcher) supernova in our galaxy, what if we looked for them in a really large arena—the Virgo cluster, which has some 2,500 galaxies, where supernovas ought to be popping from once every few days to once a month or so? That’s Catch-22. The Virgo cluster is about 1,000 times farther away than the center of our own galaxy. So a supernova event from the cluster would dispatch gravity waves whose effect on Earth would be some million times weaker (1,000 times 1,000, according to the inverse-square law governing all radiative energy). And that means building a detector a million times more sensitive. “There is no field of science,” says Ronald Drever of Caltech and the University of Glasgow, Scotland, “where such enormous increases in sensitivity are needed as they are here, in gravity-wave detection.” Trying to detect a supernova in a distant galaxy means having to measure a displacement one-millionth the size of an atomic nucleus.
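
The million-fold penalty is the inverse-square law in action; restating the parenthetical above as a ratio of received energies:

```latex
\frac{E_{\text{Virgo}}}{E_{\text{galactic}}}
\;=\; \left(\frac{r_{\text{galactic}}}{r_{\text{Virgo}}}\right)^{2}
\;=\; \left(\frac{1}{1{,}000}\right)^{2}
\;=\; 10^{-6}.
```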

Paradoxically, it is this very quality that gives gravity waves the ability to be, as Kip Thorne says, “a very powerful tool for astronomy. True, they go through a gravity-wave detector with impunity. But that means the gravity waves generated during the birth of a black hole can also get away through all the surrounding matter with impunity.” And neither light, nor gamma rays, nor radio waves can. During a supernova we can see the exploding shell via showers of electromagnetic radiation, but only hours or days after the initial massive implosion—the gravitational collapse. During the collapse, while a neutron star or black hole is being formed, nothing but gravity waves (and, theoretically, neutrinos) can escape.

“We’ve opened, at least partially, all the electromagnetic windows onto the universe,” says Thorne. “With gravity wave astronomy, we will open a unique new window onto fascinating, explosive events that cannot be well studied any other way—births and collisions of black holes, star quakes, collapses to neutron stars. This is the real bread and butter of modern high-energy astrophysics.”

But first, as the cookbooks say, you must catch your gravity wave. Until the 1950’s, no one presumed that the task was even feasible. Then Joseph Weber, a physicist at the University of Maryland, began to ponder the problem of building a gravity-wave detector, and proceeded to do so. It is no exaggeration to say that he fathered the entire field. By 1967, he and his assistants had built the first operating gravity-wave detector—a massive aluminum bar, isolated as well as possible from external vibrations and girdled by piezoelectric crystal sensors, which translated changes in the bar’s dimensions into electrical signals. Weber reported a number of events recorded on this and a twin detector at Argonne that he concluded were gravity waves [PS, May ‘72]. His report stimulated a host of other experimenters to build their own detectors. Designed by such investigators as J. A. Tyson at Bell Labs and David Douglass at Rochester, the detectors followed the same principles as Weber’s pioneering bar detector, but with greater sensitivity. These and subsequent experimenters were unable to confirm Weber’s findings; in fact, at the level Weber’s bar was capable of, theoreticians believe it was impossible to have detected gravity waves. “Either Joe Weber was wrong,” one told me, “or the whole universe is cockeyed.”

Today, three basic kinds of gravity-wave detectors are being developed. One is basically a Weber resonant-bar antenna, much refined; the second is the laser interferometer; and the third is a space-based system called Doppler tracking. Each has its advantages, and each its own devilish engineering problems.

Farthest along is the resonant bar, mostly because it has been in the works longest. The more massive such a bar is, the better (because it will respond to a gravity wave better). And its worth depends on the quality of resonating, or “ringing,” for a time after it has been struck by the wave. The longer it rings, the better an experimenter is able to pick out the effect of the wave. That quality is measured by the value called “Q”—the higher the Q, the better. For a while David Douglass and others, including Soviet scientists, have been seeking to make detectors out of such very-high-Q materials as sapphire-crystal balls. But Douglass, for one, has returned to aluminum. The reasons: New alloys of aluminum have been found with very high Q’s; sapphire can’t be fabricated in massive chunks (one of his detectors has a six-ton aluminum bar); and expense: “A 60-pound pure sapphire crystal,” he told me, “would cost about $50,000.”

Like virtually everyone else developing bar antennas, Douglass has abandoned room-temperature detectors and turned to cryogenic detectors, cooled down as close to absolute zero as possible. That includes groups at Perth, Australia, Tokyo, Moscow, Louisiana State University, Rome, Weber himself at the University of Maryland, and William Fairbank and colleagues at Stanford University.

Fairbank told me why the low-temperature route was essential: “At room temperature, the random thermal motion of the atoms in a bar is 300 times as big as the displacement we’re trying to detect. The only way to approach the sensitivities we’re after is to get rid of that thermal noise by cooling the bar.”

When I visited the Stanford campus, the detector’s five-ton aluminum bar was sealed inside its cryostat, a kind of oversized Thermos bottle. The whole assembly looked like something you could use if you wanted to freeze Frankenstein’s monster for a few centuries. And the environment was suitable, too: a vast, drafty, concrete building that could have been an abandoned zeppelin hangar.

This antenna, and others like it, is designed to respond to gravity waves with a frequency of about 1,000 Hz, characteristic of supernova radiation. Obviously the antenna must be isolated as far as possible from any external vibration at or around that frequency. This the Stanford group does by suspending the cylinder with special springs, consisting of alternating iron and rubber bars in what is called an isolation stack. “Otherwise, with our sensitivity,” Fairbank says, “this detector would make a dandy seismograph—just what we don’t want in California.” The Stanford suspension system attenuates outside noise by a factor of 10 °, enough so that you could drop a safe in its vicinity without disturbing the detector.

At LSU, William Hamilton, who is building an antenna very similar to Stanford’s (eventually it will become part of a Rome-Perth-Baton Rouge-Stanford axis looking for gravity-wave coincidences), takes another route toward seismic isolation. The very low temperature of the device allows him to levitate the bar magnetically; it is coated with a thin film of niobium-tin alloy, a material that becomes superconducting near absolute zero. If electromagnets are placed under the bar, the persistent currents running through its coating will interact with the magnetic field so that the bar literally floats in air.

Superconductivity is also the key to one of the most perplexing of all engineering problems: designing a transducer capable of sensing the tiny displacements of these antennas and converting them to a useful voltage that can be amplified and measured. “You can’t buy such things,” says David Douglass, “you have to make them, and go beyond the state of the art.” Both Douglass and Fairbank use superconducting devices whose elegant design makes them exquisitely sensitive—orders of magnitude more than the piezoelectric crystals originally used—although their approaches differ in details.

Superconducting devices may also one day—a day far in the future—allow gravity-wave astronomers to perform a feat of legerdemain called “quantum non-demolition.” To oversimplify, this means evading a fundamental limit for all resonant detectors, one that is imposed by the laws of quantum mechanics as the displacements become ever smaller. That problem will have to be faced if bar antennas are ever to be sensitive enough to detect gravity waves from supernovas in the Virgo cluster.

An alternative: laser interferometers

“One of the reasons we’re turning to laser detectors,” says Ronald Drever, “is to avoid the quantum-limit problem. Because we can make measurements over a much larger region of space, we effectively see a much larger signal. We don’t have to look for such minute changes as in a bar antenna.”

Laser interferometers bounce an argon-ion laser beam back and forth many times between two mirrors. (A generalized approach to the scheme appears in the drawing on page 92.) As a gravity wave ripples between the mirrors, the length of the light path changes, resulting in a change in the interference patterns that appear in photodetectors. Numbers of such detectors are in the planning and building stages, including ones at MIT, designed by Rainer Weiss, a pioneer in the field; at the Max Planck Institute of Astrophysics in Germany; at the University of Glasgow; and at Caltech.
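
The sensitivity of the scheme scales with the number of bounces, since each pass accumulates a little more path change. As a rough illustration (the bounce count, arm length, strain, and argon-ion wavelength below are assumed values of ours, not figures from the article):

```latex
\Delta\phi \;=\; \frac{2\pi \cdot 2N\,\Delta L}{\lambda}
\;=\; \frac{2\pi \cdot 2\,(100)\,(10^{-17}\times 10\ \text{m})}{5.14\times 10^{-7}\ \text{m}}
\;\approx\; 2\times 10^{-7}\ \text{radians},
```

a phase shift far below a single fringe, which is why these instruments multiply passes and fight every conceivable source of noise.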

“The one in Glasgow has 10-meter arms,” Drever told me, “and is working now. The one we’re working on at Caltech also has 10-meter arms, but will be stretched to 40 meters as soon as a building for it is ready. This will serve as a prototype for a much larger version—a kilometer to several kilometers long.”

Of course, laser interferometers have engineering problems, too, problems that become exacerbated as they grow larger. The laser beams must travel through vacuum pipes, and isolating pipes a kilometer long will not be simple. But Drever is convinced it can be done. “Maybe we’ll put it in a mine, or in the desert,” he says. This device may be ready by 1986, and has, Drever thinks, a chance of eventually detecting supernovas in the Virgo cluster.

One additional advantage of such laser detectors is that they are not restricted to a narrow frequency range, as are the resonant antennas, but would be sensitive to a broad band of frequencies from a few hertz to a few thousand hertz. They could therefore detect some massive black-hole events, which have lower frequencies than gravity waves from supernovas. To detect gravity waves with much lower frequencies, such as those from binary systems, you need very long baselines. “In about 15 years,” says Rainer Weiss, “we will want big, space-based laser systems, using, say, a 10-kilometer frame in space. That way we could avoid all seismic noise.”

The third kind of gravity-wave detector already exists in space, after a fashion. It has been used for spacecraft navigation for 20 years. It is called Doppler tracking, and is very simple—in theory. Here’s how it’s described by Richard Davies, program leader for space physics and astrophysics at Jet Propulsion Laboratory in Pasadena, Calif.: “You send a radio signal from Earth to a spacecraft, and a transponder aboard the craft sends the signal back to you. If a gravity wave passes through the solar system, it alters the distance between the two, and when you compare the frequency of the signal you sent out to the one you get back, you see that they are different—the Doppler shift. However, the contribution of the gravity wave to this shift is minute compared to that of the spacecraft’s own velocity.

“We want to detect gravity waves with very low frequencies, maybe a thousandth of a hertz, using interplanetary spacecraft and the Deep Space Net that is used to track them. Such waves could be emitted from a collapsing system with a mass of a million to ten million suns, or from double stars that orbit each other in hours.”
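
The comparison Davies describes can be sketched in a few lines. This is an illustrative toy (the velocity and strain numbers are assumptions of ours, chosen only to show the scale problem, not mission values):

```python
# Two-way Doppler tracking, schematically: compare the frequency sent
# with the frequency returned by the spacecraft's transponder. The
# fractional shift from the craft's own motion dwarfs what a passing
# gravity wave would superimpose on it.

C_M_S = 299_792_458.0  # speed of light

def two_way_doppler_fraction(radial_velocity_m_s: float) -> float:
    """Fractional frequency shift (df/f) from spacecraft motion alone."""
    return 2.0 * radial_velocity_m_s / C_M_S

motion_shift = two_way_doppler_fraction(10_000.0)  # assumed 10 km/s radial velocity
wave_strain = 1e-15                                # assumed gravity-wave strain

print(f"shift from motion:   {motion_shift:.1e}")  # ~6.7e-05
print(f"shift from the wave: {wave_strain:.1e}")   # 1e-15, some 11 orders smaller
```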

A gravity-wave experiment had been planned for the International Solar Polar Mission. But, according to MIT’s Irwin Shapiro, who chaired the Committee on Gravitational Physics of the National Academy of Sciences’ Space Science Board, the experiment was dropped by NASA because of budget cuts.

Which of these methods will yield the first direct evidence of gravity waves? And when will that first contact come? No one really knows, and the gravity-wave seekers themselves are extremely diffident about making claims and predictions. But some time within the decade seems at least plausible.

In the meantime, gravity-wave research is paying unexpected dividends. “It has opened up,” says Kip Thorne, “a modest new chapter in quantum electronics. Because it is pushing so hard against the bounds of modern technology, it is inventing new techniques that will have fallout elsewhere; for example, a new way to make laser frequencies more stable than ever. This will be useful in both physics and chemistry research.”

In the long run, however, the search for gravity waves is propelled by the basic drive of all scientists, and all mankind: to see a little farther, to understand a little more than we have ever done before.

Two indirect proofs for the existence of gravity waves 

The first evidence of any kind for the existence of gravity waves comes not from sensing them directly but from observing their effect on the behavior of a bizarre astronomical object called a binary pulsar. A pulsar, believed to be a rapidly spinning neutron star, emits strong radio signals in periodic beeps. But pulsar PSR 1913+16, discovered by a team of University of Massachusetts astronomers in 1974 with the world’s largest radio telescope (at Arecibo, P.R.), is unique. Its beeps decelerate and accelerate in a regular sequence lasting about eight hours. From this, the astronomers, led by Joseph Taylor, deduced that the pulsar was rapidly orbiting around another very massive object—perhaps another neutron star.

Einstein’s theory of general relativity predicts that this binary system should produce a considerable quantity of gravity waves, and that the energy radiated should be slowly extracted from the orbit of the system, gradually decreasing its period as the superdense stars spiral closer to one another. Einstein’s equations predict a decrease of one ten-thousandth of a second per year for a pulsar like PSR 1913+16. And after four years of observations Taylor’s team announced, in late 1978, that ultraprecise measurements of the radio signals gave a value almost exactly that amount. The closeness of the match not only provides good—even though indirect—evidence of the existence of gravity waves, but also further bolsters Einstein’s theory of gravity against some competing theories.
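
To appreciate how delicate that measurement was, it helps to convert the predicted decay into a dimensionless rate (a back-of-envelope step of ours, not Fisher’s):

```latex
\frac{\Delta P}{\Delta t}
\;\approx\; \frac{10^{-4}\ \text{s}}{1\ \text{yr}}
\;=\; \frac{10^{-4}\ \text{s}}{3.15\times 10^{7}\ \text{s}}
\;\approx\; 3\times 10^{-12}\ \text{seconds of orbital period lost per second.}
```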

As Taylor said of what he called “an accidental discovery originally,” the astronomers had an ideal situation for testing the relativity theory—a moving clock (the pulsar) with a very precise rate of ticking and a high velocity—some 300 kilometers per second. “It’s almost as if we had designed the system ourselves and put it out there just to do this measurement.”

Another indirect indication that gravity waves do indeed exist came more recently, and more dramatically. It stemmed from an event that still has astronomers reeling. At exactly 15 hours, 52 minutes, five seconds, Greenwich time on March 5, 1979, a gamma-ray burst of unparalleled intensity flashed through our solar system from somewhere in space. It triggered monstrous blips on detectors aboard a motley collection of nine different spacecraft throughout the solar system, which form, in effect, an international network maintained by the U.S., France, West Germany, and the Soviet Union.

Once-in-a-lifetime event

“This March 5 gamma-ray event was extraordinary,” says Thomas Cline of NASA Goddard Space Flight Center, who, with his colleague Reuven Ramaty and other U.S., French, and Russian astrophysicists, has been analyzing it ever since. “It was not like the gamma-ray bursts that have been seen a hundred times in the last decade. It’s a first and only, like something that’s seen once in a scientific lifetime.”

Because the surge of gamma rays was detected by so many satellites separated in space, astronomers were able to triangulate the position of its source and identify it with a visible object—the first time for such a feat. The object was a supernova remnant dubbed N49 in the Large Magellanic Cloud (LMC), a neighboring galaxy roughly 150,000 light-years away.

Ramaty, Cline, and colleagues posit that the genesis of the gamma-ray burst was a quivering neutron star—the ultradense, ultracompact object that many theorists believe is left over from a supernova explosion. “We believe,” Cline told me, “that a neutron star can undergo a transformation analogous to an avalanche. Snow falls on a mountain until there’s a slide. 

“Similarly, dust and other material collect on a neutron star until it can’t stand being as heavy as it is. Then there’s a star quake, either in the crust or in the core, and the star shakes itself at a frequency of about 3,000 Hz, a note you could hear if you were listening to it in an atmosphere. The surface of the star—only five to 10 miles in diameter—is heaving up and down several feet, thousands of times a second. Its magnetosphere is shaken, and that’s what produces, indirectly, the gamma rays. But that’s secondary, in our model, to the gravitational waves caused by the oscillation of the neutron star.

“Could we detect these? The answer is no. After all, this is only a kind of after-gurgle, thousands of years after the star’s original collapse—the supernova. It’s like a tremor after a major earthquake, maybe only one percent as big.”

Nevertheless, Cline called all the U.S. gravity-wave experimenters who could have been “on-line” during the gamma ray burst to learn whether they had seen anything. Of them all, only Joseph Weber had an antenna working that March day, and he had observed nothing.

The gamma-ray detectors aboard the satellites were not capable of sensing the 3,000-Hz frequency predicted by the starquake model. If they had, says Cline, it would have been “a very direct link” to the existence of gravitational radiation.

But the star-quake model makes another prediction: The gravity waves generated should carry off an enormous amount of energy, far more than that in the gamma rays, and thus snuff out the star’s vibration very quickly. “The nice thing,” says Goddard’s Reuven Ramaty, “is that the damping time predicted for gravity waves in this event exactly corresponds to what we observed: The main part of the burst lasted just 15 hundredths of a second, and that’s what we calculate from our model. So we now have for the second time indirect evidence of the existence of gravity waves. But both have problems, as do all indirect checks. They won’t replace direct evidence.”

The April 1981 cover of Popular Science featuring developments in solar power and automotive technology.

Some text has been edited to match contemporary standards and style.

The post From the archives: Inside the tantalizing quest to sense gravity waves appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: Inside the U.S. Army’s plan to build a luxurious city under the Arctic https://www.popsci.com/environment/us-army-arctic-city/ Tue, 17 May 2022 11:00:00 +0000 https://www.popsci.com/?p=441922
Art from the 1960 article of Popular Science.
“U.S. Army builds a fantastic city under ice” (Herbert O. Johansen, February 1960). Popular Science

In 1960, Popular Science dove into the military's big subterranean plans for Camp Century, before they were abandoned in 1967.

The post From the archives: Inside the U.S. Army’s plan to build a luxurious city under the Arctic appeared first on Popular Science.

]]>
Art from the 1960 article of Popular Science.
“U.S. Army builds a fantastic city under ice” (Herbert O. Johansen, February 1960). Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

The 45-year Cold War probably never got colder than when the US Army decided to build Camp Century, a facility more than 30 feet below Greenland’s ice sheet. Herbert O. Johansen’s February 1960 Popular Science feature, “US Army Builds a Fantastic City Under Ice,” describes the Army’s unusual project, which turned out to have hidden Cold War ambitions.

The US Army Corps of Engineers began work on the camp in 1959, claiming it as a site to conduct polar research. While it’s true that the team drilled the first—and one of the most geologically revealing—ice cores ever used to study climate change, the camp’s location was suspiciously close to Russia.

Johansen acknowledges that Camp Century was “about as safe a place as you could find in case of an atomic attack,” and that it “would be hard for an enemy to find.” But the story only mentions, in passing, that the site might be handy for the military. He otherwise steers clear of the usual Cold War themes (Popular Science ran many such stories throughout the 20th century) to focus on the camp’s luxurious subglacial living conditions, with “spacious dormitories,” “hot and cold showers,” and “a laundry and a barber shop.”

By 1967, the military abandoned Camp Century. In 1997, the Danish Institute of International Affairs released reports about Camp Century’s undisclosed military goals. Codenamed “Iceworm,” the US Army’s original plans included expanding the facility to 52,000 square miles of ice tunnels capable of deploying nuclear missiles. Even if the Danish government had approved the plan (it did not), the Greenland glacier refused to cooperate. As Johansen reported, “one thing the engineers haven’t been able to lick is the slow, plastic movement of the ice, which causes the walls to close in and the corridors to twist.” Now, climate change threatens to expose the site’s nuclear and other hazardous waste, which was supposed to remain ice-entombed forever. Perhaps the early thaw will uncover other unsavory truths?

“U.S. Army builds a fantastic city under ice” (Herbert O. Johansen, February 1960)

The strangest boom town in the world is being built by Army engineers under Greenland’s vast icecap. Completely hidden by snow, it will be powered by atomic energy—and will be about as safe a place as you could find in case of atomic attack. It would be hard for an enemy to find; and snow would absorb much of the shock of an atomic blast, and partially shield the occupants from radiation and fallout.

In building the fantastic community, 800 miles from the North Pole, Army Engineers, in cooperation with the Danish Government (Greenland is a part of the Kingdom of Denmark), have proved that the traditionally antagonistic Arctic can be tamed.

An electric railroad running through a tunnel cut in the snow will connect the town with the supply base at Thule Air Base, 152 miles to the west. For air supply there will be landing strips of compacted snow for the largest cargo planes, and landing pads for helicopters.

It will be home—snug, comfortable, and warm—for 100 scientists, engineers, and soldiers who are expected to move in late this year. After a hard day’s work they will be able to relax with tall drinks cooled by deep-dug ice that was formed long before Columbus discovered America.

It is Camp Century—a year-round haven for the men who will study problems of living, working, or fighting in one of the world’s harshest environments, where winter temperatures drop to 70 below zero and winds whirl snow in a blinding fury at 100 miles an hour. With previous shelters, the open season was limited to five months—May through September.

Now the men will live and work in a series of insulated prefabricated buildings connected by snow corridors. Cumbersome Arctic clothing—parkas, bulky mittens, mukluks, scratchy woollies—won’t be needed even in the work areas, where the temperature will be kept at 40 degrees. It will be upped to 60 degrees in living quarters. A ventilation system will exhaust warm air from the tunnels to keep the snow walls at 20 degrees or below so they won’t melt. To prevent the snow floor from becoming slush, the buildings will be raised slightly to permit a circulation of cold air underneath.

Life in Camp Century

The inhabitants will never see daylight unless they venture outside. But during the worst months of the year—December and January—that won’t matter: The sun never rises.

At first glance the unnatural life within the confines of a buried snow town four blocks long and three blocks wide looks forbidding, depressing, killing in its monotony. It would be if a man didn’t keep busy and have cheerful surroundings, and facilities for recreation.

Here the residents of Camp Century will score on all points. They will be busy at their jobs—from cooking to researching—eight to 10 hours a day. After that it’s up to a man’s individual taste. There will be a recreation hall and game rooms, a gymnasium, a hobby shop. A library will share quarters with a Base Exchange.

Movies will be shown every night. Television programs will come from the military TV station at Thule. Radios will tune in perfectly on Radio Moscow across the Arctic. Eating will be in a clublike dining room. Food will be the best (with an extra half ration to compensate for faster-burned-up calories) and steak will be in plentiful supply.

Spacious dormitories will replace double-down, zip-up sleeping bags; hot and cold showers the occasional lick-and-promise basin bath. A laundry and a barber shop will keep the outer man clean and neat; a 10-bed hospital with an operating room, the inner man in repair. A chaplain and chapel will serve spiritual needs.

Only one comfort of man—beyond the scope of the Army—will be lacking. It is hoped that fast air mail, ham radiotelephone chats with wife and family in the United States, and rotating four-month tours of duty will make icecap isolation seem less far away from home.

Some problems solved 

Camp Century—so named because its original site was 100 miles out on the icecap—is being built by the Army Corps of Engineers after five years of preliminary construction experiments on the Greenland icecap. Its research programs will be directed by the Army’s Chief of Research and Development. The working areas will be dominated by scientific laboratories, but should the need come for a similar military installation—perhaps an under-ice launching site for intercontinental ballistic missiles or interceptors—we have the blueprints, the techniques, the machines. At Camp Century, the Engineers have proved that they can:

  • Make efficient use of construction materials at hand—snow and ice. 
  • Dig cut-and-cover trenching with an adaptation of Peter Snow Millers—huge rotary snow plows that for years have been keeping passes in the Swiss Alps open during winters.
  • Adapt mechanical coal-mining machines to carve caverns into ice-hard snow to make expanded quarters, as well as space for storage and food refrigeration. 
  • Solve the water-supply problem, always serious in the Arctic, by drilling wells 150 feet deep, shooting down jets of steam, and pumping out water.
  • Utilize an air-transportable nuclear power plant, with a capacity of 2,000 kw., to furnish light, heat, and power. A core of U-235 weighing less than 50 pounds will produce a year’s supply of electricity—doing the work of 35,000 barrels of fuel oil.
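
That last oil-equivalence figure holds up to a rough check (the per-barrel energy content and generator efficiency below are standard modern estimates of ours, not numbers from the article):

```latex
2{,}000\ \text{kW}\times 8{,}760\ \tfrac{\text{h}}{\text{yr}}
\;\approx\; 1.75\times 10^{7}\ \text{kWh of electricity per year,}
\qquad
35{,}000\ \text{barrels}\times 1{,}700\ \tfrac{\text{kWh}}{\text{barrel}}\times 0.30
\;\approx\; 1.8\times 10^{7}\ \text{kWh.}
```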

One thing the engineers haven’t been able to lick is the slow, plastic movement of the ice, which causes the walls to close in and the corridors to twist. With periodic shaving of the ice, they expect the camp to last about 10 years. Then it will be completely reconditioned.

The machines used for this maintenance also will carve out new and bigger storage vaults. Equipment and tools stored in them don’t rust, and food keeps indefinitely. With the techniques developed to build Camp Century, the engineers say they could hack out great ice caves to store surplus crops for use by future generations in times of famine. 

Why a Camp Century? 

The Greenland icecap, with a big, permanent supply base at Thule, is the ideal laboratory for studying snow and ice. Aside from weather implications, these are two elements we will be living with more and more. The entire Distant Early Warning radar network (DEW line) lies above the Arctic Circle—from the Kuriles at the western tip of Alaska, across Canada and Greenland. Lessons learned in the North Polar areas can be applied in the Antarctic, now increasingly important.

Greenland, with 708,000 square miles of icecap, two miles deep at its crest, is the birthplace of weather for much of the Northern Hemisphere. By drilling and bringing up core samples of ice formed through the ages, scientists can study the history of snowfalls and get information on the movement of air masses covering thousands of years.

“This information,” says Dr. Henri Bader, chief scientist of the Army’s Snow, Ice, and Permafrost Establishment, “will enable us to make fairly accurate predictions on future weather cycles.”

Air samples from the past, preserved as bubbles, are trapped in these ice samples. One Camp Century project is to try to determine how much air pollution has increased since the Industrial Revolution of the 18th Century introduced man-made smog. Ice cores from a depth of 160 feet contained volcanic dust from the great Krakatoa eruption of 1883 in the Dutch East Indies. And in the three-foot layers of snow that are deposited on the icecap each year, there is a permanent annual record of atomic fallout since Hiroshima.

This year-by-year accumulation of snow-become-ice is easily identified and accurately dated, somewhat as rings tell the age of a tree.

Samples of ice formed from snow that fell when Eric the Red set foot in Greenland in the year 982 already have been studied by scientists. With new thermal drilling equipment that will be used at Camp Century, they hope to go down 10,000 feet. The ice they then bring up will date back beyond the dawn of history—to the days when the footprints of Stone Age man were fresh upon the earth.

The cover of the February 1960 issue of Popular Science featuring the “city under ice.”

Some text has been edited to match contemporary standards and style.

The post From the archives: Inside the U.S. Army’s plan to build a luxurious city under the Arctic appeared first on Popular Science.

From the archives: The discovery of electrons breaks open the subatomic era https://www.popsci.com/science/discovery-electron/ Mon, 16 May 2022 13:00:00 +0000 https://www.popsci.com/?p=441912
An image of the 1901 issue of Popular Science Monthly
“On bodies smaller than atoms” (J. J. Thomson, 1901). Popular Science

In the August 1901 issue of Popular Science, physicist J. J. Thomson excitedly detailed his methods for discovering the electron.

The post From the archives: The discovery of electrons breaks open the subatomic era appeared first on Popular Science.

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

As far as human knowledge is concerned, the electron turned 125 on April 30, 2022. Of course, the subatomic particles have been around since shortly after the Big Bang, but here on Earth nobody knew about them until British physicist J. J. Thomson announced his discovery on April 30, 1897, at the Royal Institution in London.

In August 1901, Thomson wrote “On Bodies Smaller Than Atoms” for Popular Science, detailing his discovery and methods. By today’s standards, the piece reads like a hybrid journal article and memoir, capturing his pride and the thrill of discovery. Thomson was awarded the Nobel Prize in Physics in 1906 for his investigations of the conduction of electricity through gases, work that established the electron as a fundamental constituent of all atoms.

At the time of Thomson’s finding, no one had ever detected anything smaller than a hydrogen atom (one proton and one electron, no neutron). However, electricity’s ability to flow through materials—coupled, as Thomson cites, with Marie Curie’s radiation experiments and associated electric fields—suggested the possibility.

Thomson did more than discover electrons; his method, which involved accelerating particles between electrodes, kicked off a new way to study the subatomic world, using accelerators and colliders to smash apart the smallest of the small. By 1911, Ernest Rutherford presented his atomic model, which confirmed Thomson’s electron discovery but disproved his broader “plum pudding” hypothesis that an atom’s electrons were embedded in a uniformly distributed sphere of positive charge. Today, a potpourri of elementary particles, such as quarks and neutrinos, makes up the Standard Model of the universe, developed in the 1970s. The most elusive, perhaps, is the Higgs boson—believed to give the other elementary particles their mass—first spied in 2012 by physicists at CERN’s Large Hadron Collider. But even the Standard Model has its gaps, like dark matter and the universe’s missing antimatter, which, a century on, continue to fuel the quest for bodies smaller than atoms.

“On bodies smaller than atoms” (J. J. Thomson, August 1901)

The masses of the atoms of the various gasses were first investigated about thirty years ago by methods due to Loschmidt, Johnstone Stoney and Lord Kelvin. These physicists, using the principles of the kinetic theory of gasses and making certain assumptions, which it must be admitted are not entirely satisfactory, as to the shape of the atom, determined the mass of an atom of a gas: and when once the mass of an atom of one substance is known the masses of the atoms of all other substances are easily deduced by well-known chemical considerations. 

The results of these investigations might be thought not to leave much room for the existence of anything smaller than ordinary atoms, for they showed that in a cubic centimeter of gas at atmospheric pressure and at 0° C. there are about 20 million, million, million (2 × 10¹⁹) molecules of gas.
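
Thomson’s figure holds up well. As a quick modern cross-check (not part of the 1901 text), the ideal gas law gives the number density of any gas at 0° C and one atmosphere:

```python
# Modern cross-check (not in the 1901 article): ideal-gas number density
# at 0 degrees C and one atmosphere.

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 273.15                # 0 deg C in kelvin
P = 101_325.0             # 1 atm in pascals

n_per_cm3 = P / (k_B * T) / 1e6    # molecules per cubic centimeter
print(f"{n_per_cm3:.2e}")          # ~2.69e19, close to Thomson's ~2e19
```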

Though some of the arguments used to get this result are open to question, the result itself has been confirmed by considerations of quite a different kind. Thus Lord Rayleigh has shown that this number of molecules per cubic centimeter gives about the right value for the optical opacity of the air, while a method, which I will now describe, by which we can directly measure the number of molecules in a gas leads to a result almost identical with that of Loschmidt.

This method is founded on Faraday’s laws of electrolysis; we deduce from these laws that the current through an electrolyte is carried by the atoms of the electrolyte, and that all these atoms carry the same charge, so that the weight of the atoms required to carry a given quantity of electricity is proportional to the quantity carried. We know too, by the results of experiments on electrolysis, that to carry the unit charge of electricity requires a collection of atoms of hydrogen which together weigh about 1/10 of a milligram; hence if we can measure the charge of electricity on an atom of hydrogen we see that 1/10 of this charge will be the weight in milligrams of the atom of hydrogen. This result is for the case when electricity passes through a liquid electrolyte. I will now explain how we can measure the mass of the carriers of electricity required to convey a given charge of electricity through a rarefied gas. In this case the direct methods which are applicable to liquid electrolytes cannot be used, but there are other, if more indirect, methods, by which we can solve the problem.

The first case of conduction of electricity through gasses we shall consider is that of the so-called cathode rays, those streamers from the negative electrode in a vacuum tube which produce the well-known green phosphorescence on the glass of the tube. These rays are now known to consist of negatively electrified particles moving with great rapidity. Let us see how we can determine the electric charge carried by a given mass of these particles. We can do this by measuring the effect of electric and magnetic forces on the particles. If these are charged with electricity they ought to be deflected when they are acted on by an electric force. It was some time, however, before such a deflection was observed, and many attempts to obtain this deflection were unsuccessful. The want of success was due to the fact that the rapidly moving electrified particles which constitute the cathode rays make the gas through which they pass a conductor of electricity; the particles are thus as it were moving inside conducting tubes which screen them off from an external electric field; by reducing the pressure of the gas inside the tube to such an extent that there was very little gas left to conduct, I was able to get rid of this screening effect and obtain the deflection of the rays by an electrostatic field. The cathode rays are also deflected by a magnet; the force exerted on them by the magnetic field is at right angles to the magnetic force, at right angles also to the velocity of the particle, and equal to Hev sin 𝜽 where H is the magnetic force, e the charge on the particle and 𝜽 the angle between H and v. Sir George Stokes showed long ago that, if the magnetic force was at right angles to the velocity of the particle, the latter would describe a circle whose radius is mv/eH (if m is the mass of the particle); we can measure the radius of this circle and thus find mv/e.
To find v let an electric force F and a magnetic force H act simultaneously on the particle, the electric and magnetic forces being both at right angles to the path of the particle and also at right angles to each other. Let us adjust these forces so that the effect of the electric force, which is equal to Fe, just balances that of the magnetic force, which is equal to Hev; when this is the case Fe = Hev, or v = F/H. We can thus find v, and knowing from the previous experiment the value of mv/e, we deduce the value of m/e. The value of m/e found in this way was about 10⁻⁷, and other methods used by Wiechert, Kaufmann and Lenard have given results not greatly different. Since m/e = 10⁻⁷, we see that to carry unit charge of electricity by the particles forming the cathode rays only requires a mass of these particles amounting to one ten-thousandth of a milligram, while to carry the same charge by hydrogen atoms would require a mass of one-tenth of a milligram.*
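
The two-step recipe above compresses into a pair of formulas: the balanced deflections give v = F/H, and the radius of the magnetic bend then gives m/e = Hr/v. A minimal sketch, with illustrative inputs chosen only to land near Thomson’s quoted order of magnitude (none of them are measurements from the article):

```python
# A minimal sketch of Thomson's two-step m/e recipe. The input values are
# illustrative stand-ins chosen to land near his result; they are not data
# from the article, and the units follow the text's Gaussian-style usage.

def thomson_m_over_e(F, H, r):
    """Balanced deflections give v = F/H; bend radius gives m/e = H*r/v."""
    v = F / H                 # from the balance condition Fe = Hev
    return H * r / v, v

m_over_e, v = thomson_m_over_e(F=3e12, H=1e3, r=0.3)
print(f"v ~ {v:.0e} cm/s, m/e ~ {m_over_e:.0e}")   # v ~ 3e9 cm/s, m/e ~ 1e-7
```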

Thus to carry a given charge of electricity by hydrogen atoms requires a mass a thousand times greater than to carry it by the negatively electrified particles which constitute the cathode rays, and it is very significant that, while the mass of atoms required to carry a given charge through a liquid electrolyte depends upon the kind of atom, being, for example, eight times greater for oxygen than for hydrogen atoms, the mass of cathode ray particles required to carry a given charge is quite independent of the gas through which the rays travel and of the nature of the electrode from which they start.

The exceedingly small mass of these particles for a given charge compared with that of the hydrogen atoms might be due either to the mass of each of these particles being very small compared with that of a hydrogen atom or else to the charge carried by each particle being large compared with that carried by the atom of hydrogen. It is therefore essential that we should determine the electric charge carried by one of these particles. The problem is as follows: suppose in an enclosed space we have a number of electrified particles each carrying the same charge, it is required to find the charge on each particle. It is easy by electrical methods to determine the total quantity of electricity on the collection of particles and knowing this we can find the charge on each particle if we can count the number of particles. To count these particles the first step is to make them visible. We can do this by availing ourselves of a discovery made by C. T. R. Wilson working in the Cavendish Laboratory. Wilson has shown that when positively and negatively electrified particles are present in moist dust-free air a cloud is produced when the air is cooled by a sudden expansion, though this amount of expansion would be quite insufficient to produce condensation when no electrified particles are present: the water condenses round the electrified particles, and, if these are not too numerous, each particle becomes the nucleus of a little drop of water. Now Sir George Stokes has shown how we can calculate the rate at which a drop of water falls through air if we know the size of the drop, and conversely we can determine the size of the drop by measuring the rate at which it falls through the air; hence by measuring the speed with which the cloud falls we can determine the volume of each little drop; the whole volume of water deposited by cooling the air can easily be calculated, and dividing the whole volume of water by the volume of one of the drops we get the number of drops, and hence the number of the electrified particles. We saw, however, that if we knew the number of particles we could get the electric charge on each particle; proceeding in this way I found that the charge carried by each particle was about 6.5 × 10⁻¹⁰ electrostatic units of electricity or 2.17 × 10⁻²⁰ electro-magnetic units.

According to the kinetic theory of gasses, there are 2 × 10¹⁹ molecules in a cubic centimeter of gas at atmospheric pressure and at the temperature 0° C.; as a cubic centimeter of hydrogen weighs about 1/11 of a milligram, each molecule of hydrogen weighs about 1/(22 × 10¹⁹) milligrams and each atom therefore about 1/(44 × 10¹⁹) milligrams, and as we have seen that in the electrolysis of solutions one-tenth of a milligram carries unit charge, the atom of hydrogen will carry a charge equal to 10/(44 × 10¹⁹) = 2.27 × 10⁻²⁰ electro-magnetic units. The charge on the particles in a gas we have seen is equal to 2.17 × 10⁻²⁰ units; these numbers are so nearly equal that, considering the difficulties of the experiments, we may feel sure that the charge on one of these gaseous particles is the same as that on an atom of hydrogen in electrolysis. This result has been verified in a different way by Professor Townsend, who used a method by which he found, not the absolute value of the electric charge on a particle, but the ratio of this charge to the charge on an atom of hydrogen and he found that the two charges were equal.
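
The drop-counting arithmetic can be laid out in a few lines. In the sketch below the three inputs are hypothetical stand-ins; only the chain of reasoning, from Stokes’ law to a charge per drop, follows Thomson’s procedure:

```python
# Sketch of the drop-counting arithmetic in CGS units. The three inputs are
# hypothetical stand-ins, not Thomson's data; only the Stokes'-law chain of
# reasoning follows the text.
import math

def drop_radius(v_fall, eta=1.8e-4, rho=1.0, g=981.0):
    # Stokes' law for a falling drop: v = 2 r^2 g rho / (9 eta); solve for r.
    return math.sqrt(9 * eta * v_fall / (2 * g * rho))

v_fall = 0.5              # settling speed of the cloud, cm/s (assumed)
total_water = 1e-5        # volume of condensed water, cm^3 (assumed)
total_charge = 5.9e-6     # charge on the whole cloud, esu (assumed)

r = drop_radius(v_fall)
n_drops = total_water / ((4 / 3) * math.pi * r**3)
print(f"charge per drop ~ {total_charge / n_drops:.1e} esu")
# ~6.6e-10 esu, near Thomson's quoted 6.5e-10
```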

As the charges on the particle and the hydrogen atom are the same, the fact that the mass of these particles required to carry a given charge of electricity is only one-thousandth part of the mass of the hydrogen atoms shows that the mass of each of these particles is only about 1/1000 of that of a hydrogen atom. These particles occurred in the cathode rays inside a discharge tube, so that we have obtained from the matter inside such a tube particles having a much smaller mass than that of the atom of hydrogen, the smallest mass hitherto recognized. These negatively electrified particles, which I have called corpuscles, have the same electric charge and the same mass whatever be the nature of the gas inside the tube or whatever the nature of the electrodes; the charge and mass are invariable. They therefore form an invariable constituent of the atoms or molecules of all gasses and presumably of all liquids and solids.

Nor are the corpuscles confined to the somewhat inaccessible regions in which cathodic rays are found. I have found that they are given off by incandescent metals, by metals when illuminated by ultraviolet light, while the researches of Becquerel and Professor and Madame Curie have shown that they are given off by that wonderful substance the radio-active radium. In fact in every case in which the transport of negative electricity through gas at a low pressure (i.e., when the corpuscles have nothing to stick to) has been examined, it has been found that the carriers of the negative electricity are these corpuscles of invariable mass.

A very different state of things holds for the positive electricity. The masses of the carriers of positive electricity have been determined for the positive electrification in vacuum tubes by Wien and by Ewers, while I have measured the same thing for the positive electrification produced in a gas by an incandescent wire. The results of these experiments show a remarkable difference between the property of positive and negative electrification, for the positive electricity, instead of being associated with a constant mass 1/1000 of that of the hydrogen atom, is found to be always connected with a mass which is of the same order as that of an ordinary molecule, and which, moreover, varies with the nature of the gas in which the electrification is found.

These two results, the invariability and smallness of the mass of the carriers of negative electricity, and the variability and comparatively large mass of the carriers of positive electricity, seem to me to point unmistakably to a very definite conception as to the nature of electricity. Do they not obviously suggest that negative electricity consists of these corpuscles or, to put it the other way, that these corpuscles are negative electricity: and that positive electrification consists in the absence of these corpuscles from ordinary atoms? Thus this point of view approximates very closely to the old one-fluid theory of Franklin; on that theory electricity was regarded as a fluid, and changes in the state of electrification were regarded as due to the transport of this fluid from one place to another. If we regard Franklin’s electric fluid as a collection of negatively electrified corpuscles, the old one-fluid theory will, in many respects, express the results of the new. We have seen that we know a good deal about the ‘electric fluid’; we know that it is molecular or rather corpuscular in character; we know the mass of each of these corpuscles and the charge of electricity carried by it; we have seen too that the velocity with which the corpuscles move can be determined without difficulty. In fact the electric fluid is much more amenable to experiment than an ordinary gas, and the details of its structure are more easily determined.

Negative electricity (i.e., the electric fluid) has mass; a body negatively electrified has a greater mass than the same body in the neutral state; positive electrification, on the other hand, since it involves the absence of corpuscles, is accompanied by a diminution in mass.

An interesting question arises as to the nature of the mass of these corpuscles which we may illustrate in the following way. When a charged corpuscle is moving, it produces in the region around it a magnetic field whose strength is proportional to the velocity of the corpuscle; now in a magnetic field there is an amount of energy proportional to the square of the strength, and thus, in this case, proportional to the square of the velocity of the corpuscle.

Thus if e is the electric charge on the corpuscle and v its velocity, there will be in the region round the corpuscle an amount of energy equal to ½βe²v², where β is a constant which depends upon the shape and size of the corpuscle. Again if m is the mass of the corpuscle its kinetic energy is ½mv², and thus the total energy due to the moving electrified corpuscle is ½(m + βe²)v², so that for the same velocity it has the same kinetic energy as a non-electrified body whose mass is greater than that of the electrified body by βe². Thus a charged body possesses in virtue of its charge, as I showed twenty years ago, an apparent mass apart from that arising from the ordinary matter in the body. Thus in the case of these corpuscles, part of their mass is undoubtedly due to their electrification, and the question arises whether or not the whole of their mass can be accounted for in this way.

I have recently made some experiments which were intended to test this point; the principle underlying these experiments was as follows: if the mass of the corpuscle is the ordinary “mechanical” mass, then, if a rapidly moving corpuscle is brought to rest by colliding with a solid obstacle, its kinetic energy being resident in the corpuscle will be spent in heating up the molecules of the obstacle in the neighborhood of the place of collision, and we should expect the mechanical equivalent of the heat produced in the obstacle to be equal to the kinetic energy of the corpuscle. If, on the other hand, the mass of the corpuscle is “electrical,” then the kinetic energy is not in the corpuscle itself, but in the medium around it, and, when the corpuscle is stopped, the energy travels outwards into space as a pulse confined to a thin shell traveling with the velocity of light. I suggested some time ago that this pulse forms the Rontgen rays which are produced when the corpuscles strike against an obstacle. On this view, the first effect of the collision is to produce Rontgen rays and thus, unless the obstacle against which the corpuscle strikes absorbs all these rays, the energy of the heat developed in the obstacle will be less than the energy of the corpuscle. Thus, on the view that the mass of the corpuscle is wholly or mainly electrical in its origin, we should expect the heating effect to be smaller when the corpuscles strike against a target permeable by the Rontgen rays given out by the tube in which the corpuscles are produced than when they strike against a target opaque to these rays. I have tested the heating effects produced in permeable and opaque targets, but have never been able to get evidence of any considerable difference between the two cases. The differences actually observed were small compared with the total effect and were sometimes in one direction and sometimes in the opposite. The experiments, therefore, tell against the view that the whole of the mass of a corpuscle is due to its electrical charge. The idea that mass in general is electrical in its origin is a fascinating one, although it has not at present been reconciled with the results of experience.
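
In modern notation, the bookkeeping at the start of this passage is a single completed expression, written here with the article’s own symbols (m, e, v, and the geometry constant β):

```latex
% Energy bookkeeping from the passage above, with m, e, v, and the
% geometry constant \beta as defined in the text:
E_{\text{total}} = \tfrac{1}{2} m v^{2} + \tfrac{1}{2}\beta e^{2} v^{2}
                 = \tfrac{1}{2}\,(m + \beta e^{2})\,v^{2},
\qquad
m_{\text{eff}} = m + \beta e^{2}.
```

The heating experiments Thomson goes on to describe amount to a test of whether the ordinary m inside this effective mass could be zero.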

The smallness of these particles marks them out as likely to afford a very valuable means for investigating the details of molecular structure, a structure so fine that even waves of light are on far too large a scale to be suitable for its investigation, as a single wavelength extends over a large number of molecules. This anticipation has been fully realized by Lenard’s experiments on the obstruction offered to the passage of these corpuscles through different substances. Lenard found that this obstruction depended only upon the density of the substance and not upon its chemical composition or physical state. He found that, if he took plates of different substances of equal areas and of such thicknesses that the masses of all the plates were the same, then, no matter what the plates were made of, whether of insulators or conductors, whether of gasses, liquids or solids, the resistance they offered to the passage of the corpuscles through them was the same. Now this is exactly what would happen if the atom of the chemical elements were aggregations of a large number of equal particles of equal mass; the mass of an atom being proportional to the number of these particles contained in it and the atom being a collection of such particles through the interstices between which the corpuscle might find its way. Thus a collision between a corpuscle and an atom would not be so much a collision between the corpuscle and the atom as a whole, as between a corpuscle and the individual particles of which the atom consists; and the number of collisions the corpuscle would make, and therefore the resistance it would experience, would be the same if the number of particles in unit volume were the same, whatever the nature of the atoms might be into which these particles are aggregated. The number of particles in unit volume is however fixed by the density of the substance and thus on this view the density and the density alone should fix the resistance offered by the substance to the motion of a corpuscle through it; this, however, is precisely Lenard’s result, which is thus a strong confirmation of the view that the atoms of the elementary substances are made up of simpler parts all of which are alike. This and similar views of the constitution of matter have often been advocated; thus in one form of it, known as Prout’s hypothesis, all the elements were supposed to be compounds of hydrogen. We know, however, that the mass of the primordial atom must be much less than that of hydrogen. Sir Norman Lockyer has advocated the composite view of the nature of the elements on spectroscopic grounds, but the view has never been more boldly stated than it was long ago by Newton who says:

“The smallest particles of matter may cohere by the strongest attraction and compose bigger particles of weaker virtue and many of these may cohere and compose bigger particles whose virtue is still weaker and so on for divers succession, until the progression ends in the biggest particles on which the operations in Chemistry and the colours of natural bodies depend and which by adhering compose bodies of a sensible magnitude.”

The reasoning we used to prove that the resistance to the motion of the corpuscle depends only upon the density is only valid when the sphere of action of one of the particles on a corpuscle does not extend as far as the nearest particle. We shall show later on that the sphere of action of a particle on a corpuscle depends upon the velocity of the corpuscle, the smaller the velocity the greater being the sphere of action, and that if the velocity of the corpuscle falls as low as 10⁷ centimeters per second, then, from what we know of the charge on the corpuscle and the size of molecules, the sphere of action of the particle might be expected to extend further than the distance between two particles and thus for corpuscles moving with this and smaller velocities we should not expect the density law to hold.

Existence of free corpuscles or negative electricity in metals

In the cases hitherto described the negatively electrified corpuscles had been obtained by processes which require the bodies from which the corpuscles are liberated to be subjected to somewhat exceptional treatment. Thus in the case of the cathode rays the corpuscles were obtained by means of intense electric fields, in the case of the incandescent wire by great heat, in the case of the cold metal surface by exposing this surface to light. The question arises whether there is not to some extent, even in matter in the ordinary state and free from the action of such agencies, a spontaneous liberation of those corpuscles, a kind of dissociation of the neutral molecules of the substance into positively and negatively electrified parts, of which the latter are the negatively electrified corpuscles.

Let us consider the consequences of some such effect occurring in a metal, the atoms of the metal splitting up into negatively electrified corpuscles and positively electrified atoms and these again after a time re-combining to form neutral systems. When things have got into a steady state the number of corpuscles re-combining in a given time will be equal to the number liberated in the same time. There will thus be diffused through the metal swarms of these corpuscles; these will be moving about in all directions like the molecules of a gas and, as they can gain or lose energy by colliding with the molecules of the metal, we should expect by the kinetic theory of gasses that they will acquire such an average velocity that the mean kinetic energy of a corpuscle moving about in the metal is equal to that possessed by a molecule of a gas at the temperature of the metal; this would make the average velocity of the corpuscles at 0° C. about 10⁷ centimeters per second. This swarm of negatively electrified corpuscles when exposed to an electric force will be sent drifting along in the direction opposite to the force; this drifting of the corpuscles will be an electric current, so that we could in this way explain the electrical conductivity of metals.

The amount of electricity carried across unit area under a given electric force will depend upon and increase with (1) the number of free corpuscles per unit volume of the metal, and (2) the freedom with which these can move under the force between the atoms of the metal; the latter will depend upon the average velocity of these corpuscles, for if they are moving with very great rapidity the electric force will have very little time to act before the corpuscle collides with an atom, and the effect produced by the electric force is annulled. Thus the average velocity of drift imparted to the corpuscles by the electric field will diminish as the average velocity of translation, which is fixed by the temperature, increases. As the average velocity of translation increases with the temperature, the corpuscles will move more freely under the action of an electric force at low temperatures than at high, and thus from this cause the electrical conductivity of metals would increase as the temperature diminishes. In a paper presented to the International Congress of Physics at Paris in the autumn of last year, I described a method by which the number of corpuscles per unit volume and the velocity with which they move under an electric force can be determined. Applying this method to the case of bismuth, it appears that at the temperature of 20° C. there are about as many corpuscles in a cubic centimeter as there are molecules in the same volume of a gas at the same temperature and at a pressure of about ¼ of an atmosphere, and that the corpuscles under an electric field of 1 volt per centimeter would travel at the rate of about 70 meters per second. Bismuth is at present the only metal for which the data necessary for the application of this method exist, but experiments are in progress at the Cavendish Laboratory which it is hoped will furnish the means for applying the method to other metals. We know enough, however, to be sure that the corpuscles in good conductors, such as gold, silver or copper, must be much more numerous than in bismuth, and that the corpuscular pressure in these metals must amount to many atmospheres. These corpuscles increase the specific heat of a metal, and the specific heat gives a superior limit to the number of them in the metal.
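
Thomson’s bismuth figures imply a conductivity of a very plausible size. A rough modern check (the corpuscle density and mobility are the article’s; the electron charge, the Loschmidt number, and the unit conversions are modern assumptions layered on top):

```python
# Rough conductivity implied by the bismuth figures above. The density and
# mobility come from the article; the electron charge, Loschmidt number,
# and unit conversions are modern additions.

LOSCHMIDT = 2.7e19                  # molecules/cm^3 at 0 C, 1 atm (modern)
n = 0.25 * LOSCHMIDT * 1e6          # corpuscles per m^3 at ~1/4-atm density
e = 1.602e-19                       # electron charge, C
mu = 70.0 / 100.0                   # 70 m/s per (1 V/cm = 100 V/m), m^2/(V*s)

sigma = n * e * mu                  # conductivity, siemens per meter
print(f"sigma ~ {sigma:.1e} S/m")   # ~7.6e5, close to bismuth's ~8e5 S/m
```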

An interesting application of this theory is to the conduction of electricity through thin films of metal. Longden has recently shown that when the thickness of the film falls below a certain value, the specific resistance of the film increases rapidly as the thickness of the film diminishes. This result is readily explained by this theory of metallic conduction, for when the film gets so thin that its thickness is comparable with the mean free path of a corpuscle, the number of collisions made by a corpuscle in the film will be greater than in the metal in bulk; thus the mobility of the particles in the film will be less and the electrical resistance consequently greater.

The corpuscles disseminated through the metal will do more than carry the electric current; they will also carry heat from one part to another of an unequally heated piece of metal. For if the corpuscles in one part of the metal have more kinetic energy than those in another, then, in consequence of the collisions of the corpuscles with each other and with the atoms, the kinetic energy will tend to pass from those places where it is greater to those where it is less, and in this way heat will flow from the hot to the cold parts of the metal. As the rate at which the heat is carried will increase with the number of corpuscles and with their mobility, it will be influenced by the same circumstances as the conduction of electricity, so that good conductors of electricity should also be good conductors of heat. If we calculate the ratio of the thermal to the electric conductivity on the assumption that the whole of the heat is carried by the corpuscles we obtain a value which is of the same order as that found by experiment.

Weber many years ago suggested that the electrical conductivity of metals was due to the motion through them of positively and negatively electrified particles, and this view has recently been greatly extended and developed by Riecke and by Drude. The objection to any electrolytic view of the conduction through metals is that, as in electrolysis, the transport of electricity involves the transport of matter, and no evidence of this has been detected. This objection does not apply to the theory sketched above, as on this view it is the corpuscles which carry the current; these are not atoms of the metal, but very much smaller bodies which are the same for all metals.

It may be asked if the corpuscles are disseminated through the metal and moving about in it with an average velocity of about 10⁷ centimeters per second, how is it that some of them do not escape from the metal into the surrounding air? We must remember, however, that these negatively electrified corpuscles are attracted by the positively electrified atoms and in all probability by the neutral atoms as well, so that to escape from these attractions and get free a corpuscle would have to possess a definite amount of energy; if a corpuscle had less energy than this then, even though projected away from the metal, it would fall back into it after traveling a short distance. When the metal is at a high temperature, as in the case of the incandescent wire, or when it is illuminated by ultraviolet light, some of the corpuscles acquire sufficient energy to escape from the metal and produce electrification in the surrounding gas. We might expect too that, if we could charge a metal so highly with negative electricity that the work done by the electric field on the corpuscle, in a distance not greater than the sphere of action of the atoms on the corpuscles, was greater than the energy required for a corpuscle to escape, then the corpuscles would escape and negative electricity would stream from the metal. In this case the discharge could be effected without the participation of the gas surrounding the metal and might even take place in an absolute vacuum, if we could produce such a thing. We have as yet no evidence of this kind of discharge, unless indeed some of the interesting results recently obtained by Earhart with very short sparks should be indications of an effect of this kind.

A very interesting case of the spontaneous emission of corpuscles is that of the radio-active substance radium discovered by M. and Madame Curie. Radium gives out negatively electrified corpuscles which are deflected by a magnet. Becquerel has determined the ratio of the mass to the charge of the radium corpuscles and finds it is the same as for the corpuscles in the cathode rays. The velocity of the radium corpuscles is, however, greater than any that has hitherto been observed for either cathode or Lenard rays: being, as Becquerel found, as much as 2 × 10¹⁰ centimeters per second, or two-thirds the velocity of light. This enormous velocity explains why the corpuscles from radium are so very much more penetrating than the corpuscles from cathode or Lenard rays; the difference in this respect is very striking, for while the latter can only penetrate solids when they are beaten out into the thinnest films, the corpuscles from radium have been found by Curie to be able to penetrate a piece of glass 3 millimeters thick. To see how an increase in the velocity can increase the penetrating power, let us take as an illustration of a collision between the corpuscle and the particles of the metal the case of a charged corpuscle moving past an electrified body; a collision may be said to occur between these when the corpuscle comes so close to the charged body that its direction of motion after passing the body differs appreciably from that with which it started. A simple calculation shows that the deflection of the corpuscle will only be considerable when the kinetic energy with which the corpuscle starts on its journey towards the charged body is not large compared with the work done by the electric forces on the corpuscle in its journey to the shortest distance from the charged body. If d is the shortest distance, e and e′ the charges of the body and corpuscle, the work done is ee′/d; while if m is the mass and v the velocity with which the corpuscle starts, the kinetic energy to begin with is ½mv². Thus a considerable deflection of the corpuscle, i.e., a collision, will occur only when ee′/d is comparable with ½mv²; and d, the distance at which a collision occurs, will vary inversely as v². As d is the radius of the sphere of action for collision and as the number of collisions is proportional to the area of a section of this sphere, the number of collisions is proportional to d², and therefore varies inversely as v⁴. This illustration explains how rapidly the number of collisions and therefore the resistance offered to the motion of the corpuscles through matter diminishes as the velocity of the corpuscles increases, so that we can understand why the rapidly moving corpuscles from radium are able to penetrate substances which are nearly impermeable to the more slowly moving corpuscles from cathode and Lenard rays.
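
The inverse-fourth-power scaling makes the radium numbers concrete. The sketch below assumes a representative cathode-ray speed, which the article does not state, alongside Becquerel’s figure for radium:

```python
# The 1/v^4 collision scaling, made numeric. The cathode-ray speed is an
# assumed representative value; the article quotes only the radium figure.

v_cathode = 3e9          # cm/s (assumed typical cathode-ray speed)
v_radium = 2e10          # cm/s, Becquerel's measurement quoted above

ratio = (v_radium / v_cathode) ** 4     # collisions scale as 1/v^4
print(f"~{ratio:,.0f}x fewer collisions for radium corpuscles")  # ~2,000x
```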

Cosmical effects produced by corpuscles

As a very hot metal emits these corpuscles it does not seem an improbable hypothesis that they are emitted by that very hot body, the sun. Some of the consequences of this hypothesis have been developed by Paulsen, Birkeland and Arrhenius, who have developed a theory of the Aurora Borealis from this point of view. Let us suppose that the sun gives out corpuscles which travel out through interplanetary space; some of these will strike the upper regions of the Earth’s atmosphere and will then, or even before then, come under the influence of the Earth’s magnetic field. The corpuscles, when in such a field, will describe spirals round the lines of magnetic force; as the radii of these spirals will be small compared with the height of the atmosphere, we may for our present purpose suppose that they travel along the lines of the Earth’s magnetic force. Thus the corpuscles which strike the Earth’s atmosphere near the equatorial regions where the lines of magnetic force are horizontal will travel horizontally, and will thus remain at the top of the atmosphere where the density is so small that but little luminosity is caused by the passage of the corpuscles through the gas; as the corpuscles travel into higher latitudes where the lines of magnetic force dip, they follow these lines and descend into the lower and denser parts of the atmosphere, where they produce luminosity, which on this view is the Aurora.
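
The claim that the spiral radii are small compared with the height of the atmosphere checks out with modern constants. A sketch, assuming a corpuscle at Becquerel’s two-thirds of light speed and a field of about 0.5 gauss (the field strength is an assumed typical value, not a figure from the article):

```python
# Gyroradius of a corpuscle spiraling in Earth's field, with modern
# constants; the 0.5-gauss field strength is an assumed typical value.
import math

m_e = 9.109e-31           # electron mass, kg
q = 1.602e-19             # electron charge, C
B = 0.5e-4                # ~0.5 gauss, in tesla (assumed)
v = 2e8                   # m/s: Becquerel's 2e10 cm/s for radium corpuscles

beta = v / 2.998e8
gamma = 1 / math.sqrt(1 - beta**2)    # relativistic factor, ~1.34
r = gamma * m_e * v / (q * B)         # gyroradius, meters

print(f"spiral radius ~ {r:.0f} m")   # ~30 m, vs. an atmosphere ~1e5 m deep
```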

As Arrhenius has pointed out, the intensity of the Aurora ought to be a maximum at some latitude intermediate between the pole and the equator, for, though in the equatorial regions the rain of corpuscles from the sun is greatest, the Earth’s magnetic force keeps these in such highly rarefied gas that they produce but little luminosity, while at the pole, where the magnetic force would pull them straight down into the denser air, there are not nearly so many corpuscles; the maximum luminosity will therefore be somewhere between these places. Arrhenius has worked out this theory of the Aurora very completely and has shown that it affords a very satisfactory explanation of the various periodic variations to which it is subject.

As a gas becomes a conductor of electricity when corpuscles pass through it, the upper regions of the air will conduct, and when air currents occur in these regions, conducting matter will be driven across the lines of force due to the Earth’s magnetic field, electric currents will be induced in the air, and the magnetic force due to these currents will produce variations in the Earth’s magnetic field. Balfour Stewart suggested long ago that the variations in the Earth’s magnetic field were caused by currents in the upper regions of the atmosphere, and Schuster has shown, by the application of Gauss’ method, that the seat of these variations is above the surface of the Earth.

The negative charge in the Earth’s atmosphere will not increase indefinitely in consequence of the stream of negatively electrified corpuscles coming into it from the sun, for as soon as it gets negatively electrified it begins to repel negatively electrified corpuscles from the ionized gas in the upper regions of the air, and a state of equilibrium will be reached when the Earth has such a negative charge that the corpuscles driven by it from the upper regions of the atmosphere are equal in number to those reaching the Earth from the sun. Thus, on this view, interplanetary space is thronged with corpuscular traffic, rapidly moving corpuscles coming out from the sun while more slowly moving ones stream into it.

In the case of a planet which, like the moon, has no atmosphere there will be no gas for the corpuscles to ionize, and the negative electrification will increase until it is so intense that the repulsion exerted by it on the corpuscles is great enough to prevent them from reaching the surface of the planet.

Arrhenius has suggested that the luminosity of nebulae may not be due to high temperature, but may be produced by the passage through their outer regions of the corpuscles wandering about in space, the gas in the nebulae being quite cold. This view seems in some respects to have advantages over that which supposes the nebulae to be at very high temperatures. These and other illustrations, which might be given did space permit, seem to render it probable that these corpuscles may play an important part in cosmical as well as in terrestrial physics.

*Professor Schuster in 1889 was the first to apply the method of the magnetic deflection of the discharge to get a determination of the value of m/e; he found rather widely separated limiting values for this quantity and came to the conclusion that it was of the same order as in electrolytic solutions. The results of the method mentioned above, as well as those of Wiechert, Kaufmann and Lenard, make it very much smaller.

The cover of August 1901’s Popular Science Monthly.

Some text has been edited to match contemporary standards and style.

From the archives: When food allergies were ‘strange pranks’ for scientists to decipher https://www.popsci.com/science/food-allergies-history/ Fri, 13 May 2022 11:00:00 +0000 https://www.popsci.com/?p=441293
A collage of images from the Popular Science article “Food or Poison? … Strange Pranks of a Medical Mystery” (Frederic Damrau, M.D., November 1936)
“Food or Poison? … Strange Pranks of a Medical Mystery” (Frederic Damrau, M.D., November 1936). Popular Science

A November 1936 Popular Science article presented the data then available on food allergies, limited as it was.

The post From the archives: When food allergies were ‘strange pranks’ for scientists to decipher appeared first on Popular Science.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

When Popular Science ran the article “Food or Poison?” in November 1936, it would be more than 30 years before Kimishige Ishizaka and his wife Teruko would discover Immunoglobulin E, or IgE, the antibody responsible for allergic reactions. But, even in 1936, the hunt for clues that would identify and explain food allergies had been underway for decades.

German dermatologist Josef Jadassohn may have been the first to devise a test for diagnosing such sensitivities, in 1896. In his patch test, Jadassohn would bind a swatch infused with the prospective allergen to the skin to see if a rash developed. In 1912, American pediatrician Oscar Menderson Schloss was the first to use a skin prick test, which is still in use today. And nearly half a century before the IgE discovery, Carl Prausnitz and Heinz Küstner identified the role of antibodies, then known as “reagins,” in producing allergic reactions.

Frederic Damrau, the MD who penned Popular Science’s 1936 story, describes a new test developed by American allergist Warren Vaughan that detected a “noticeable decrease in the number of white corpuscles [white blood cells] in the blood” when patients were fed allergy-triggering foods—a tantalizing clue implicating the immune system. Damrau’s account is filled with humorous and alarming anecdotes, illuminated by illustrations from artist Benjamin Goodwin Seielstad.

Food allergies have been on the rise for decades, primarily in industrialized countries. And despite considerable diagnostic progress over the last century, there is still no cure for food allergies, a malady that affects 1 in 13 children in the US.

“Food or Poison? … Strange Pranks of a Medical Mystery” (Frederic Damrau, M.D., November 1936)

Not long ago, a man arrived at the famous Mayo Clinic, at Rochester, Minn. This was his curious story: Every morning at eleven o’clock, no matter whether he was in a business conference or driving his car, he dropped asleep!

Dr. Walter Alvarez, of the clinic, followed clew after clew. Finally, he traced the ailment back to the man’s breakfast, to his cup of coffee, and even to the cream in his morning beverage. When the patient eliminated cream from his coffee, the trouble disappeared!

Just as amazing are thousands of other instances of that strange and often fantastic disorder known as allergy. Victims are upset when they eat, breathe, or touch substances which are harmless to the average person. Literally, what is meat for millions is poison for them.

If eggs make you break out in a rash, if strawberries give you hives, if cats set you sneezing, you are allergic. Between 10,000,000 and 15,000,000 Americans, it is estimated, are allergic to something.

Every time a boy who lives in Brooklyn, N. Y., chews gum, he starts to cough and sneeze. He is sensitive to chicles. Every time a girl in Chicago, Ill., smells chrysanthemums, her eyes puff up. She is allergic to the flower’s pollen. Every time a man in the South puts catchup on his steak, he chokes and gasps for breath. He is affected by tomatoes in any form. Every time a woman in St. Louis, Mo., eats an onion, she gets blue spots on her skin. Every time—but the list goes on, indefinitely.

I have known people allergic to such familiar things as wallpaper, Christmas trees, sauerkraut, rubber, red plums, seed corn, asters, rice, dates, ginger ale, flyspecks, corn silk. I have heard of a butcher allergic to mutton; a florist sensitive to primroses; a carpenter affected by wood dust. And medical research is continually adding new trouble makers to the list.

At a recent meeting of the American Medical Association, the noted Kansas City, Mo., specialist, Dr. William W. Duke, reported a case of “scratch allergy.” The patient was hypersensitive to mechanical irritation. Even a scratch might prove fatal, not from infection but from the shock of the tiny injury.

Some months ago, a physician friend of mine discovered what he thought was “aunt allergy.” Each time a particular aunt visited a six-year-old child, the boy broke out in a rash resembling measles!

In the end, however, the doctor discovered that the youngster was violently allergic to eggs. The aunt invariably had bacon and eggs for breakfast, and when she kissed her nephew, traces remaining on her lips were sufficient to upset him!

Even more astonishing cases are familiar to medical men. Several patients have proved so sensitive to eggs that meat from a hen caused them to break out in a rash while meat from a rooster gave them no trouble. Infinitesimal traces of egg in the hen’s meat were responsible. Likewise, meat from a cow may upset a patient violently allergic to milk, while beef from a steer proves harmless.

So sensitive was one patient to buckwheat that a single drop of honey made by bees after visiting buckwheat flowers would produce severe abdominal pains. It was not the sugar content of the honey that caused the trouble. It was the remnants of buckwheat. In the laboratory, if you remove the water and sugar from honey by dialysis, or the use of special membranes, virtually nothing remains behind. Yet, it was this “nothing” which brought on the attack!

When a physician encounters an allergic patient, his work more than ever resembles that of a detective. He hunts clews; he eliminates suspects; he traces effect back to cause. The commonest method of tracking down an outlaw substance is known as the “scratch test.” How it works is illustrated by a mystifying case reported from the Middle West.

One day, an elderly man licked the flap of an envelope in sealing a letter. A few minutes later, he began to tingle from head to foot. Then, his face grew purple, his breath came in gasps, and he dropped to the floor, unconscious. It was fifteen minutes before he came to. But in half an hour he was as well as ever. On another occasion, he tried on a pair of shoes that had just come back from the cobbler’s. Hardly could he tear them from his feet before he fainted. What was the secret of the strange attacks?

His physician suspected that the root of the trouble was allergy. Making tiny scratches on the patient’s arm, he bound various substances tightly against the skin. In this test, harmless substances produce no effect; harmful ones cause a rash on the skin or react in some other way. Almost the instant that fish glue touched his arm, the patient started, and began to gasp for breath. This glue was a violent poison to his system and he had encountered it on both the insoles of the shoes and the flap of the envelope.

By means of the scratch test, one doctor found that an unfortunate four-year-old child was allergic to twenty-eight different things. She suffered from hay fever, asthma, hives, and a constantly upset stomach, all caused by the everyday substances that were poison to her system. They included potatoes, eggs, salmon, cod fish, mustard, green peppers, black pepper, chicken feathers, cattle hair, ragweed pollen, cockleburs, and aspirin.

Recently, the famous Richmond, Va., allergist, Dr. Warren T. Vaughan, announced a new and more sensitive test for foods that cause trouble, based on the pioneer work of the French scientist, Dr. F. Widal. After a twelve-hour fast, the patient dines on the suspected food. Then, blood samples, taken at half-hour intervals, go under the microscope. If the food is the trouble maker, there will be a noticeable decrease in the number of white corpuscles in the blood.

Only a few weeks after the new test was made public, it gave dramatic proof of its value. For eight years, a patient had been confined in a middle western sanitarium with a persistent fever. Doctors diagnosed her condition as tuberculosis. Using the Vaughan test, a physician proved that she was suffering from allergy and was continually being upset by the very foods she was being fed to make her well! When these foods were eliminated, the fever subsided and she was able to leave the sanitarium where she had spent nearly a decade.

Strangely enough, it is often the most wholesome foods that cause the most trouble. Eggs, wheat, and milk are, in that order, first on the list of troublemakers. Also, the allergy victim rarely dislikes the food that makes him sick—and oftentimes it is his favorite dish!

If you ask a specialist to explain just what such a food does in the system, he will be hard put to answer. Someone once asked Thomas A. Edison for the definition of electricity. He replied that any schoolboy could give as good a definition as he could. He knew what electricity does but not what it is. So with allergy. We know its effects, but much concerning how the effects are produced is shrouded in mystery. Two national groups, the Association for the Study of Allergy and the Society for the Study of Asthma and Allied Conditions, are now seeking to penetrate these mysteries.

One widely accepted theory is that the reaction is caused by foreign substances reaching the bloodstream. The patient’s system has built up a standing army of tiny bodies in the blood to fight this particular substance. When more of it is introduced, many specialists believe, these bodies go into action so quickly that the health of the patient is upset.

This theory that the reaction takes place in the blood stream would explain an occurrence in an eastern hospital that reads, at first glance, like a page from some Baron Munchausen of medicine.

In an emergency, a patient received a blood transfusion that saved his life. But, shortly afterwards, he began to sneeze repeatedly. Investigation showed that the donor of the blood was allergic to chicken feathers and that the allergy had been transferred temporarily to the patient, who was immediately affected by the feathers in his pillow!

Another curious instance of temporary allergy was reported to the American Medical Association. After an abdominal operation, a woman developed symptoms of hay fever. Her physician discovered that she was allergic to the catgut used in sewing up the incision. It had been treated to last forty days. At the end of that time, when the catgut had been absorbed by the body, the “hay fever” disappeared.

Occasionally, some common drug, such as quinine or aspirin, will produce an unexpected result because the patient is allergic to it. One man, in the South, who was dying of diabetes, could not take insulin. A girl who had an infected hand made matters worse by putting on a flaxseed poultice. She was allergic to flaxseed.

Cosmetics—face powders, lip sticks, perfumes, hair lotions, soaps—often act as poisons to sensitive persons. I remember one case in which a wealthy woman traveled thousands of miles—to California, Florida, Africa— in search of a climate that would relieve her asthma. Then, she found she was carrying her asthma wherever she went—in her powder compact. She was allergic to orris root, one of the ingredients in the powder she used.

That recalls one of the funniest cases I ever encountered. A young man found that whenever he kissed his sweetheart he began to wheeze and sniffle. A special brand of face powder was the explanation.

Again, there is the record of a sea captain who had an attack of asthma whenever he came into port but who was free from the disorder at sea. Investigation showed that he was sensitive to orris root in face powder. At sea, where there were no women and no face powder, his asthma disappeared.

To aid thousands of persons who are allergic to orris root, a Chicago manufacturer has put on the market a powder free from the troublesome substance. This same company is turning out a complete line of nonallergic cosmetics which are sold from coast to coast.

Other concerns are catering to the trade of those sensitive to various foods and dusts. A milk substitute made from soybeans which can be digested by patients who are upset by ordinary milk is now on the market and a process recently patented by an Ohio inventor will make it possible for allergic people to drink cows’ milk without ill effects. Special heating chambers remove the objectionable elements. Incidentally, it is rarely the milk itself that causes trouble. Rather it is the traces of something the cow has eaten, such as bran, weeds, or various flowers.

In Massachusetts, a large furniture factory is doing a flourishing business supplying special chairs, beds, and sofas to buyers who are allergic to feathers and animal hairs. Large department stores throughout the United States are also selling specially designed covers that slip on over chairs and sofas to prevent dust from the interior from reaching the air.

Of course, the commonest form of allergy due to floating particles is hay fever. Hundreds of thousands of people travel millions of miles a year to escape the air-borne pollen which causes them misery. That it is not necessary even to swallow or breathe a substance to have it upset you is illustrated by a host of curious cases in the records of allergy. Here are two with a humorous twist.

A New York woman went to an oculist to be fitted with new glasses. On her way home, she noticed that people were turning and staring at her as they went by. When she glanced in the mirror, she understood the reason. Across each cheek was a huge cherry-red welt. The composition used in the frames of the spectacles evidently contained some element to which her system was allergic.

Imagine the amazement of another eastern woman when her upper lip began to swell as soon as she started playing the flute in an orchestra of which she was a member! By the end of the concert, it was puffed up as though a bee had stung it. That night, the swelling went down. But the next day, when she started to practice, it puffed up again. Every time she put the flute to her mouth, her upper lip began to swell!

She took the instrument and her weird story to her family physician. He made tests and learned that a new mouthpiece recently placed on the flute was made of wood to which the woman was highly sensitive. When another mouthpiece replaced it, her mystifying disorder was ended.

In conclusion, here is the question which is most commonly asked me about allergy. Is it inherited? If your father is upset by milk or eggs or primrose pollen, does that mean you will be too?

Science can give a definite answer. It is: No. After studying the family histories of 250 allergic children and 315 normal ones, Dr. Bret Ratner, professor of children’s diseases at New York University College of Medicine, recently reported that he found no more allergic parents in the first group than in the second.

However, it is known that a tendency to be sensitive to something often is handed down from father to son. For example, I know a man who can ride horseback all day long, while his son begins to sneeze if he comes within half a block of a horse. But, the same son can eat walnuts whenever he wants while the slightest taste makes his father break out with hives!

In this strange world of idiosyncrasies, laws of which we know little are constantly at work. And, in their functioning, they produce some of the most fascinating, as well as bewildering, pages in the story of medicine.

The cover of Popular Science‘s November 1936 issue featuring breakthrough inventions and early DIY tips.

Some text has been edited to match contemporary standards and style.

The post From the archives: When food allergies were ‘strange pranks’ for scientists to decipher appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The resurgence of open public streets is a centuries-old idea https://www.popsci.com/science/complete-streets-open-city-design-pandemic/ Thu, 12 May 2022 13:00:00 +0000 https://www.popsci.com/?p=442432
a black and white and purple stylized collaged image of a man holding a drafting pencil over a model of a city and on the right hand side another cut out of a circular street design
Skyscraper architect Harvey W. Corbett came up with a future city that converted streets into gathering places. That idea is making a comeback. Popular Science

Nearly a century ago, a skyscraper architect designed a future city that transformed streets into outdoor gathering spaces. That idea is making a comeback.

The post The resurgence of open public streets is a centuries-old idea appeared first on Popular Science.

]]>

From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

By the mid-18th century, a narrow foot trail that once rose from its marshy source had been widened into a cobbled avenue, running from busy docks along the island’s southern end. It wound past tree-lined shops and tall Dutch-eave houses, bell-towered churches and small public gardens, traders with handcarts and open-air markets, and even a crowded bowling green, until, to the north, the broad way’s roadbed eventually faded back to dirt and vanished into a forest that cloaked the island’s spine. Pedestrians and horse-led wagons moved up and down the vibrant avenue where the heart of young New York pulsed.

In 1925, Harvey W. Corbett, an accomplished skyscraper architect, rendered his vision of a futuristic city for Popular Science. His blueprint called for a return to holistic city streets—now called complete streets—like the one depicted above. These pathways have ample room for pedestrians, outdoor gatherings, and nonmotor traffic. A century ago, New York had already shunted its thronging street life to narrow sidewalks in order to pave the way for industrialization. By then, more than half the US population had moved into cities, overwhelming their largely organic street designs. Much like New York’s Broadway, streets evolved from trade routes and open spaces meant to facilitate outdoor gatherings—where the exchange of goods, news, and ideas forged dynamic urban cores—to congested motor-vehicle corridors. In the decades since Corbett suggested we turn back the clock, progress has been spotty. Complete-street calculus shifted in 2020, however, when the pandemic enabled city officials to press reset. But will it last?

“The complete streets concept and movement has come out of the idea that streets need to be more than just places for cars to move quickly,” notes Corinne Kisner, Executive Director of the National Association of City Transportation Officials (NACTO), an association of 86 North American cities and transit agencies. It’s about making sure that “streets are public places where people can connect with each other and move around their city in ways that are safe and equitable.” It’s the way streets used to be until the late 19th century.

As industrialization gained momentum in the early 1900s, however, and the first industrial revolution (steam engines and mechanization) gave way to the second (mass production), factories and tenements sprouted in cities, and automobiles sped up and down increasingly congested avenues and winding streets, which stank of coal sulfur and engine exhaust. Beset by congestion, increasingly toxic living conditions, and the erosion of their historic cores, city managers turned to a small but growing field of urban planners and designers to fix the problem. In 1923, Harvard became the first university in the US to offer a post-graduate degree in urban planning—a complex field of engineering, architecture, transportation, social science, and politics.

[Related: The surprising politics of sidewalks]

Corbett’s vision captured the essence of the budding urban planning movement, whose primary focus was sustainable growth and development. He also cited a desire to restore the historic vibrance of city life, which, for millennia, had flowed from the streets, commons, marketplaces, forums, churches, and temples where people congregated.

Corbett’s street design was a multi-level network fit for a modern urban lifestyle. He included a dedicated park-like upper tier for pedestrians, with ready access to public spaces, shops, restaurants, and entertainment; two subterranean levels for “motor” traffic; and a fourth for mass transit. He predicted that by 1950, most Americans would be living in his vision of a future city, with half-mile-high skyscrapers, elevated parks, and airship landing strips atop buildings. Corbett wasn’t wrong about the percentage of Americans living in US cities—64 percent in 1950, 83 percent today—but he missed the mark on street design, largely because moving more cars faster has remained the priority.

Advocate and urbanism expert David Goldberg coined the term “complete streets” in late 2003 and breathed new life into unified urban planning efforts. As a member of the national advocacy group Smart Growth America, he hoped the phrase would help the biking advocacy organization America Bikes push bike-friendly legislation through the US Congress. The catchy term was taken up by many urban planning organizations across America, and by 2017 cities had enacted more than 100 complete-streets policies. The efforts all advocate for a mix of common features, much like Corbett’s design, which emphasized pedestrians, public spaces, sustainability, accessibility, and greenery—all while combating traffic congestion.

a design of a bustling city street where lanes for vehicles and pedestrians run more parallel for a more seamless flow
Photorealistic rendering based on concepts in the NACTO Blueprint for Autonomous Urbanism. Copyright © 2017 Bloomberg Philanthropies. Reproduced with permission by Bloomberg Philanthropies and the National Association of City Transportation Officials.

While Corbett’s nearly century-old concepts align with contemporary complete-streets goals (all but the “pure air piped in from the country” and “spiral escalators”), Kisner offers a different take on his vision. “There has been this assumption for a long time in this field that we can engineer our way out of congestion—you know, design more roads, or make multi-leveled roads that will solve everything, and it’s just not true.” More roads, more lanes, more levels mean more congestion if moving cars remains the goal. 

Instead, Kisner, like other complete streets advocates, wants to break the moving-cars-as-priority paradigm. “We know how to design streets in ways that are less dominated by automobile traffic,” she adds. In 2017, NACTO commissioned its own futuristic blueprint of city streets, dubbed Autonomous Urbanism, which shifts the focus back on pedestrians rather than vehicles. Autonomous Urbanism “prioritizes people walking, biking, rolling, and taking transit, putting people at the center of urban life and street design, while taking advantage of new technologies,” the authors write in the NACTO report. It offers today what Corbett offered in 1925: an optimistic rendering of city life. Unlike Corbett, NACTO was wise enough not to set a timetable.

[Related: Urban sprawl defines unsustainable cities, but it can be undone]

But it would take a pandemic to truly break the moving-cars paradigm—or at least interrupt it—and open up city streets to pedestrians and merchants once more in an effort to keep cities’ commercial and cultural hearts beating. In many ways, pandemic-driven open streets initiatives restored the essence of city life that urban planners have been striving to recapture since the dawn of industrialization: taking city streets back from fast locomotion and restoring them to slow congregation. It’s the sort of transformation that has rippled around the world, and has often manifested in cleaner air, quieter neighborhoods, and more resident interaction, even amid lockdowns and social distancing. This transformation is exemplified nowhere better, perhaps, than on the normally traffic-filled 34th Avenue in Queens, NY. Every day from 7:30 a.m. to 8 p.m., the street closes to cars and makes way for salsa dancing, yoga, and arts and crafts. The communal gathering point offers a rare shared space for culturally diverse groups to interact and is viewed by some locals as a miracle.

According to an October 2020 report by traffic analytics firm Inrix, Manhattan saw “a significant 31 percent jump in activity near Open Restaurants” in July that year when outdoor dining went into effect. But cities now face the decision to either reinstate pre-pandemic traffic—closing their streets to pedestrians and markets—or seize the opportunity to move toward the more livable version of city life envisioned by Corbett as far back as 1925.

“City Departments of Transportation all over the country are having conversations with their communities about what we have learned throughout the past year,” says Kisner. “I think we’ll see more evolution as things shift from this temporary or pilot space. And we’ve got time to uncover what worked, what didn’t work, and what we need to do to make sure that the way we’re designing streets continues to be really inclusive and sustainable.” 

The post The resurgence of open public streets is a centuries-old idea appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: The Theory of Relativity gains speed https://www.popsci.com/science/theory-of-relativity-popularity/ Thu, 12 May 2022 11:00:00 +0000 https://www.popsci.com/?p=441261
A collage of images from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914)
“The theory of relativity and the new mechanics” (William Marshall, June 1914). Popular Science

A June 1914 article in Popular Science Monthly explored the precedents and implications of Einstein's 1905 Theory of Relativity.

The post From the archives: The Theory of Relativity gains speed appeared first on Popular Science.

]]>

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Although it may seem like Albert Einstein’s Theory of Relativity caught the world by surprise at the turn of the 20th century, in fact, it was a long time coming. Relativity’s roots can be traced to Galileo’s writings in 1632. To prove Copernicus’s heliocentric system, physics had to show that although Earth swung through space and rotated on its axis, observers on Earth would have no direct way of knowing that they were the ones in motion relative to the cosmos. Since early 17th-century mathematics lacked the tools to aid Galileo’s proof, he conducted a thought experiment that employed the cabin of a ship to demonstrate the principle of relativity—how space and time are relative to frames of reference.

Even when Einstein published his theory in 1905, it did not arrive with a thunderclap. Rather, it slipped into the world almost incognito, in an Annalen der Physik article, “On the Electrodynamics of Moving Bodies.” By the time Popular Science published a detailed account of Einstein’s Theory of Relativity in 1914, its profound implications—such as light dictating the speed limit for everything, and the notion that time is not the same for everyone—had finally made their way through scientific circles. But as mathematician William Marshall, who penned Popular Science’s eminently readable explanation of the new theory, pointed out, Einstein’s work—somewhat poetically—was not accomplished in isolation.

“The theory of relativity and the new mechanics” (William Marshall, June 1914)

He who elects to write on a mathematical topic is confronted with a choice between two evils. He may decide to handle his subject mathematically, using the conventional mathematical symbols, and whatever facts, formulas and equations the subject may demand—save himself who can! Or he may choose to abandon all mathematical symbols, formulas and equations, and attempt to translate into the vernacular this language which the mathematician speaks so fluently. In the one case there results a finished article which only the elect understand, in the other, only a rather crude and clumsy approximation to the truth. A similar condition exists in all highly specialized branches of learning, but it can safely be said that in no other science must one fare so far, and accumulate so much knowledge on the way, in order to investigate or even understand new problems. And so it is with some trepidation that the attempt is made to discuss in the following pages one of the newest and most important branches of mathematical activity. For the writer has chosen the second evil, and, deprived of his formulas, to borrow a figure of Poincaré’s, finds himself a cripple without his crutches.

After this mutually encouraging prologue let us introduce the subject with a definition. What is relativity? By relativity, the theory of relativity, the principle of relativity, the doctrine of relativity, is meant a new conception of the fundamental ideas of mechanics. By the relativity mechanics, or as we may sometimes say, the new mechanics, is meant that body of doctrine which is based on these new conceptions. Now this is a very simple definition and one which would be perfectly comprehensible to everybody, provided the four following points were made clear: first, what are the fundamental concepts of mechanics, second, what are the classical notions about them, third, how are these modified by the new relativity principles, and fourth, how did it come about that we have been forced to change our notions of these fundamental concepts which have not been questioned since the time of Newton? These four questions will now be discussed, though perhaps not in this order. The results reached are, to say the least, amazing, but perhaps our astonishment will not be greater than it was when first we learned, or heard rather, that the Earth is round, and that there are persons directly opposite us who do not fall off, and stranger yet, do not realize that they are in any immediate danger of doing so. 

In the first place then, how has it come about that our conceptions of the fundamental notions of mechanics have been proved wanting? This crime like many another may safely be laid at the door of the physicists, those restless beings who, with their eternal experimenting, are continually raising disturbing ghosts, and then frantically imploring the aid of the mathematicians in order to exorcize them. Let us briefly consider the experiment which led us into those difficulties from which the principle of relativity alone apparently can extricate us.

Consider a source of sound A at rest (Fig. 1), and surrounded by air, in which sound is propagated, also at rest. 

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

Now, as every schoolboy knows, the time taken for sound to go to B is the same as that taken to go to C, if B and C are at the same distance from A. The same is true also if A, B and C are all moving with uniform velocity in any direction, carrying the air with them. This may be realized by a closed railway car or a boat. But if the points A, B, and C are moving with uniform velocity, and the air is at rest relative to them, or what is the same thing, if they are at rest and the air is moving past them with uniform velocity, the state of affairs is very different. If the three points are moving in the direction indicated by the arrow (Fig. 2), and if the air is at rest, and if a sound wave is sent out from A, then the time required for this sound wave to go from A to C is not the same as that required from A to B. Now as sound is propagated in air, so is light in an imaginary medium, the ether. Moreover, this ether is stationary, as many experiments show, and the earth is moving through it, in its path around the sun with a considerable velocity. Therefore we have exactly the same case as before, and it should be very easy to show that the velocity of light in a direction perpendicular to the Earth’s direction of motion is different from that in a direction which coincides with it. But a famous experiment of Michelson and Morley, carried out with the utmost precision, showed not the slightest difference in these velocities. So fundamental are these two simple experimental facts, that it will be worthwhile to repeat them in slightly different form. If the three points A, B, C (Fig. 2), are moving to the right with a uniform unknown velocity through still air, and if a sound wave were sent out from A, it would be exceedingly simple to determine the velocity of the point A by a comparison of the time necessary for sound to travel from A to B and from A to C. But now if the same three points move through stationary ether, and if the wave emanating from A is a light wave, there is absolutely no way in which an observer connected with these three points can determine whether he is moving or not. Thus we are, in consequence of the Michelson and Morley experiment, driven to the first fundamental postulate of relativity: The uniform velocity of a body can not be determined by experiments made by observers on the body.
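
In modern notation (ours, not part of the 1914 text), the classical expectation behind the experiment is easy to state. If the apparatus drifts through a stationary medium with speed v, a signal of speed c making a round trip over an arm of length L needs different times along and across the motion:

\[
t_{\parallel} = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2L}{c}\cdot\frac{1}{1-v^2/c^2},
\qquad
t_{\perp} = \frac{2L}{\sqrt{c^2-v^2}} = \frac{2L}{c}\cdot\frac{1}{\sqrt{1-v^2/c^2}}.
\]

The Michelson and Morley interferometer was sensitive enough to detect the difference these formulas predict; it found none.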

Consider now one of the fundamental concepts of mechanics, time. Physicists have not attempted to define it, admitting the impossibility of a definition, but still insisting that this impossibility was not owing to our lack of knowledge, but was due to the fact that there are no simpler concepts in terms of which time can be defined. As Newton says: “Absolute and real time flows on equably, having no relation in itself or in its nature to any external object.”

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

Let us examine this statement, which embodies fairly our notion of time, in the light of the first fundamental principle of relativity just laid down. Suppose A and B (Fig. 3) are two observers, some distance apart, and they wish to set their clocks together. At a given instant agreed upon beforehand, A sends out a signal, by wireless if you wish, and B sets his clock at this instant. But obviously the signal has taken some time to pass from A to B, so B’s clock is slow. But this seems easy to correct; B sends a signal and A receives, and they take the mean of the correction. But, says the first principle of relativity, both A and B are moving through the ether with a velocity which neither knows, and which neither can know, and therefore the time taken for the signal to pass from A to B is not the same as that taken to pass from B to A. Therefore the clocks are not together, and never can be, and when A’s clock indicates half-past two, B’s does not indicate this instant, and worse yet, there is absolutely no way of determining what time it does indicate. Time then is purely a local affair. The well-known phrase “at the same instant” has no meaning for A and B, unless a definition be laid down giving it a meaning. The “now” of A may be the “past” or “future” of B. To state the case in still other words, two events can no more happen simultaneously at two different places, than can two bodies occupy the same position.
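
Marshall’s “mean of the correction” is what later came to be called the Einstein synchronization convention. In modern terms (a gloss on the 1914 text, not part of it): if A emits a signal at his clock reading t_1, B reflects it at once, and A receives it back at reading t_2, then the clocks are defined to be synchronized when B’s clock read

\[
t_B = \tfrac{1}{2}\,(t_1 + t_2)
\]

at the moment of reflection. The difficulty Marshall describes is precisely that this is a definition, not a measurement of absolute simultaneity.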

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

But doubtless the reader is anxious to say, this matter of adjusting the clocks together can still be settled. Let there be two clocks having the same rate at a point A, and let them be set together. Then let one of them be carried to the point B; can not they then be said to be together? Let us examine this relative motion of one clock with respect to another, in the light of the first principle of relativity. Let there be two observers as before with identical clocks, and for simplicity, suppose A is at rest and B moving on the line BX (Fig. 4). Suppose further BX parallel to AY. Let now A send out a light signal which is reflected on the line BX and returns to A. The signal has then traveled twice the distance between the lines in a certain time. B then repeats the same experiment, for, as far as he knows, he is at rest, and A moving in the opposite direction. The signal traverses twice the distance between the lines, and B’s clock must record the same interval of time as A’s did. But now suppose B’s experiment is visible to A. He sees the signal leave B, traverse the distance between the lines, and return, not to the point B, but to the point to which B has moved in consequence of his velocity.

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

That is, A sees the experiment as in Fig. 5, where the position of B’ depends on B’s velocity with respect to A. The state of affairs is to A then simply this: A signal with a certain known velocity has traversed the distance ABA while his (A’s) clock has registered a certain time interval. The same signal, moving with the same velocity, has traversed the greater distance BCB’ while B’s clock registers exactly the same time interval. The only conclusion is that to A, B’s clock appears to be running slow, as we say, and its rate will depend on the relative velocity of A and B. Thus we are led to a second conclusion regarding time in the relativity mechanics. To an observer on one body the time unit of another body moving relative to the first body varies with this relative velocity. This last conclusion regarding time is certainly staggering, for it takes away from us what we have long regarded as its most distinguishing characteristic, namely, its steady, inexorable, onward flow, which recognizes neither place nor position nor movement nor anything else. But now in the new mechanics it appears only as a relative notion, just as velocity is. There is no more reason why two beings should be living at the same rate, to coin an expression, than that two railroad trains should be running at the same speed. It is no longer a figure of speech to say that a thousand years are but as yesterday when it is past, but a thousand years and yesterday are actually the same time interval provided the bodies on which these two times are measured have a sufficiently high relative velocity.
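
In today’s symbols (again a gloss, not part of the 1914 article), the light-clock geometry Marshall sketches yields the standard time-dilation relation. If B’s clock registers an interval Δτ between the departure and return of his signal, A measures

\[
\Delta t = \frac{\Delta\tau}{\sqrt{1 - v^2/c^2}},
\]

where v is the relative velocity and c the velocity of light; B’s clock thus appears to A to run slow by the factor \(\sqrt{1 - v^2/c^2}\).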

It is to be noted that in the above discussion, use was made of the fact that the light signal sent out by B appeared to A to have the same velocity as one sent out by A himself. Stated in general terms, this is the second fundamental postulate of relativity: the velocity of light in free space appears the same to all observers, regardless of the motion of the source of light or of the observer. It is an assumption pure and simple, reasonable on account of the analogy between sound and light, and does not contradict any known facts.

Now there is a second fundamental concept of mechanics, very much resembling time in that we are unable to define it, namely, space. Instead of being one-dimensional, as is time, it is three-dimensional, which is not an essential difference. From the days of Newton and Galileo, physicists have agreed that space like time is everywhere the same, and that it too is independent of any motion or external object. To fix the ideas, consider any one of the units in measuring length, the yard, for example. To be sure, the bar of wood or iron, which in length more or less nearly represents this yard, may vary, as everyone knows, in its dimensions, on account of varying temperature or pressure or humidity, or whatnot, but the yard itself, this unit of linear space which we have arbitrarily chosen, according to all our preconceived notions, neither depends on place nor position, nor motion, nor any other thinkable thing. But let us follow through another imaginary experiment in the light of the two fundamental postulates of relativity.

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

Consider again our two observers A and B (Fig. 6), each furnished with a clock and a yardstick, A at rest, B moving in the direction indicated by the arrow. Suppose A sends out a light signal and adjusts a mirror at C say, so that a ray of light goes from A to C and returns in say one second. A then measures the distance AC with his yardstick and finds a certain number. Then B, supposing that he himself is at rest and A in motion, sends out a light signal and adjusts a mirror at D so that a ray travels the distance BD and back again in one second of his time. 

An illustration from the Popular Science article “The theory of relativity and the new mechanics” (William Marshall, June 1914).

B then measures the distance BD with his yardstick, and since the velocity of light is the same in any system, B comes out with the same number of units of length in BD as A found in AC. But A watching B’s experiment sees two remarkable facts: first, that the light has not traversed the distance BDB at all, but the greater distance BD’B’ (Fig. 7), where D’ and B’ are the points, respectively, to which D and B have moved in consequence of the motion; second, since B’s clock is running slow, the time taken for light to traverse this too great distance is itself too great. Now if too great a distance is traversed in too great a time, then the velocity will remain the same provided the factor which multiplies the distance is the same as that which multiplies the time. But unfortunately, or fortunately, a very little mathematics shows that this multiplier is not the same. A sees too short a distance being traversed by light in a second of time, and therefore B’s yardstick is too short, and by an amount depending on the relative velocity of A and B. Thus we are led to the astonishing general conclusion of the relativity theory with reference to length: If two bodies are moving relative to each other, then to an observer on the one, the unit of length of the other, measured in the direction of this relative velocity, appears to be shortened by an amount depending on this relative velocity. This shortening must not be looked upon as due to the resistance of any medium, but, as Minkowski puts it, must be regarded as purely a gift of the gods, a necessary accompaniment of the condition of motion. The same objection might be raised here as in the case of the time unit. Perhaps the length of the yardstick appears to change, but does the real length change? But the answer is, there is no way of determining the real length, or more exactly, the words real length have no meaning. Neither A nor B can determine whether he is in motion or at rest absolutely, and if B compares his measure with another one traveling with him, he learns nothing, and if he compares it with one in motion relative to him, he finds the two of different length, just as A did.
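
The corresponding modern formula for the conclusion just reached (not in Marshall’s text): a rod of rest length L_0, moving lengthwise at speed v relative to an observer, measures

\[
L = L_0\,\sqrt{1 - v^2/c^2}
\]

in that observer’s frame, while dimensions perpendicular to the motion are unaffected.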

This startling fact, that a railway train as it whizzes past us is shorter than the same train at rest, is at first a trifle disturbing, but how much of our amazement is due to our experience, or lack of it? [EDITOR’S NOTE: The author, below, demonstrated his point by means of an unfortunately racist analogy.] A certain African king, on beholding white men for the first time, reasoned that as all men were black, these beings, being white, could not be men. Are we any more logical when we say that, since in our experience no yardsticks have varied appreciably on account of their velocity, it is absurd to admit the possibility of such a thing?

Perhaps it might be well at this point to give some idea of the size of these apparent changes in the length of the time unit and the space unit, although the magnitude is a matter of secondary importance. The whole history of physics is a record of continual striving after more exact measurements, and a fitting of theory to meet new corrections, however small. So it need not occasion surprise to learn that these differences are exceedingly minute; the amazing thing, and the thing of scientific interest, is that they exist at all. If we consider the velocity of the earth in its orbit, which is about 19 miles per second, the shortening of the Earth’s diameter due to this velocity, as seen by an observer relative to whom the Earth is moving (one at rest relative to the sun, say), would be approximately a couple of inches only. Similarly for the relative motion of the Earth and the sun, the shortening of the time unit would be approximately one second in five years. Even if this were the highest relative velocity known, the results would still be of importance, but the Earth is by no means the most rapidly moving of the heavenly bodies, while the velocity of the radium discharge is some thousand times the velocity of the most rapidly moving planet.
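
Marshall’s figures check out. Taking v ≈ 19 miles per second, c ≈ 186,000 miles per second, and an Earth diameter D ≈ 7,900 miles (round numbers; the arithmetic is ours, not the article’s), the first-order expansion \(\sqrt{1-x} \approx 1 - x/2\) gives

\[
\Delta D \approx \tfrac{1}{2}\left(\frac{v}{c}\right)^2 D \approx \tfrac{1}{2}\,(1.04\times10^{-8})(7{,}900\ \mathrm{mi}) \approx 4\times10^{-5}\ \mathrm{mi} \approx 2.6\ \mathrm{inches},
\]

indeed “a couple of inches only.”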

In addition to space and time there is a third fundamental concept of mechanics, though the physicists have not yet settled to the satisfaction of everybody whether it is force or mass. But in any case, the one taken as the fundamental notion, mass say, is, in the classical mechanics, independent of the velocity. Mass is usually defined in physics as the quantity of matter in a body, which means simply that there is associated with every body a certain indestructible something, apart from its size and shape, independent of its position or motion with respect to the observer, or with respect to other masses. But in the relativity mechanics this primary concept fares no better than the other ones, space and time. Without going into the details of the argument by means of which the new results are obtained, and this argument, and the experiment underlying it, are by no means simple, it may suffice to say that the mass of a body must also be looked upon as depending on the velocity of the body. This result would seem at first glance to introduce an unnecessary and almost impossible complication in all the considerations of mechanics, but as a matter of fact exactly the opposite is true. It has been known for some time that electrons moving with the great velocity of the electric discharge suffer an apparent increase of mass or inertia due to this velocity, so that physicists for some time have been accustomed to speak of material mass and electromagnetic mass. But now in the light of the principles of relativity, this distinction between material mass and electromagnetic mass is lost, and a great gain in generality is made. All masses depend on velocity and it is only because the velocity of the electric discharge approaches that of light, that the change in mass becomes striking. This perhaps may be looked upon as one of the most important of the consequences of the theory of relativity in that it subjects electromagnetic phenomena to those laws which underlie the motions of ordinary bodies.
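
In modern form (our gloss), the velocity dependence Marshall describes is

\[
m = \frac{m_0}{\sqrt{1 - v^2/c^2}},
\]

where m_0 is the rest mass. The correction is negligible at everyday speeds and becomes large only as v approaches c, which is why it first showed up in fast electrons; this is the relation that the Kaufmann and Bucherer experiments, mentioned later in the article, put to the test.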

In consequence of this revision of our notions of space, time and mass, there result changes in the derived concepts of mechanics, and in the relations between them. In fact the whole subject of mechanics has had to be rewritten on this new basis, and a large part of the work of those interested in the relativity theory has been the building up of the mathematics of the new subject. Some of the conclusions, however, can be understood without much mathematics. For example, we can no longer speak of a particle moving in space, nor can we speak of an event as occurring at a certain time. Space and time are not independent things, so that when the position of a point is mentioned, there must also be given the instant at which it occupied this position. The details of this idea, as first worked out by Minkowski, may be briefly stated. With every point in space there is associated a certain instant of time, or to drop into the language of mathematics for a moment, a point is determined by four coordinates, three in space and one in time. We still use the words space and time out of respect for the memory of these departed ideas, but a new term including them both is actually in use. Such a combination, i. e., a certain something with its four coordinates, is called by Minkowski a world point. If this world point takes a new position, it has four new coordinates, and as it moves it traces out in what Minkowski calls the world, a world-line. Such a world-line gives us then a sort of picture of the eternal life history of any point, and the so-called laws of nature can be nothing else than statements of the relations between these world-lines. Some of the logical consequences of this world-postulate of Minkowski appear to the untrained mind as bordering on the fantastic. For example, the apparatus for measuring in the Minkowskian world is an extraordinarily long rod carrying a length scale and a time scale, with their zeros in coincidence, together with a clock mechanism which moves a hand, not around a circle as in the ordinary clock, but along the scale graduated in hours, minutes and seconds.
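
For readers who want the bare mathematics of Minkowski’s “world” (a modern summary, not part of the 1914 text): an event is the four-tuple (x, y, z, t), and what all observers agree upon is neither distance nor duration separately but the invariant interval

\[
s^2 = c^2 t^2 - x^2 - y^2 - z^2,
\]

which plays the role in Minkowski’s geometry that ordinary distance plays in Euclid’s.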

Some of the conclusions of the relativity mechanics with reference to velocity are worth noting. In the classical mechanics we were accustomed to reason in the following way: Consider a body with a certain mass at rest. If it be given a certain impulse, as we say, it takes on a certain velocity. The same impulse again applied doubles this velocity, and so on, so that the velocity can be increased indefinitely, and can be made greater than any assigned quantity. But in the relativity mechanics, a certain impulse produces a certain velocity, to be sure; this impulse applied again does not double the velocity; a third equal impulse increases the velocity but by a still less amount, and so on, the upper limit of the velocity which can be given to a body being the velocity of light itself. This statement is not without its parallel in another branch of physics. There is in heat what we call the absolute zero, a value of the temperature which according to the present theory is the lower limit of the temperature as a body is indefinitely cooled. No velocity then greater than the velocity of light is admitted in the relativity mechanics, which fact carries with it the necessity for a revision of our notion of gravitational action, which has been looked upon as instantaneous.

In consequence of the change in our ideas of velocity, there results a change in one of the most widely employed laws of velocity, namely the parallelogram law. Briefly stated, in the relativity mechanics, the composition of velocities by means of the parallelogram law is no longer allowable. This follows evidently from the fact that there is an upper limit for the velocity of a material body, and if the parallelogram law were to hold, it would be easy to imagine two velocities which would combine into a velocity greater than that of light. This failure of the parallelogram law to hold is to the mathematician a very disturbing conclusion, more heretical perhaps than the new doctrines regarding space and time.
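
The rule that replaces simple addition for velocities along a common line (again in modern notation) is

\[
w = \frac{u + v}{1 + uv/c^2},
\]

which never exceeds c so long as u and v do not. Compounding two velocities of 0.9c, for example, gives w = 1.8c/1.81 ≈ 0.994c rather than 1.8c, which is exactly the failure of the parallelogram law that Marshall notes.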

Another striking consequence of the relativity theory is that the hypothesis of an ether can now be abandoned. As is well known, there have been two theories advanced in order to explain the phenomena connected with light: the emission theory, which asserts that the light effect is due to the impinging of particles actually sent out by the source of light, and the wave theory, which assumes that the sensation we call light is due to a wave in a hypothetical universal medium, the ether. Needless to say, this latter theory is the only one which recently has received any support. And now the relativists assert that the logical thing to do is to abandon the hypothesis of an ether. For they reason that not only has it been impossible to demonstrate the existence of an ether, but we have now arrived at the point where we can safely say that at no time in the future will any one be able to prove its existence. And yet the abandoning of the ether hypothesis places one in a very embarrassing position logically, as the three following statements would indicate:

1. The Michelson and Morley experiment was only possible on the basis of an ether hypothesis.

2. From this experiment, follow the essential principles of the relativity theory.

3. The relativity theory now denies the existence of the ether. Whether there is anything more in this state of affairs than mere filial ingratitude is no question for a mathematician.

It should perhaps be pointed out somewhat more explicitly that these changes in the units of time, space and mass, and in those units depending on them, are changes which are ordinarily looked upon as psychological and not physical. If we imagine that A has a clock and that about him move any number of observers B, C, D, . . . , in different directions and with different velocities, each one of these observers sees A’s clock running at a different rate. Now the actual physical state of A’s clock, if there is such a state, is not affected by what each observer thinks of it; but the difficulty is that there is no way for any one except A to get at the actual state of A’s clock. We are then driven to one of the two alternatives: Either we must give up all notion of time for bodies in relative motion, or we must define it in such a way as will free it of this ambiguity, and this is exactly what the relativity mechanics attempts to do.

Any discussion of the theory of relativity would be hardly satisfactory without a brief survey of the history of the development of the subject. As has been stated, for many years the ether theory of light has found general acceptance, and up to about twenty-five years ago practically all of the known phenomena of light, electricity and magnetism were explained on the basis of this theory. This hypothetical ether was stationary, surrounded and permeated all objects, did not, however, offer any resistance to the motion of ponderable matter. There came then, in 1887, into this fairly satisfactory state of affairs, the famous Michelson and Morley experiment. This experiment was directly undertaken to discover, if possible, the so-called ether drift.

In this experiment, the apparatus was the most perfect that the skill of man could devise, and the operator was perhaps one of the most skillful observers in the world, but in spite of all this no result was obtained. Physicists were then driven to seek some theory which would explain this experiment, but with varying success. It was proposed that the ether was carried along with the Earth, but a host of experiments show this untenable. It was suggested that the velocity of light depends on the velocity of the source of light, but here again there were too many experiments to the contrary. Michelson himself offered no theory, though he suggested that the negative result could be accounted for by supposing that the apparatus underwent a shortening in the direction of the velocity and due to the velocity, just enough to compensate for the difference in path. This idea was later, in 1892, developed by Lorentz, a Dutch physicist, and under the name of the Lorentz-shortening hypothesis has had a dignified following. The Michelson and Morley experiment, together with certain others undertaken for the same purpose, remained for a number of years as an unexplained fact, a contradiction to ascertained, well-established, and orderly physical theory. Then there appeared in 1905, in the Annalen der Physik, a modest article by A. Einstein, of Bern, Switzerland, entitled, “Concerning the Electrodynamics of Moving Bodies.” In this article Einstein, in a very unassuming way, and yet in all confidence, boldly attacked the problem and showed that the astonishing results concerning space and time which we have just considered, all follow very naturally from very simple assumptions. Naturally a large part of his paper was devoted to the mathematical side: to the deduction of the equations of transformation which express mathematically the relation between two systems moving relative to each other. It may safely be said that this article laid the foundation of the relativity theory.

Einstein’s article created no great stir at the time, but within a couple of years his theory was claiming the attention of a number of prominent mathematicians and physicists. Minkowski, a German mathematician of the first rank, just at this time turning his attention to mathematical physics, came out in 1909 with his famous world postulate, which has been briefly described. It is interesting to note that within a year translations of Minkowski’s article appeared in English, French and Italian, and that extensions of his theories have occupied the attention of a number of Germany’s most famous mathematicians. Next Poincaré, perhaps the most brilliant mathematician of the last quarter century, stamped the relativity theory with the unofficial approval of French science, and Lorentz, of Holland, one of the most famous in a land of famous physicists, aided materially in the development of the subject. Thus we find within five years of the appearance of Einstein’s article, a fairly consistent body of doctrine developed, and accepted to a surprising degree by many of the prominent mathematical physicists of the foremost scientific nations. No sooner was the theory in a fairly satisfactory condition, than the attempt was made to verify some of the hypotheses by direct experiment. Naturally the difficulties in the way of such experimental verification were very great, insurmountable in fact for many experiments, since no two observers could move relative to each other with a velocity approaching that of light. But the change in mass of a moving electron could be measured, and a qualitative experiment by Kaufmann, and a quantitative one by Bucherer, gave results which were in good agreement with the theoretical equations. It was the hope of the astronomers that the new theory would account for the long-outstanding disagreement between the calculated and the observed motion of Mercury’s perihelion, but while the relativity mechanics gave a correction in the right direction, it was not sufficient. To bring this very brief historical sketch down to the present time, it will perhaps be sufficient to state that this theory is at present claiming the attention of a large number of prominent mathematicians and physicists. The details are being worked out, the postulates are being subjected to careful mathematical investigation, and every opportunity is being taken to substantiate experimentally those portions of the theory which admit of experimental verification. Practically all of the work which has been done is scattered through research journals in some six languages, so that it is not very accessible. Some idea of the number of articles published may be obtained from the fact that a certain incomplete bibliography contains the names of some fifty-odd articles, all devoted to some phase of this subject, varying all the way from the soundest mathematical treatment, at the one end of the scale, to the most absurd philosophical discussion at the other. And these fifty or more articles include only those in three languages, only those which an ordinary mathematician and physicist could read without too great an expenditure of time and energy, and with few exceptions, only those which could be found in a rather meager scientific library.

In spite of the fact that the relativity theory rests on a firm basis of experiment, and upon logical deductions from such experiments, and notwithstanding also that this theory is remarkably self-consistent, and is in fact the only theory which at present seems to agree with all the facts, nevertheless it perhaps goes without saying that it has not been universally accepted. Some objections to the theory have been advanced by men of good standing in the world of physics, and a fair and impartial presentation of the subject would of necessity include a brief statement of these objections. I shall not attempt to answer these objections. Those who have adopted the relativity theory seem in no wise concerned with the arguments put forward against it. In fact, if there is one thing which impresses the reader of the articles on relativity, it is the calm assurance of the advocates of this theory that they are right. Naturally the theory and its consequences have been criticized by a host of persons of small scientific training, but it will not be necessary to mention these arguments. They are the sort of objections which no doubt Galileo had to meet and answer in his famous controversy with the Inquisition. Fortunately for the cause of science, however, the authority back of these arguments is not what it was in Galileo’s time, for it is not at all certain just how many of those who have enthusiastically embraced relativity would go to prison in defense of the dogma that one man’s now is another man’s past, or would allow themselves to be led to the stake rather than deny the doctrine that the length of a yardstick depends upon whether one happens to be measuring north and south with it, or east and west.

In general it may be said that the chief objection to the relativity theory is that it is too artificial. The end and aim of the science of physics is to describe the phenomena which occur in nature, in the simplest manner which is consistent with completeness, and the objectors to the relativity theory urge that this theory, and especially its consequences, are not simple and intelligible to the average intellect. Consider, for example, the theory which explains the behavior of a gas by means of solid elastic spheres. This theory may be clumsy, but it is readily understood, rests upon an analogy with things which can be seen and felt, in other words is built up of elements essentially simple. But the objectors to the relativity theory say that it is based on ideas of time and space which are not now and which never can be intelligible to the human mind. They claim that the universe has a real existence quite apart from what anyone thinks about it, and that this real universe, through the human senses, impresses upon the normal mind certain simple notions which can not be changed at will. Minkowski’s famous world-postulate practically assumes a four-dimensional space in which all phenomena occur, and this, say the objectors, on account of the construction of the human mind, can never be intelligible to any one in spite of its mathematical simplicity. They insist that the words space and time, as names for two distinct concepts, are not only convenient, but necessary. Nor can any description of phenomena in terms of a time which is a function of the velocity of the body on which the time is measured ever be satisfactory, simply because the human mind can not now, nor can it ever, appreciate the existence of such a time. To sum up, then, this model of the universe which the relativists have constructed in order to explain the universe, can never satisfactorily do this, for the reason that it can never be intelligible to everybody. It is a mathematical theory and can not be satisfactory to those lacking the mathematician’s sixth sense.

A second serious objection urged against the relativity theory is that it has practically abandoned the hypothesis of an ether, without furnishing a satisfactory substitute for this hypothesis. As has been previously stated, the very experiment which the relativity theory seeks to explain depends on interference phenomena which are only satisfactorily accounted for on the hypothesis of an ether. Then too, there are in electromagnetism certain equations of fundamental importance, known as the Maxwell equations, and it is perhaps just as important that the relativity theory retain these equations, as it is that it explain the Michelson and Morley experiment. But the electro-magnetic equations were deduced on the hypothesis of an ether, and can be explained, or at least have been explained only on the hypothesis that there is some such medium in which the electric and magnetic forces exist. So, say the objectors to the relativity theory, the relativists are in the same illogical (or worse) position that they occupy with reference to the Michelson and Morley experiment, in that they deny the existence of the medium which made possible the Maxwell equations, which equations the relativity theory must retain at any cost. Professor Magie, of Princeton, who states with great clearness the principal objections to the theory, waxes fairly indignant on this point, and compares the relativists to Baron Munchausen, who lengthened a rope which he needed to escape from prison, by cutting off a piece from the upper end and splicing it on the lower. The objectors to the relativity theory point out that there have been advocated only two theories which have explained with any success the propagation of light and other phenomena connected with light, and that of these two, only the ether theory has survived. To abandon it at this time would mean the giving up of a theory which lies at the foundation of all the great advances which have been made in the field of speculative physics.

It remains finally to ask and perhaps also to answer the question, whither will all this discussion of relativity lead us, and what is the chief end and aim and hope of those interested in the relativity theory. The answer will depend upon the point of view. To the mathematician the whole theory presents a consistent mathematical structure, based on certain assumed or demonstrated fundamental postulates. As a finished piece of mathematical investigation, it is, and of necessity must remain, of theoretical interest, even though it be finally abandoned by the physicists. The theory has been particularly pleasing to the mathematician in that it is a generalization of the Newtonian mechanics, and includes this latter as a special case. Many of the important formulas of the relativity mechanics, which contain the constant denoting the velocity of light, become, on putting this velocity equal to infinity, the ordinary formulas of the Newtonian mechanics. Generality is to the mathematician what the philosopher’s stone was to the alchemist, and just as the search for the one laid the foundation of modern chemistry, so is the striving after the other responsible for many of the advances in mathematics.
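
Marshall’s point about the Newtonian limit can be seen directly in the formulas quoted above (our notation): letting the velocity of light grow without bound,

\[
\lim_{c\to\infty}\frac{u+v}{1+uv/c^2} = u+v,
\qquad
\lim_{c\to\infty}\frac{1}{\sqrt{1-v^2/c^2}} = 1,
\]

so velocities again add simply, and clocks, yardsticks, and masses lose their dependence on velocity: the classical mechanics recovered as a special case.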

On the other hand, those physicists who have advocated the theory of relativity see in it a further advance in the long attempt to rightly explain the universe. The whole history of physics is, to use a somewhat doubtful figure of speech, strewn with the wrecks of discarded theories. One does not have to go back to the middle ages to find amusing reading in the description of these theories, which were seriously entertained and discarded only with the greatest reluctance. But all the arguments of the wise, and all the sophistries of the foolish, could not prevent the abandoning of a theory, if a few stubborn facts were not in agreement with it. Of all the theories worked out by man’s ingenuity, no one has seemed more sure of immortality than the one we know as the Newtonian mechanics. But the moment a single fact appears which this system fails to explain, then to the physicist with a conscience this theory is only a makeshift until a better one is devised. Now this better one may not be the relativity mechanics; its opponents are insisting rather loudly that it is not. But in any case, the entire discussion has had one result pleasing alike to the friends and foes of relativity. It has forced upon us a fresh study of the fundamental ideas of physical theory, and will give us without doubt, a more satisfactory foundation for the superstructure which grows more and more elaborate.

It can well happen that scientists, some generations hence, will read of the relativity mechanics with the same amused tolerance which marks our attitude towards, for example, Newton’s theory of fits of easy transmission and reflection in his theory of the propagation of light. But whatever theory may be current at that future time, it will owe much to the fact that in the early years of the twentieth century, this same relativity theory was so insistent and plausible, that mathematicians and physicists in sheer desperation were forced either to accept it, or to construct a new theory which shunned its objectionable features. Whether the relativity theory then is to serve as a pattern for the ultimate hypothesis of the universe or whether its end is to illustrate what is to be avoided in the construction of such a hypothesis, is perhaps after all not the important question.

The very minimalist cover of the June 1914 issue of Popular Science Monthly.

Some text has been edited to match contemporary standards and style.

The post From the archives: The Theory of Relativity gains speed appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: During a devastating polio epidemic, a vaccine was finally on the horizon https://www.popsci.com/science/polio-vaccine/ Wed, 11 May 2022 11:00:00 +0000 https://www.popsci.com/?p=441244
A collage of images from the Popular Science article “How they’re closing in on polio” by Marguerite Clark, May 1953
“How they’re closing in on polio” (Marguerite Clark, May 1953). Popular Science

A May 1953 Popular Science article attempted to take stock of the polio epidemic and brought hopes of a vaccine.

The post From the archives: During a devastating polio epidemic, a vaccine was finally on the horizon appeared first on Popular Science.

]]>
A collage of images from the Popular Science article “How they’re closing in on polio” by Marguerite Clark, May 1953
“How they’re closing in on polio” (Marguerite Clark, May 1953). Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

President Franklin D. Roosevelt, who had lost his ability to walk in 1921 at age 39 due to polio, founded the National Foundation for Infantile Paralysis in 1938. When entertainer Eddie Cantor urged people to contribute to the organization by mailing dimes to the White House, they responded with more than 2.6 million ten-cent pieces, and the catchier March of Dimes was born. (FDR’s profile was later enshrined on the US dime in 1946.)  

When Newsweek’s long-time medical editor Marguerite Clark wrote a feature for Popular Science in May 1953, a vaccine for polio, or infantile paralysis, was close, but not yet assured. While the author may have overstated polio’s fatality rate (the CDC estimates less than one percent), the disease was on the rise in the US. Despite advances in understanding polio’s unique viral strains, which spread mostly during summer through nasal and oral droplets, the epidemic continued to escalate, paralyzing as many as 20,000 Americans a year at its peak in 1952.  

For most, polio was not debilitating, but for the 1-2 percent who developed paralytic polio, medical care and rehabilitation therapy required extensive resources. Besides arms and legs, Clark describes how polio can sometimes paralyze the throat and chest. Crude respirators, or iron lungs, were required to force patients’ chests up and down to keep them alive. Then there was the psychological toll. At the time of the article’s writing, many experts wrongly prescribed little more than positive attitudes to overcome such mental challenges. “His abilities are greater than his disabilities,” Clark writes, paraphrasing a psychologist’s view of disabled polio victims, “provided he has enough courage to develop them.”

From the March of Dimes’ inception, it took seventeen years, even with copious existing research (most involving chimpanzees, who respond like humans to the virus, as Popular Science reported in February 1938), before Jonas Salk’s vaccine was released in 1955. By 1979 polio was eradicated in the US, thanks to vaccine mandates. Today, only two countries, Pakistan and Afghanistan, are polio-endemic.

“How they’re closing in on polio” (Marguerite Clark, May 1953)

Things are happening fast to raise hopes for final victory in this grim war—here’s what you should know about it NOW.

The year 1952 was the worst in the history of poliomyelitis. Some 55,000 men, women and children were struck. 

Yet 1952, a year of tragedy, brought progress that makes 1953 a year of bright hope. Polio researchers, backed by March of Dimes funds, have developed a safe and inexpensive vaccine that one day will give long-time protection against this disease.

Sometime this spring or fall there will be large-scale vaccinations, possibly of as many as 25,000 children. By 1954, if the National Foundation for Infantile Paralysis feels that the time has come for public vaccination on a mass scale, all the children in the United States may get the vaccine.

Scientists worked 14 years

Here are the steps by which the scientists, after 14 years of intensive research, reached their final goal:

1. Polio is caused by a virus so small that it cannot be detected even through an electron microscope. For a long time, researchers have known that this virus has several strains. And before a polio vaccine, effective against all the strains, could be made, these various strains had to be identified. After three years, and at a cost of $1,400,000, scientists at four universities (Utah, Kansas, Pittsburgh, and Southern California) proved definitely the existence of three strains of polio virus: Brunhilde, named for a chimpanzee used in a polio experiment in Baltimore; Leon, for a Los Angeles boy who died of the disease; and Lansing, for a young man in Lansing, Michigan, who had a fatal polio attack.

2. Once scientists had used the infected spinal cords of laboratory monkeys as the sole source of polio virus. But this infected substance, when injected into human beings, could bring on a dangerous allergy, or even death. So hope of developing a safe vaccine seemed slim. Then Dr. John F. Enders of Harvard worked out a new technique for growing the polio virus in test-tube cultures of ordinary human tissues. The new substances, which contained all three polio strains, were easy to make and safe to inject in human beings. From that time on, the search for a polio vaccine was less difficult.

3. The vaccine, which soon will be ready for large field trials, and is the key weapon in the victory against polio, has been prepared by Dr. Jonas E. Salk of the University of Pittsburgh. It is made of inactivated virus, incapable of causing infection and damaging nerve cells but still powerful enough, when injected into an animal or a child, to build up antibodies against polio. Shots are given in mineral oil, a substance which seems to stimulate the forming of antibodies. Dr. Salk’s vaccine contains all three strains of polio virus—Leon, Lansing and Brunhilde; it is made easily and inexpensively in test tubes, using the Enders technique. From the start, the vaccine was successful in protecting the laboratory monkeys and chimpanzees against polio. Since polio in monkeys and chimps follows the human course of the disease, the researchers did not doubt that a similar vaccine would prevent polio in children. Controlled tests on boys and girls followed. None developed polio, but all began to build a supply of antibodies to fight it.

For the coming polio season, it is expected that the vaccine field trials will follow the plan used in 1952 when gamma-globulin shots were given to large groups of children. Half of the youngsters will get the vaccine; the other half, harmless shots of a substance that will resemble the vaccine but with no polio-fighting power. Later, blood from the vaccinated children will be tested, and the level of the polio-fighting antibodies will be compared with that in the children’s blood before vaccination and with that of the children who got the vaccine substitute.
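In modern terms, Clark is describing a placebo-controlled trial. The sketch below is a toy illustration of that bookkeeping, not anything from the article, and every number in it is invented: randomize the children into two equal groups, then compare each group’s average rise in antibody level over its pre-season baseline.

```python
import random

random.seed(1953)

def antibody_titer(vaccinated):
    """Toy numbers, invented for illustration: vaccinated children
    show a large rise over a pre-season baseline of about 10 units."""
    baseline = random.gauss(10, 2)
    rise = random.gauss(40, 8) if vaccinated else random.gauss(2, 2)
    return baseline, baseline + rise

children = list(range(1000))
random.shuffle(children)  # randomize group assignment
vaccine_group, placebo_group = children[:500], children[500:]

def mean_rise(group, vaccinated):
    rises = [post - pre
             for pre, post in (antibody_titer(vaccinated) for _ in group)]
    return sum(rises) / len(rises)

print("mean titer rise, vaccine group:", round(mean_rise(vaccine_group, True), 1))
print("mean titer rise, placebo group:", round(mean_rise(placebo_group, False), 1))
```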

Meantime, the use of gamma globulin will continue. This development, although overshadowed by the vaccine, promises to be a powerful weapon against polio until the vaccine is perfected. For a long time, polio investigators have tried to find the time and place to immunize a patient against polio before the virus enters his nervous system (brain and spinal cord) and causes paralysis. In the spring of 1952, Dr. Dorothy Horstmann of Yale University and Dr. David Bodian of Johns Hopkins discovered that the polio virus, entering the body through the mouth and traveling through the digestive tract, stays in the bloodstream for a few days before it moves on to attack the nervous system.

Gamma globulin first tried on monkeys

This gave the researchers a lead. Why not try to immunize patients against paralytic polio while the virus lingered in the bloodstream? As yet, there was no vaccine available. So Drs. Horstmann and Bodian gave small doses of gamma globulin (the blood fraction which contains antibodies to fight polio) to laboratory monkeys previously infected with polio virus. When the gamma-globulin shots were given, the animals did not develop paralytic polio; when they were not given the shots, they were paralyzed within 10 to 15 days.

On the basis of the successful experiments with monkeys, plans were made to set up the now famous gamma-globulin tests on children. In Provo, Utah, Houston, Texas, and Sioux City, Iowa, polio researchers under Dr. William McD. Hammon of Pittsburgh proved beyond doubt the power of gamma globulin to prevent the paralytic form of polio in human beings. In these three large field tests, which cost the National Foundation for Infantile Paralysis over $1,500,000, the American Red Cross furnished enough gamma globulin to immunize 55,000 children. One injection protects a child for a period of five weeks following exposure to polio.

Blood donors needed to fight polio

It takes about a pint of whole blood to make an average dose of GG to use in polio. To get enough of the serum for the needs of the 1953 polio season, the Red Cross has expanded its blood-collection program. The Red Cross will gather and process the blood and turn the gamma globulin over to the Office of Defense Mobilization. There, the blood derivative will be allocated to state health officers, who in turn will be responsible for local use in measles, infectious hepatitis (a serious virus disease of the liver) and infantile paralysis in epidemic areas. Because of its scarcity, GG will be given to children from one to 11 years only.

It has cost the National Foundation for Infantile Paralysis over $18,000,000 to reach the point where immunization can be promised. At the same time, the largest share of the March of Dimes funds (about $140,000,000) has been spent on polio patients’ medical care during illness, and on expert rehabilitation therapy. More money will be needed for them, and for future victims of polio. But the dramatic research achievements of the last two years renew hope that soon every parent will be freed from the dread of infantile paralysis.

Until then, as every polio specialist emphasizes, good treatment given promptly will help thousands of polio victims to recover with little or no after-effects.

That is why every mother and father must heed the warning signs: sore throat, a head cold, nausea and vomiting, fever, diarrhea, or sometimes constipation, loss of appetite, pain, particularly in the arm and leg muscles, and stiffness of the neck or back.

Tonsillectomy risky in polio season

The latest research confirms the theory that the chance of polio is increased by the removal of tonsils and adenoids during the polio season. Also, some scientists believe there is a link between the susceptibility to polio and the shots given to protect children against diphtheria and whooping cough. Shots can be given and tonsils and adenoids removed during the times when there is little or no polio in the neighborhood. 

Infantile paralysis kills about five percent of its victims. But new methods of treatment and faster and more accurate diagnosis are rapidly increasing the chance to live. Latest respirators and iron lungs are better than those models used 10 years ago. The reliable but awkward tank respirator has been replaced in many cases by a small, comfortable cylinder respirator, or even by a lightweight plastic chest respirator, which gives the patient a wider range of movement.

Electronic device aids breathing

At the Harvard School of Public Health, where the iron lung was first developed, scientists have constructed an electronic breathing device, known as the Electro-Phrenic-Respirator. A hollow needle containing a copper wire is attached to the phrenic nerve, in the side of the patient’s neck, which serves both lungs and diaphragm. When the current goes on, the nerve is stimulated and causes the diaphragm to contract and draw air into the lungs. The current is then decreased automatically, relaxing the diaphragm and forcing out air. 
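The cycle Clark describes is essentially a timed square wave: current on, the diaphragm contracts and air is drawn in; current off, the diaphragm relaxes and air is forced out. A minimal sketch of that rhythm, with the breathing rate, the current level, and the set_nerve_current() interface all hypothetical:

```python
import time

BREATHS_PER_MINUTE = 14            # typical resting rate; not from the article
PERIOD = 60 / BREATHS_PER_MINUTE   # seconds per full breath
INHALE_FRACTION = 0.4              # assumed inhale/exhale split

def set_nerve_current(milliamps):
    """Hypothetical stand-in for the hardware driving the electrode
    on the phrenic nerve."""
    print(f"nerve current: {milliamps} mA")

while True:                        # runs until interrupted, like the machine itself
    set_nerve_current(5)           # stimulate: diaphragm contracts, air drawn in
    time.sleep(PERIOD * INHALE_FRACTION)
    set_nerve_current(0)           # cut current: diaphragm relaxes, air forced out
    time.sleep(PERIOD * (1 - INHALE_FRACTION))
```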

When paralysis or crippling follows polio, delicate surgery often can correct this disability. Strong muscles can be transplanted to take over the job of weakened ones unable to carry on their work. Weak joints can be treated so that useless legs can again bear weight. When necessary, legs can be slowed in their growth or shortened by surgery to match the polio-shortened limb. 

Operation controls bone growth

A simple operation in which growth in one leg can be halted until a short leg can catch up with it is performed by Dr. William T. Green and Dr. Thomas Gucker III, of the Children’s Hospital, Boston. Small sections of thigh or leg bone containing cartilage are removed, then grafted back on the leg bone. The graft serves as a clamp, checking the bone’s growth.

When paralysis is present, polio specialists depend on physical therapy, heat, water, massage and electricity to keep the muscles healthy while waiting for injured nerves to recover. So far, there is no drug, chemical or antibiotic that will cure polio. The men whose job it is to search for the ideal drug have some hopeful leads. But they still say: “Not yet.”

EDITOR’S NOTE: The following section uses dated and insensitive language and characterizations of people suffering from polio and its long-term effects.

Rehabilitation for crippled victims

But they are doing a lot to rehabilitate crippled victims. For example, the remaining healthy muscle fibers in the arms and legs affected by polio must be exercised for proper development. For this, doctors now are using special progressive resistance exercises in which all the muscles are exercised electrically by means of a single pulley system. At the same time, a cathode-ray oscillograph (a writing device) records the child’s muscle potentials on a graph.

Polio may cripple and deform the patient’s personality, just as it cripples his body. Polio virus rarely if ever affects the patient’s mind, but an embittered, sick child may develop a personality maladjustment that will do him greater harm than a twisted arm or leg.

Dr. Morton A. Seidenfeld, director of the psychological services of the National Foundation for Infantile Paralysis, describes a typical case:

“When a child goes to a hospital, he is entered as ‘Poliomyelitis, acute.’ But that isn’t his name at all. He is Johnny Jones, called ‘Red’ by his buddies; he’s 12, and only a few hours ago, he was captain of his sand-lot baseball team, and pleased as punch because his coach, Bill Smith, said he was a natural for the big leagues. Now he’s lonely, afraid, and sure he will never know the feel of a bat or a catcher’s mitt again.”

Handicaps can be overcome

To Dr. Seidenfeld, it is just as important for the doctor and nurse to let Johnny talk about his memories, hopes and fears, as it is to give him hot packs and to bathe and feed him. At 12, Johnny is old enough to understand a frank and honest discussion of his disability. This talk will help him to adjust normally to a world in which other boys will play baseball while he, Johnny, sits in the bleachers.

Above all, Johnny must be made to see his future in terms of a keen competition in which he can be the victor if he can rise above his limitations. Almost any kind of education is open to him, almost any profession that does not call for hard physical work. In any case, his abilities are greater than his disabilities, provided he has enough courage to develop them. If this truth can be brought home to Johnny, and to other children disabled by polio, they need feel no handicap. 

The cover of the May 1953 issue of Popular Science featuring military tech, car news, and family trip tips.

Some text has been edited to match contemporary standards and style.

The post From the archives: During a devastating polio epidemic, a vaccine was finally on the horizon appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: NASA dispatches drone to help rescue the ozone layer https://www.popsci.com/environment/ozone-drone/ Tue, 10 May 2022 11:00:00 +0000 https://www.popsci.com/?p=441229
A collage of images from “Ozone Drone” by Steven Ashley in the July 1992 issue of Popular Science.
“Ozone Drone” (Steven Ashley, July 1992). Popular Science

The July 1992 Popular Science issue explored NASA's mission to find out what's happening to the ozone layer using a craft called Perseus.

The post From the archives: NASA dispatches drone to help rescue the ozone layer appeared first on Popular Science.

]]>
A collage of images from “Ozone Drone” by Steven Ashley in the July 1992 issue of Popular Science.
“Ozone Drone” (Steven Ashley, July 1992). Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Thirty years ago, the noxious clouds of chlorofluorocarbons that had been gathering in Earth’s stratosphere for half a century would chew a seasonal hole in the protective ozone layer over Antarctica twice the diameter of Pluto. While the Antarctic feature was extreme, it underscored a disaster unfolding across Earth’s atmosphere. With less ozone in the stratosphere to shield flora and fauna from the sun’s ultraviolet rays, crops would suffer and skin cancer would soar. 

By the time Popular Science ran a feature in July 1992 describing the urgent efforts of scientists across the globe to understand the dynamics of ozone destruction, our outlook was dire. “Earth’s ozone shield seems to be failing,” wrote Popular Science Senior Editor Steven Ashley, “and researchers need to find out why—fast.” According to Ashley, NASA had pulled out all the stops, building a robotic data-gathering drone to ply Earth’s polar vortex—the upper reaches of the atmosphere over Antarctica. The craft, called Perseus, used GPS and a programmed route to sniff out ozone.

In 1987, every country on Earth ratified a treaty to reverse the damage, a first and still unmatched show of unanimity. The Montreal Protocol established guidelines to rapidly phase out a list of 100 manufactured chemicals called ozone-depleting substances, or ODS. Since Popular Science’s feature ran in 1992, ODS emissions have been reduced by 98 percent. And while the Antarctic ozone hole fluctuates in size and severity year to year, driven by myriad factors including seasonal temps and moisture, an improving trend has been consistent. Experts forecast full recovery by 2070. Besides representing a rare environmental success story, there’s a lesson in ozone: Amazing things are possible—even on a planetary scale—when everyone gets on board.

Unfortunately, such unity has proved elusive for greenhouse gases. Since 1992, world leaders have taken three swings at treaties to reduce the substances, the latest being the Paris Climate Agreement. None have achieved unanimity, although the Paris Agreement is close now that the US has rejoined.

“Ozone Drone” (Steven Ashley, July 1992)

The rupture of Earth’s ozone shield has become a global concern. But how can scientists gain the high-altitude data they need to find solutions? This unmanned power glider might be the answer.

Eighty thousand feet above Antarctica’s vast frozen expanse, a lone aircraft will cruise the stratosphere on long, tapered wings. The unmanned powered glider, called Perseus, is expected in 1994 to fly higher than any previous prop plane to find out what’s gone wrong with Earth’s stratospheric ozone shield. It will be programmed to search the cold, thin air over Antarctica for ozone-killing chemicals and bring back crucial air samples that have eluded atmospheric scientists for years.

The plane’s 14.4-foot, variable-pitch propeller—so long that it is unable to spin until Perseus is aloft—will require the robot craft to be towed into the air from its base at Antarctica’s McMurdo Station by a winch-wound cable. Once airborne, its engine will be engaged and the cable detached.

Perseus will then spiral upward toward the center of the ozone hole at about 40 knots, reaching a speed of 200 knots at altitude. Although a technician will pilot the plane remotely via line-of-sight radio controls when it’s near the ground, Perseus will largely pilot itself. Its on-board flight computer will carry preprogrammed navigation commands based on data beamed from Global Positioning Satellites.

Ultimately, it is intended that sensors mounted in the craft’s nose will respond if the high-flying probe enters a wispy, pinkish assemblage of tiny ice crystals, a suspected hotbed of ozone destruction researchers call a polar stratospheric cloud. The computer on-board will direct the craft’s air-sampling apparatus to engage. When its sensors no longer detect the ice, Perseus will reverse course and continue to fly a zig-zag pattern in order to map the boundaries of the noxious cloud.
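Taken together, the last three paragraphs describe a simple state machine: follow the preprogrammed GPS route, sample while the nose sensors report ice crystals, and tack back and forth to trace the cloud’s edge. Here is a minimal sketch of that logic; the autopilot, sensors, and sampler interfaces are our invention, not Aurora’s flight software:

```python
def map_cloud(autopilot, sensors, sampler, max_legs=20):
    """Toy reconstruction of the behavior described above: cruise a
    preprogrammed GPS route, sample while inside a polar stratospheric
    cloud, and zig-zag across its boundary to map it."""
    legs = 0
    while legs < max_legs:
        if sensors.ice_detected():           # nose sensors see ice crystals
            sampler.open()
            while sensors.ice_detected():    # sample until the boundary is crossed
                autopilot.hold_course()
            sampler.close()
            autopilot.reverse_course()       # turn back toward the cloud...
            autopilot.offset_heading(30)     # ...on an offset heading: one zig-zag leg
            legs += 1
        else:
            autopilot.follow_waypoints()     # GPS-guided preprogrammed route
```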

Total flight duration will be about six hours, with an hour for air sampling. Perseus can carry only enough fuel for the climb, so it will glide silently, after the engine halts, to a landing at its base on the ice shelf.

Such a flight cannot come too soon for scientists studying ozone depletion. Earth’s ozone shield seems to be failing, and researchers need to find out why—fast. Last October, NASA’s Nimbus-7 satellite measured the lowest concentration of ozone over Antarctica in 13 years. This huge ozone hole has so far been restricted to the Southern Hemisphere, but NASA aircraft recently found an abundance of ozone-hole precursor chemicals high in the arctic air, raising the specter of a northern ozone hole. Perhaps even more alarming is the discovery of thinning ozone levels over the northern mid-latitudes, including populated areas of Canada and New England, Britain, France, and Scandinavia. (This past year’s conditions were unusually warm, say scientists, so no northern ozone hole materialized.)

Since 1988, pilots in NASA’s ER-2 reconnaissance aircraft—converted U-2 spy planes—have climbed 13 miles above the remote and desolate polar regions to gather air samples for scientists. These missions are anything but routine. If one of the single-engine airplanes were to encounter trouble during these arduous eight-hour, 1,500-mile flights, the solo pilot would almost surely die.

So far, the returns have been worth the risks, however, for the high-flying collectors have provided scientists with the evidence they needed to implicate man-made chlorine compounds called chlorofluorocarbons (CFCs) in ozone’s destruction and call for their ban. Nevertheless, researchers’ ability to further model and predict changes in the ozone layer is currently limited by a dearth of crucial air samples from the heart of the hole, which lies at altitudes beyond any piloted plane’s ceiling, says Jim Anderson, atmospheric chemist at Harvard University. Anderson, also mission scientist for NASA’s six-month-long Airborne Arctic Stratospheric Experiment-2, says that current atmospheric models (used to guide the government’s environmental policy decisions) lack information on chemistry and movement at altitudes near 15 miles, or 82,000 feet, a crucial area in the formation and destruction of ozone. “Satellites are good for broad-brush maps of simple measurements,” Anderson says, “but to understand the ozone-depletion mechanism you need both: satellites for the climatological view and direct measurements by air vehicles to understand the mechanism.”

Giant helium-filled research balloons have been used for decades to haul instruments to extreme altitudes, but these unwieldy craft are subject to the vagaries of the weather, leading to launch delays and occasional lost payloads. And the only available airplane that can fly high enough is Lockheed’s SR-71 Blackbird, but the black aircraft’s supersonic speed would make sampling impossible. Perseus, then, would seem to be poised to provide many answers.

Massachusetts Institute of Technology-trained aeronautical engineer John Langford, president of Aurora Flight Sciences Corp. in Manassas, Va., is working to craft Perseus to offer extreme altitude capability, pilotless operation, and the ability to carry scientific instruments aloft at relatively low cost. The nucleus of the Aurora staff is a group of veterans of the MIT Daedalus Project, which developed the lightweight, human-powered aircraft that was pedaled 69 miles between the Greek isles Crete and Santorini [“88-pound Pedal Plane,” Feb. ’87]. The development of Perseus owes a lot to its seemingly simple forerunner.

Daedalus’ high-efficiency wings, designed by Mark Drela, associate professor of aeronautics and astronautics at MIT, kept the flimsy-looking composite craft airborne despite being driven only by its human engine. Langford and Drela knew that its long, thin wing shape would work in the thin air and extreme altitudes relevant to ozone sampling. “It was obvious that much of the airfoil and structures technology would be applicable to high-flying aircraft,” Drela recalls.

The need for a low-cost, high-altitude, unmanned platform for in situ atmospheric research was established a few years ago by a panel of experts from NASA, the National Oceanic and Atmospheric Administration, and the National Science Foundation. Besides ozone chemistry, the panel wanted a vehicle that could help determine the role of clouds in global warming, investigate a stratosphere/troposphere mixing phenomenon for a new Department of Energy study on climatic change, find the causes of severe storms, and assess the impact of future supersonic airliner exhaust emissions [“The Next SST,” Feb. ‘91].

“The key point was that the vehicle be available in the 1993-’94 time frame,” recalls Jennifer Baer-Riedhart, project manager of the resulting Small High-Altitude Science Aircraft program at NASA’s Ames-Dryden Flight Research Facility in Edwards, Calif. Aurora, already well on its way to developing such a craft, was awarded a $2.25 million, two-year NASA contract to deliver two Perseus planes.

To keep costs down, Langford notes that the strategy has been to modify off-the-shelf components and existing designs, rather than developing custom technology.

The result is a lightweight (1,320-pound) “unmanned version of a sailplane,” Langford says, with a 59-foot wingspan and low-drag aerodynamic design. The wings, propeller, tail surfaces, and tail boom are molded from resin-impregnated Kevlar aramid cloth, Nomex honeycomb cores, and graphite cloth.

“Perseus’ composite structure is like that of a sport glider pushed to extremes,” says Siegfried Zerweckh, who has worked as leader of Aurora’s aerostructures group. “The fact that the plane is unmanned and that its structures don’t have to perform forever like those of a commercial aircraft [that is, without an inspection following each flight] means that we can push the materials to the limit.

“We use sandwich construction for stiffness in almost every part, including the wings, tail surfaces, and tailboom,” Zerweckh continues. The three-piece, 30-foot wings, for example, have only four ribs supporting them in the span-wise direction, so the structural sandwich panels must be largely self-supporting. A 19.7-foot-long wing panel, for instance, weighs in at 170 pounds. The result is a relatively light structure.

An on-board flight control/navigation computer, a fly-by-wire electronic control system, and an unusual closed-cycle propulsion system complete much of the plane’s bulk. NASA thought Perseus’s propulsion system was important enough to the success of the project to fund it in a separate, half-million-dollar effort.

In keeping with Aurora’s penchant for classical monikers, the propulsion system for Perseus was dubbed Arion. It is an unusual closed-cycle system that includes a liquid-cooled, 65-horsepower Norton rotary engine, a two-speed reduction gearbox with provisions for clutching and locking the propeller, a stiff carbon-fiber drive shaft, the large, variable-pitch propeller, storage tanks for gasoline and liquid oxygen, and a large condenser to cool the exhaust.

Much of this is the work of Martin Waide, former chief engineer for Aurora, who has been an engineer for Group Lotus in Britain and various American manufacturers of military remotely piloted vehicles.

A closed-cycle combustion engine system, which was chosen for Perseus because it was cheapest and fastest to develop, derives from work done for torpedoes and submarines. Instead of compressing external air in a heavy, expensive turbocharger to maintain power, the engine exhaust is fed back into the intake along with fuel and oxygen. Senior propulsion engineer Stephen Hendrickson reports that the entire engine complement was ground tested in May—successfully.
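Why recycling works at all (our gloss, not the article's): complete combustion leaves only carbon dioxide and water vapor, so the cooled exhaust is inert enough to dilute the next intake charge. Treating gasoline as isooctane, the idealized reaction is

$$\mathrm{C_8H_{18}} + \tfrac{25}{2}\,\mathrm{O_2} \longrightarrow 8\,\mathrm{CO_2} + 9\,\mathrm{H_2O},$$

with the on-board liquid-oxygen tank supplying the oxygen that a turbocharger would otherwise have to squeeze out of the thin stratospheric air.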

Burning the fuel-air mixture produces exhaust temperatures of nearly 2,000° Fahrenheit, which ordinarily would be dumped overboard. But because Perseus’s exhaust will be recycled, large radiators above the wing must carry off its heat. The Aurora team is developing large stainless steel and aluminum fin-and-tube-type heat exchangers that will work at low atmospheric pressure, where heat transfer is slow.

This past November, the prototype Perseus A reached nowhere near its extreme altitude goals in its maiden flights over the El Mirage dry lake bed in California’s Mojave Desert, limited as it was to a 3,000-foot safety ceiling. But the three short test flights provided data that will pave the way for high-flying missions two years hence, when Perseus A will be airlifted in pieces to McMurdo Station. There, a ground crew of seven will quickly assemble and prepare the aircraft for launch.

Harvard’s Anderson designed the lightweight, nose-mounted instrument package that Perseus will carry. His 110-pound air sampling/analysis system employs an optical ultraviolet-absorption technique to measure ozone concentration and a more sophisticated photon-scattering apparatus that measures the levels of ozone-destroying precursor compounds in parts per trillion. In March, NASA balloon specialists completed a series of difficult test flights during which the miniaturized sensor package and its electronics survived -80°C temperatures when they were lofted from the western coast of Greenland.

A widely held theory reported recently by Anderson and two colleagues spells out why tracking these precursor compounds is so vital.

It is known that unimpeded ultraviolet (UV) radiation can cause skin cancer, cataracts, and disabled immune systems, as well as disruptions of natural ecosystems and agriculture.

In winter when the sun leaves the poles, the stratospheric air rapidly becomes so cold that nitric acid trihydrate (NAT) in the air freezes. These tiny nitric acid crystals seed the formation of water-ice particles, which gather into wispy, pinkish clouds (the very clouds that Perseus’ detectors will be trained on).

As soon as the ice-nitric acid particles form, fast reactions involving hydrochloric acid and chlorine nitrate occur on the ice surface, which acts as a catalyst (see The Chlorine Connection). The former is adsorbed onto the edges of the crystals, while collisions of the ice particles with the latter liberate molecular chlorine (Cl2). “Nobody expected that the ice surfaces would act as catalysts for the release of molecular chlorine,” Anderson says.

While the polar air masses cool, they sink. As surrounding air flows in to take the cold air’s place, the Coriolis Force—caused by the spinning Earth—steers the in-rushing air into continent-size rotating jets. These polar vortices act as semi-impermeable walls, isolating the air inside them. Despite the polar subsidence, the free molecular chlorine remains high up.

With the return of the spring sunlight, virtually all chlorine molecules split into free chlorine radicals—chlorine atoms hungry to recombine. This chlorine feeds a series of catalytic reactions that together destroy ozone.
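In outline, and summarizing now-standard stratospheric chemistry rather than quoting the article: the reservoir species react on the ice, sunlight splits the liberated chlorine, and each chlorine atom then cycles catalytically, emerging intact to attack more ozone:

$$\mathrm{HCl + ClONO_2 \xrightarrow{\ \text{ice}\ } Cl_2 + HNO_3}$$
$$\mathrm{Cl_2 + h\nu \longrightarrow 2\,Cl}$$
$$\mathrm{Cl + O_3 \longrightarrow ClO + O_2}, \qquad \mathrm{ClO + O \longrightarrow Cl + O_2} \qquad (\text{net: } \mathrm{O_3 + O \longrightarrow 2\,O_2})$$

Because the chlorine atom is regenerated at the end of each pass, a single atom can destroy many thousands of ozone molecules before it is locked away again.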

“Free chlorine monoxide chews up ozone like Pac-Man,” Anderson notes. “At the concentrations we’ve observed (more than one part per billion by volume), we estimate that 1 percent of the ozone is lost each day.”
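That rate compounds. At 1 percent a day, the ozone remaining after $t$ days is $N(t) = N_0 \times 0.99^{\,t}$, so a month of such conditions, $N(30) \approx 0.74\,N_0$, strips away roughly a quarter of the vortex’s ozone.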

Later in the season, planetary-scale air waves pummel the polar vortices, breaking them up and replenishing the polar ozone. It’s thought that the arctic ozone hole has yet to form because the northern vortices are unstable due to nearby mountain ranges.

A number of scientists are aware that sampling the stratosphere is vital to finding a solution to our ozone depletion problems. Several other high-flying planes are planned. Already developed, but as yet unused, is the giant Condor pilotless aircraft, which was developed by the Boeing Co. of Seattle in a secret Defense Department project. The 20,000-pound Condor is powered by a pair of liquid-cooled, 175-hp Teledyne Continental engines with two-stage turbocharging and intercooling, driving three-bladed, 16-foot-long props. Though the reportedly $20 million craft completed eight test flights in 1989, the government lacks the funds to operate it. In one of those flights, Boeing’s Condor set the world altitude record for propeller-driven aircraft at 67,028 feet. In another, the classified drone stayed aloft for two and a half days, flying an estimated 20,000 miles.

Other aircraft developers are taking the manned route. A German group from the Deutsche Forschungsanstalt für Luft- und Raumfahrt (DLR) in Oberpfaffenhofen has proposed development of a two-seat plane called Strato 2C that is to be capable of reaching 85,000 feet or flying for 10,000 miles. The composite aircraft is to be powered by twin 402-hp Teledyne Continental engines with turbochargers.

Aurora’s engineers are planning several derivative versions of the Perseus “jeep” (as NASA terms the next larger size vehicle). Fitted with an efficient turbocharged engine, Perseus B could cruise for several days at somewhat lower heights than the A model to circle above hurricanes, for instance. With a 188-foot wingspan and twin pusher-prop power plants, Theseus—a “van”-size craft—could fly a 440-pound payload at around 100,000 feet for about a month. Farther down the road, the solar-powered Odysseus “truck” could cruise the stratosphere for as long as a year with a 110-pound payload on board.

By working to extend flight duration and elevation, these propeller-driven stratospheric cruisers may well come to act nearly as “poor man’s satellites.”

The cover of the July 1992 special issue of Popular Science, focusing on the intersection of environment and technology.

Some text has been edited to match contemporary standards and style.

The post From the archives: NASA dispatches drone to help rescue the ozone layer appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: Bill Gates hypes up the Apple mini Mac in 1984 https://www.popsci.com/technology/macintosh-review/ Mon, 09 May 2022 13:00:00 +0000 https://www.popsci.com/?p=441191
A collage of images from Popular Science's “Apple’s mighty mini Mac: Can little Mac challenge Big Blue?” by Jim Schefter, March 1984
“Apple’s mighty mini Mac: Can little Mac challenge Big Blue?” (Jim Schefter, March 1984). Popular Science

In the March 1984 issue of Popular Science, we reviewed the Macintosh release (and got really into MacPaint).

The post From the archives: Bill Gates hypes up the Apple mini Mac in 1984 appeared first on Popular Science.

]]>
A collage of images from Popular Science's “Apple’s mighty mini Mac: Can little Mac challenge Big Blue?” by Jim Schefter, March 1984
“Apple’s mighty mini Mac: Can little Mac challenge Big Blue?” (Jim Schefter, March 1984). Popular Science

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

When Jim Schefter interviewed Steve Jobs for Popular Science’s March 1984 review of Apple’s new Macintosh, Jobs asked Schefter, “What do you think?” Schefter confessed, “I didn’t have an answer.” He did, however, find an answer, sharing his wonder at the ease of use of novel applications like MacPaint, with never-before-experienced drag-and-drop capabilities. “It was all too much to absorb,” writes Schefter. 

While Schefter focused on Macintosh’s unrivaled functionality and acknowledged its unique form factor (“it looks different”), his review overlooked the significance of Apple’s seminal commercial from just months before: On January 22, 1984, with 6:32 left to play in the third quarter of Super Bowl XVIII, the announcers cut to a commercial directed by Ridley Scott—one that would forever change TV-ad norms and raise the Super Bowl bar. Leaning on Orwell’s literary classic 1984, the ad cast IBM (or Big Blue) as Big Brother, with Apple’s Macintosh arriving to save the world from its dystopian stranglehold on technology.

Despite the excitement, Apple’s early success with Macintosh fizzled less than a decade later, and the company was on the ropes until Jobs, who had left, rejoined in 1997. He knew what made Apple Apple. With Macintosh’s debut, the company had transformed technology into a status symbol. Apple still pulls that winning marketing move from its playbook with major new product releases. It wasn’t that Apple had (or even has) the most advanced technology; it’s that the company had discovered how to package and market it to give it universal appeal and cachet.

“Apple’s mighty mini Mac: Can little Mac challenge Big Blue?” (Jim Schefter, March 1984)

It’s small. It’s got a funny name. But watch out. At $2,495, this third-generation 32-bit Macintosh Apple computer is designed to slice up its competition. At its core is a daring attempt to leapfrog technology—and the IBM PC—to once again make Apple the leader in the microcomputer industry.

“What do you think?” asked Steven Jobs, the young man who made personal computers a multibillion-dollar business when he helped develop the first Apple computer in a Silicon Valley garage years ago. “What don’t you like about it?”

I didn’t have an answer. The Macintosh is a totally new machine, a third-generation computer. It uses a 32-bit microprocessor and comes supplied with a 128-kilobyte random-access memory, a 64K read-only memory that contains an advanced operating system, a built-in Sony 3.5-inch floppy-microdisk drive with a storage capacity of 400K, a detachable keyboard, and a mouse.

All of these qualities give it extremely high operating speed, superb ease and flexibility of use, and advanced graphics capability, which, until now, was unavailable in the microcomputer field.

Priced at $2,495, Mac will go head-to-head with the IBM PC. Will Apple recapture its number-one position with the new machine? Jobs thinks so. To find out why he has such great expectations, I spent a few days at Apple’s corporate headquarters talking to its hardware and software engineers and trying out the Mac for myself. Here’s what I learned.

Mac not only is different, it looks different. It’s packaged in an unusual vertical cabinet, 20 inches high, and has a tiny 10-inch-square base. It weighs just 20 pounds and is transportable with an optional carrying case that fits under a standard airline seat.

Its nine-inch gray screen, controlled by specially designed electronics, has one of the best graphics and alphanumeric displays in the industry. It won’t show color, but it will display shades of gray. Screen resolution is 512 by 342 dots, or pixels, at 78 dots per inch.

Inside the Mac

The simple appearance of Mac’s internal electronics belies the three-plus years of innovative design and engineering that preceded its introduction. The computer has just two boards, a digital logic-and-memory board and a power-supply board with the video-display electronics.

Most computers need additional circuit boards or chips inside in case you want to expand capability. For example, to add an extra disk drive, you’d have to add a disk-drive-controller board. Not with the Mac.

“The digital board includes all the chips, controllers, and output ports you’ll ever need,” said an enthusiastic Burrell Smith, a design engineer whose business card titles him “Hardware Wizard.”

“We’re maniacs about getting it to work as fast as we can,” added Smith, whose stocky physique shows only marginal effects from the number of pizzas the Macintosh team consumed on the job while working nights and weekends to perfect the computer.

Among the handful of chips on Mac’s digital board are two that were custom-designed for the computer and six others with specific modifications to give the machine high speed.

“In raw processing power, it’s four to six times faster than the PC. Output from the serial ports can be up to one million bits per second.” That’s equivalent to the computer sending or receiving about 100,000 characters per second, which is staggering if you consider that, at best, most personal computers communicate at less than 1,000 characters per second.
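The conversion is simple framing arithmetic: asynchronous serial links typically spend about 10 bits per character (8 data bits plus start and stop bits), and 1,000,000 bits per second divided by 10 bits per character gives 100,000 characters per second.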

In addition, the circuits are designed to accommodate the new 256K memory chips when their price makes them economical. That gives the Macintosh the potential for more than two megabytes of internal RAM, more than twice the maximum available in other machines.

Unlike many computers, no other change or addition is possible inside the cabinet. “There are no expansion slots,” Jobs said. “Everything is built in.”

Peripheral equipment (mouse, printer, modem, additional disk drives, external speaker) all connects through the five icon-marked ports at the rear. There is also a connection for “Applebus,” a communications network that ties together all Apple hardware and computers. Add-on disk drives, which may be useful but aren’t mandatory for running software, will be in short supply. Most units, to be delivered by Sony, are already dedicated to Macintosh production.

The $550 Macintosh printer is a modified C. Itoh Image Writer, which was redesigned to Mac specifications. “The major difference in the printer is a change in its read-only memory,” said Apple engineer Rick Hoiberg. Like nearly all dot-matrix printers, the Image Writer’s memory contained a “look-up table” of text and graphics characters. On receiving a character from a computer, the printer searched for it in the table, then sent the necessary instructions to the print head. If a character wasn’t stored in the table, it couldn’t be printed.

With the Macintosh, the table is used only for draft-quality printing.

All other instructions go directly from the computer to the print head. Each of the screen’s 175,104 pixels (512 by 342) can be sent individually to the print head.

“Fonts are controlled by the computer, not by the printer,” Hoiberg said. “In Macintosh, even text is considered to be graphics.” The result: You’re not stuck with a single type style and size, as with most printers. You can choose among many. And you can do highly complex graphics presentations, as I found out when I had a chance to use Mac (more about this later).
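Hoiberg’s distinction, a fixed glyph table in printer ROM versus host-rendered bitmaps, can be sketched in a few lines. This is a toy illustration of ours, not Apple’s code; the function names and the abbreviated glyph are invented:

```python
# Draft mode: the printer looks each character up in its own ROM
# table, so only glyphs stored at the factory can appear on paper.
ROM_TABLE = {"A": [0b00111000, 0b01000100, 0b11111110]}  # truncated toy glyph

def send_rows_to_print_head(rows):
    """Hypothetical stand-in for the wire protocol to the print head."""
    for row in rows:
        print(f"{row:08b}".replace("0", ".").replace("1", "#"))

def print_draft(text):
    for ch in text:
        if ch not in ROM_TABLE:
            raise ValueError(f"{ch!r} is not in printer ROM")
        send_rows_to_print_head(ROM_TABLE[ch])

# Macintosh mode: the computer rasterizes any font itself and ships
# raw pixel rows, so the printer needs no notion of "characters" at all.
def print_bitmap(screen_rows):
    send_rows_to_print_head(screen_rows)

print_draft("A")                        # limited to the ROM's typefaces
print_bitmap([0b10101010, 0b01010101])  # anything the screen can show
```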

Trying it

Macintosh easily outstrips the eight-bit Apple II and the 16-bit PC in both performance and ease of use. All operations are based on the upscale Lisa computer [PS, June ‘83], introduced a year ago: overlapping displays (called “windows”) on its screen that are controlled with the small, hand-held mouse rather than through the keyboard. You slide the mouse across a desk top, and an on-screen arrow moves in the same direction. Move it to point to a desired item (word processing, for example), and one click of the mouse’s button makes your selection: A menu of word-processing commands appears.

All this allows you to move freely between a variety of software programs and use common files for integrating text, graphics, spread sheets, data bases, or other information on the same screen.

To put Mac to work, I tried two new programs, MacWrite and MacPaint, sold as a package for $195. Within 10 seconds of inserting the hard-cased microdisk into Mac, the screen displays a menu showing every file on the disk. Gliding the mouse under my right hand, I moved the on-screen arrow to MacWrite and clicked the mouse button. Immediately a small icon, resembling a wristwatch, appeared. “That’s telling you to wait while it loads,” said software expert Joe Shelton. “Typical applications load in 10 seconds.” The disk drive operated so quietly, it appeared that nothing was happening.

Then, suddenly, MacWrite was on the screen.

Across the top of the high-resolution screen ran a line of available options: an apple icon to call up other programs, and the words FILE, EDIT, SEARCH, FORMAT, STYLE, and FONTS. Beneath this list were a header bar that would soon contain the name of my document, a ruler showing the character spacing and margin-tab settings, and another row of icons to automatically change the format of the text shown on the screen.

I mouse-moved the arrow to FILE. With one click, a menu appeared. I chose NEW from the menu to start a new document. After typing its name on the keyboard, I was ready to work. The Macintosh keyboard has a firm and comfortable feel that will appeal to touch typists. But I was more interested in experimenting than typing. Shifting the mouse on the desktop, I looked at the rest of MacWrite’s menus. My choices were legion. For example, all of the standard word-processing functions were instantly available: cut-and-paste, search-and-replace, copy-and-move, and more.

Being accustomed to moving a cursor or calling up menus through a keyboard, I initially found mouse control difficult. It’s a matter of retraining for current computer users, who would typically need a three- or four-hour adjustment period, but that won’t be necessary for first-time computer users.

In the next hours, I discovered a new world of word-processing and graphics imagery. Under the STYLE menu, I found that I could change the type size of anything from a single letter to an entire document (there are five choices, from 10-point to 24-point, which is approximately 1/1s-inch to 1/4-inch-high type) or change the type style, choosing from among plain text, bold, italic, underlined, outlined, and shadowed type. Moving to FONTS, I had 10 choices of typefaces in each of the sizes and styles. “Popular Science” looks strange in Old English italic.

Such capabilities will find wide use in any business doing presentations, reports, and similar documents. But it’s the graphic artistry of MacPaint that such professionals will most appreciate. Calling up the program, I was faced with an incredible variety of choices.

There were 20 small boxes on the screen’s left side, another 41 along the bottom, five styles of straight lines in the lower-left corner, and a standard menu across the top. Want to draw a simple box? Mouse-select the unfilled rectangle from the left column, and move the cursor to the screen: MacPaint draws a box. Using the mouse again, move the box anywhere on the screen, enlarge or reduce it, change the ratios of its sides, or fill it with any of 41 shaded patterns.

Use a lasso icon to rope in words or diagrams, then move them or overlay them on other graphics. Use the marquee icon to include the original background. Use the spray can, the paint brush, or the pencil for freehand additions. Add circles or other shapes. Then use the eraser icon to wipe it all out or just smooth an edge.

It was all too much to absorb. With MacPaint, a good artist is virtually unlimited. And even a novice can create usable business graphics. The finished product is easily moved-again with the mouse-into a report or other document where its size and position can be tailored, or it can be printed out for reproduction or slide preparation.

“You can create the ugliest documents anyone has ever seen,” Burrell Smith joked.

“There’s more,” Shelton said, pulling down a menu that revealed something called FatBits. With FatBits, MacPaint enlarges selected portions of a drawing or text to show on the screen enlargements of the individual pixels. Each pixel can be independently modified with the mouse, changed in shape or shading, to give the user absolute control over the final product. New typefaces also can be designed and stored in memory by building them dot by dot.
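FatBits is, at bottom, nearest-neighbor magnification: each screen pixel becomes an n-by-n block big enough to edit individually. A minimal sketch of that enlargement (ours, not MacPaint’s actual code):

```python
def fatbits(bitmap, scale=8):
    """Blow up a 2-D array of 0/1 pixels so each one can be edited
    as a large square, as MacPaint's FatBits mode does."""
    return [[pixel for pixel in row for _ in range(scale)]
            for row in bitmap for _ in range(scale)]

glyph = [[0, 1, 0],
         [1, 1, 1]]
for row in fatbits(glyph, scale=3):
    print("".join("#" if p else "." for p in row))
```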

The Macintosh graphics alone will make the machine attractive to the business community. But the computer’s real power is its ability to shift freely and to integrate text, graphics, financial analyses, and any other data between different software applications, even those written by different suppliers. And software compatibility is the key to any computer’s success.

Compatibility-now and future

Macintosh and Lisa form the core of what the company calls its new Apple 32 system. (Simultaneously with Macintosh, Apple announced the Lisa-2 in three models: a basic unit with one-half megabyte of random-access memory at $3,495; a model with one-half megabyte of RAM and a five-megabyte hard disk that will go for $4,495; and a model with a 10-megabyte hard disk that will cost $5,495.)

All Macintosh software will run on Lisa machines, and some Lisa programs, such as a $99 “Project Manager,” are being modified for use on Mac. (Neither computer will run Apple II or III software.) Included with the Mac is a limited-software package that includes a desk-top manager, on-screen calculator, on-screen clock-calendar, and some simple games.

But Apple expects most of Mac’s software to come from outside developers. Scores of sophisticated software packages, independently written and marketed, have contributed to making popular both the Apple II and the IBM PC. In 1981, in a move to duplicate the software phenomena for Macintosh, Apple delivered its specifications to more than 100 software developers.

“We’ve been working with Apple for almost two years on the Macintosh,” said Bill Gates, chairman of MicroSoft. “We helped develop and debug some of its interior software, and we have five packages of our own that we will be marketing.”

Included are MicroBasic and MicroPlan, now being delivered, and software for data-base management, word processing, and graphics. Each one expands Mac’s formidable capabilities and will sell for less than $200.

Other software companies with Macintosh packages that are either already completed or are about to be released include Lotus, with a Mac version of its number-one nationally selling 1-2-3 package, previously available only for the IBM PC; Ashton-Tate, with Mac versions of its popular dBase-II and Friday! data-base managers; and Software Publishers, which modified its PFS software line.

Gates sees Macintosh as a crucial test of whether any personal-computer company can take an independent road in a market dominated by IBM. “If Macintosh isn’t a success, then the market is left to the PC,” he said. “But we’re super-enthusiastic. If Apple can meet its production goals, we expect half of MicroSoft’s retail sales in 1984 to be Macintosh-related.”

The cover of the March 1984 issue of Popular Science featuring Apple’s mini Mac and an early “smart” window.

Some text has been edited to match contemporary standards and style.

The post From the archives: Bill Gates hypes up the Apple mini Mac in 1984 appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
From the archives: ‘Do beavers rule on Mars?’ https://www.popsci.com/science/do-beavers-rule-on-mars/ Fri, 06 May 2022 11:00:00 +0000 https://www.popsci.com/?p=439960
“Do beavers rule on Mars?” (Thomas Elway, May 1930)
Illustrations from “Do beavers rule on Mars?” (Thomas Elway, May 1930). Popular Science

In the May 1930 issue of Popular Science, Thomas Elway proposed a very imaginative take on life on Mars.

The post From the archives: ‘Do beavers rule on Mars?’ appeared first on Popular Science.

]]>
“Do beavers rule on Mars?” (Thomas Elway, May 1930)
Illustrations from “Do beavers rule on Mars?” (Thomas Elway, May 1930). Popular Science

To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Perhaps best known for his colorful depiction of life on Mars in Popular Science’s May 1930 feature, “Do Beavers Rule On Mars?”, science writer Thomas Elway was no stranger to conjecture. In addition to his prediction of a ruling class of Red Planet beavers whose “eyes might be larger than those of the Earthly beaver because the sunlight is not so strong,” and whose “bodies might be larger because of lesser Martian gravity,” Elway also described a species of crab that might inhabit the Moon (“The Moon is Made of Cinders,” Popular Science, December 1929). These shellfish donned hard outer shells to “prevent loss of bodily fluids into airless space” and had “eyes which could turn sunlight into food.”

When it came to fantasizing about life among the cosmos, Elway was not alone at the turn of the last century. Advances in physics, telescope technology, and rocket science sparked the imaginations of more than just science journalists. Hugo Gernsback launched America’s first science fiction magazine, Amazing Stories, in 1926, which featured tales and images of alien life. Often blurring the lines between science fiction and science fact, the budding genre was known as scientifiction.

To be fair, not all of Elway’s predictions flirted so openly with make-believe. In a 1924 story for Popular Radio, “Rapid Transit By Radio,” he predicted that the same electromagnetic forces used to propagate radio waves would soon be harnessed to levitate trains. Elway’s “Radio Express” would run through “air-tight tubes” and might travel at speeds of 10,000 mph, whisking Midwesterners “in a few minutes to the door of a Broadway theatre.” Nearly a century later, on November 8, 2020, passengers traveled roughly 100 miles per hour through an airtight tube in a trial of Virgin’s Hyperloop. Elon Musk is chasing the “Radio Express” too. But even massive wealth can’t transform science fiction into science fact. Go ask Elway.

“Do beavers rule on Mars?” (Thomas Elway, May 1930)

No trace of human intelligence has been found on the red planet, and it is thought that evolution, through lack of the stress that helped on earth, may have halted with some animal adapted to a land and water life.

Mars is so like the Earth that men might live there. It has air, water, vegetation, a twenty-four-hour succession of day and night, and daily temperatures no hotter and nights not much colder than are known on Earth. But because Mars has no mountain ranges and probably never had an Ice Age, it is considered highly improbable that it is inhabited by manlike creatures or by any that possess what men call intelligence. The evolution of life on Mars must have been different from that on Earth.

One of the best signs of intelligence on Mars, Dr. Clyde Fisher, of the American Museum of Natural History, New York City, said recently, would be some indication of artificial light on the planet. Undoubtedly, lighted cities on Mars could be seen through the telescopes now in use. However, there is one condition that prevents satisfactory and conclusive observation. When Mars is closest to the Earth, both planets are on the same side of the sun. Then only the sunlit side of Mars is seen. To see any part of the night side of Mars, observation must be made when it is part way around in its orbit toward the far side of the sun, so that a slice of both the dark and the lighted sides can be seen. When even a part of the night side is visible, Mars is relatively far away and difficult to see clearly. The Martians, if there are any, would not have equal difficulty in observing the dark side of the Earth, for when the two planets are nearest to each other, the Earth is showing Mars its dark side.
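The geometry here can be put in one line (our worked example, not Dr. Fisher's). For an outer planet, the Sun-Mars-Earth angle $\alpha$ is largest when the line from Mars to Earth just grazes Earth's orbit, so with orbital radii of about 1.00 and 1.52 astronomical units,

$$\sin\alpha_{\max} = \frac{1.00}{1.52} \approx 0.66 \quad\Longrightarrow\quad \alpha_{\max} \approx 41^\circ.$$

Mars can therefore never show Earth more than a modest sliver of its night side, while Earth, at closest approach, turns its full dark hemisphere toward Mars.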

These consequences of the orbits in which the two planets move might make it difficult for the dim glow of lighted Martian villages, were any such in existence, to be detected from the Earth. Cities as bright as New York or Paris, on the other hand, undoubtedly would be visible. With the new 200-inch telescope which, it is planned, will be erected in California, it surely would be possible, Dr. Fisher predicted, to distinguish such brightly lighted cities, if any such Martian centers of civilization exist. If such artificial lights are never seen, he added, it might go a long way toward proving that Mars does not possess intelligent life. Other students of the subject, however, say it is possible that Martian civilization may correspond to that of an earlier, pre-artificial-light era on Earth. In any case, astronomers agree that there is a practical certainty that Mars possesses kinds of life below human intelligence.

Any deduction about the life forms on Mars or other planets, in the opinion of leading astronomers, must start, if it is to be at all reasonable, with the idea of the distinguished Swedish scientist, Dr. Svante Arrhenius, of one kind of life-germ pervading the entire solar system. There is no reasonable way even to guess the form of this life-germ. It may, perhaps, have drifted, as tiny living spores, from planet to planet, whirled through space by the pressure of light.

Whatever its form, the life-germ, biologists assume, probably developed on Mars, much as it did on Earth, in oceans which have evaporated in the course of ages. Early conditions on the two planets are supposed to have been very similar.

The theory that Martian life evolved along lines similar to those followed by evolution of life on Earth is supported by at least one definite fact. Careful spectroscopic studies at Mt. Wilson Observatory, near Pasadena, Calif., and elsewhere have disclosed that gaseous oxygen exists in the Martian atmosphere. The presence of oxygen gas is highly significant, since the only known way in which any planet can obtain a supply of this gas is through the life activities of plants.

Following the lead of the great expert in Martian astronomy, the late Professor Percival Lowell, astronomers long have recognized on Mars dark-colored spots which are believed to be covered with vegetation. The oxygen which spectroscopes show in the Martian air is taken as another proof that this vegetation exists. 

Since the activity of plants is the only known process of cosmic chemistry by which free oxygen can be produced on the surface of a cooled planet, the presence of oxygen in the rarefied air of Mars indicates that vegetation there must have produced oxygen out of water and sunlight as it has done on earth. It is difficult to exaggerate the importance to Martian theorizing of the definite fact that Mars has oxygen and, therefore, vegetation.

A certain way along the path of evolution, Martian life shows evidence of having undergone a development like that on Earth. What happened after that is a matter of deduction. The known facts about Mars are the fruits of years of astronomical observation and study. The dark and light markings on its surface can be seen through a large telescope. The lighter ones are reddish or yellowish and usually are interpreted as being deserts. The darker areas are greenish or bluish in color and are universally ascribed to vegetation. Mars possesses two white polar caps. Recent measurements of Martian temperatures by Dr. W. W. Coblentz and Dr. C. O. Lampland, at the Flagstaff Observatory, indicate that these are composed of snow and ice.

In the Martian autumn these caps increase and become whiter. In the planet’s spring they shrink and often seem to be surrounded by wide rings of bluish or blackish material, which may be sheets of water or vegetation. Still more significant are the springtime changes in the planet’s area of supposed vegetation. Many of these darken in color. Others widen or lengthen. Often new dark areas appear where none had been visible during the Martian winter. Few astronomers now doubt that these dark areas represent some kind of vegetation. 

So far, everything runs strikingly parallel with evolution on Earth. It is probable that it will be found to have run parallel farther still and that animal life on both planets, too, has been similar—for at least part of the evolutionary story. But during all the years of earnest and competent research not one clear sign of manlike life on Mars has been detected. Professor Lowell’s famous Martian “canals,” which for a long time were considered a probable sign of the intelligent direction of water, are now believed to be wide, shallow river valleys.

This lack of manlike life is precisely what a biologist would expect. Man and man’s active mind are believed to be products of the Great Ice Age, for that time of stress and competition on Earth is what is supposed to have turned mankind’s anthropoid ancestors into humans. The period of ice and cold over wide areas of the earth was caused, at least in part, by the elevation of continents and mountain ranges. On Mars, no mountain ranges exist, and it probably never had an Ice Age.

It is on these hypotheses that science bases its assumption that there is no human intelligence on Mars, and that animal life on the planet is still in the age of instinct. The thing to expect on Mars, then, is a fish life much like that on earth, the emergence of this fish life onto the land, and the evolution of these Martian land-fishes into reptile-like creatures. Finally, animals resembling Earth’s present rodents like rats, squirrels, and beavers would make their appearance.

The chief reason to expect this final change of Martian reptiles into primitive mammals lies in the fact that on Earth this evolution seems to have been forced by changeable weather. And Mars now possesses seasonal changes like those on Earth.

Pure biological reasoning makes it probable, therefore, that the evolution of warm-blooded animals may have occurred on Mars much as it did here. There seems no reason to believe that Martian life has gone farther than that. Mars is a relatively changeless planet. Biologists suppose that the rise and fall of mountains, the increase and decrease in volcanic activity, and the ebb and flow of climate forced life on earth along its upward path. Martian life of recent ages seems to have lacked these natural incentives to better things.

Now, there is one creature on Earth for the development of whose counterpart the supposed Martian conditions would be ideal. That animal is the beaver. It is either land-living or water-living. It has a fur coat to protect it from the 100 degrees below zero of the Martian night.

The Martian beavers, of course, would not be exactly like those on Earth. That they would be furred and water-loving is probable. Their eyes might be larger than those of the earthly beaver because the sunlight is not so strong, and their bodies might be larger because of lesser Martian gravity. Competent digging tools certainly would be provided on their claws. The chests of these Martian beavers would be larger and their breathing far more active, as there is less oxygen in the air on Mars.

Such beaver-Martians are nothing more than pure speculation, but the idea is based upon the known facts that there is plenty of water on Mars; that vegetation almost certainly exists there; that Mars has no mountains and could scarcely have had an Ice Age; and that evidences of Martian life are not accompanied by signs of intelligence.

Herds of beaver-creatures are at least a more reasonable idea than the familiar fictional one of humanlike Martians digging artificial water channels with vast machines or the still more fantastic notion of octopuslike Martians sufficiently intelligent to plan the conquest of the Earth.

The cover of the May 1930 issue of Popular Science featuring stunt people, conmen, alcohol and extraterrestrial rodents.

Some text has been edited to match contemporary standards and style.

From the archives: When 1970s cellular technology made ‘traveling telephones’ more accessible https://www.popsci.com/technology/cellular-technology-emergence/ Thu, 05 May 2022 11:00:00 +0000 https://www.popsci.com/?p=439359
An illustration from the January 1978 issue of Popular Science from an article about mobile phones.
"Traveling telephone –new technology expands mobile/portable service" by John Mason, January 1978. Popular Science

In the January 1978 issue of Popular Science, we explored the latest innovations in wireless services and their implications for the future.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Until Heinrich Hertz discovered radio waves in 1887, the vast and invisible electromagnetic spectrum was a silent wilderness, punctuated by nature’s static bursts. But Hertz set in motion a new era that would quickly fill that void with low-end radio waves, mid-range microwaves, and high-end gamma rays (medical imaging). Despite the breadth of the wireless spectrum, a small slice (roughly 500 MHz–3.5 GHz) has been staked out like no other: As Popular Science reported in January 1978, it turns out to be the optimal range to propagate signals to and from mobile devices, like traveling telephones. 

Scarcity has driven mobile network innovation ever since the first frequencies were set aside by the FCC in the 1940s. Even as the size of mobile phones shrank from crate-sized in the 1950s and ‘60s to brick-sized in the ‘70s, the pinch was less phone form factor than it was inefficient use of spectrum. Until the late ‘70s, each new mobile phone that went live in a city required its own dedicated frequency, the way a local radio station requires its own channel. As John Mason reported, “because radio channels are so limited, there are long waiting lists in most cities for mobile-phone service.” 

In 1974, to address pent-up demand, the FCC released more spectrum but insisted that companies find a better way to use it. As Mason explains with geeky precision, cellular technology got its name from its design, deploying short-range transmission towers to divide large regions, like cities, into honeycomb-shaped cells, enabling frequency reuse. More than any other technology, cellular (first conceived in 1948 but not computationally practical until the 1970s) paved the way for the mobile era. 

Since the ‘70s, the FCC has continued to release spectrum. A mere fraction of the mobile-device slice sold for more than $20 billion in a November 2021 auction (the 3.45 GHz band). Cell networks continue to aim higher on the spectrum, shrinking cells to overcome propagation limitations and deliver more data. Today’s 5G networks are designed to reach as high as 40 GHz to achieve blazing data-delivery speeds.

“Traveling Telephone–new technology expands mobile/portable service” (John Mason, January 1978)

There’s a button labeled SND on Motorola’s futuristic-looking Pulsar II radiotelephone. I pushed it, and a number stored in its microcomputer memory began stepping, digit by digit, across the red LED handset display. This amazing car telephone not only remembers 10 often-used phone numbers, but calls any of them at the press of one button.

Earlier, at Motorola’s Communications Group plant outside Chicago, I had picked up a portable Dynatac (Dynamic adaptive total area coverage) phone, tapped out a number on its Touch-Tone keypad, and called my New York office. Electronic gear at the plant patched my call directly into the phone network. A mobile/portable telephone operator wasn’t needed. Advanced mobile and portable telephones are already in use throughout the country. Motorola markets its $890 Pulsar II (less transceiver) for 150- and 450-MHz systems; its Dynatac portables aren’t available yet, although other compact portable phones are sold and leased.

But while fancy hardware for on-the-go telephone calls is readily available, the radio frequencies needed to carry today’s heavy volume of mobile/portable calls are not available. Because radio channels are so limited, there are long waiting lists in most cities for mobile phone service.

One cure for this congestion, according to communications experts I’ve talked with, is a blend of the latest in computer and RF technology and the concept of radiotelephone cells and frequency reuse. The two phones I used are examples of this new technology.

The concept is simple. A conventional mobile phone system uses one high-powered central transmitter and a sensitive receiver serving all mobile units in the area. Thus a single frequency can be used by only one mobile unit at a time. In a cellular arrangement, the high-powered central transmitter/receiver is not used. Instead, many smaller transmitter/receivers that each cover only a few square miles are installed. Now, a given frequency can be used simultaneously by mobile units in several different areas or “cells” without interfering with each other. The result: A lot more calls can be placed on a given frequency band, and a lot of those people on the waiting list for portable phones can get service.
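
To make the reuse idea concrete, here is a minimal sketch in Python. It assumes a square grid of cells and four channel groups; real plans, including the ones described below, used hexagonal clusters, so the grid, the group count, and the function name are our illustrative inventions, not Motorola’s design.

def channel_group(row, col):
    """Assign one of four channel groups in a repeating 2x2 pattern.

    No two touching cells (even diagonally) share a group, yet each
    group reappears just two cells away. That repetition is the
    frequency reuse that multiplies system capacity.
    """
    return (row % 2) * 2 + (col % 2)

for row in range(4):
    print([channel_group(row, col) for col in range(4)])
# Prints rows like [0, 1, 0, 1] and [2, 3, 2, 3]: every channel group
# is reused across the grid, but never by adjacent cells.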

While the system is simple in principle, it is enormously complicated in practice. How do you decide which mobile unit gets which frequency at which time? And how do you make sure that two adjacent cells aren’t using the same frequency simultaneously—a situation that could possibly cause interference? The answer is a complicated system of computer control. While details vary from system to system and even within systems—more about that later—here’s how one typical setup might operate.

In the system now planned by Motorola for the Washington-Baltimore area, the entire region would be broken into five hexagonal cells, each with an 11-mile radius (see diagram).

The base antenna serving some hexagonal cells has six V-shaped sector transmit-and-receive antennas, breaking each of these areas into six smaller cells.

What happens when you’re driving around and somebody calls your number? “The system first has to locate a mobile unit in order to assign a proper channel in the proper cell,” says Motorola group product manager Andrew Daskalakis.

Pinpointing and monitoring your location is accomplished with computers—nine will ultimately be used in the Washington-Baltimore developmental system.

Computer in your trunk

For an incoming call, computer data on special signaling channels are beamed over all cell transmitters. A powerful microcomputer built into the bread-box-size transceiver in your car trunk recognizes your mobile code. Your computer then transmits a signal that instantly tells a base-station computer what cell you’re in.

Next, to determine how far you are from the cell antenna, the main computer sends a six-kHz tone to your mobile. This tone triggers a transponder that retransmits the tone back to the base-station receiver. By comparing phase differences between the transmitted and received tone, the distance from the base to your car is computed.
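
The ranging arithmetic is easy to sketch. Assuming the 6-kHz tone rides a radio carrier and therefore propagates at the speed of light, the round-trip phase lag maps directly to distance; the constants and function below are our illustration of the principle, not Motorola’s code.

import math

C = 3.0e8                  # propagation speed, meters per second
F_TONE = 6.0e3             # ranging tone from the article, Hz
WAVELENGTH = C / F_TONE    # 50 km

def distance_from_phase(delta_phi_rad):
    """One-way distance implied by the round-trip phase lag of the tone."""
    round_trip_m = (delta_phi_rad / (2 * math.pi)) * WAVELENGTH
    return round_trip_m / 2

# A quarter-cycle lag (pi/2 radians) puts the car about 6.25 km out.
print(distance_from_phase(math.pi / 2) / 1000, "km")

# The measurement is unambiguous out to half a wavelength of round trip,
# i.e., 25 km one way, comfortably beyond the 11-mile (~17.7 km) cells.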

Using this distance information, and the strength of your computer’s signal, the base computer can crank the power output of your mobile transmitter up or down. “It takes care of the portable-in-the-high-building problem, and the mobile-on-a-high-hill problem,” says Daskalakis. Traveling telephones at those elevations can transmit much farther than normal, interfering with other cells.

The inaudible chit chat between computers—redundantly coded to prevent errors from static or fading—takes only a split second. Your mobile is automatically tuned to a voice channel. Once you’re “linked up,” the main computer actuates the “ringer” in your mobile.

After you answer your call, and while you’re driving, the main computer periodically scans your mobile to monitor your location. If you move to where another cell transceiver would provide better reception, the base computer switches you instantly. A similar “handshake” between computers occurs just before the dial tone when you place a call.

The system just described would go into operation in stages. In less heavily populated areas, a less complicated system would be adequate. There, the receiver section of the base station would use multiple antennas to split the cell into six pie-shaped sectors. These high-gain receive antennas can pick up signals from low-power (one watt) portable telephones. But a single omnidirectional transmit antenna could cover the whole cell. Another cellular system is now beginning experimental operation in Chicago. It is operated by Illinois Bell, and, while the principles are the same, it differs somewhat in operating details from the Motorola system. The Chicago system, for example, will initially have 10 cells, each with an eight-mile radius from its central transmitter. Cell coverage will blanket a 2100-square-mile region in Chicago.
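
That coverage figure survives a quick back-of-envelope check (our arithmetic, not Bell’s), treating each cell as a plain circle rather than a hexagon:

import math

cells = 10
radius_miles = 8
coverage = cells * math.pi * radius_miles ** 2
print(f"{coverage:.0f} square miles")   # ~2011, close to the ~2100 quoted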

This eight-mile cell system, however, is less sophisticated than the setup originally proposed by American Telephone & Telegraph Co. That system, presented in 1971 to the FCC, specified four-mile radius cells, with directional antennas at alternate corners of each hexagonal cell (see diagram). With four-mile cells, frequency reuse would be possible in cells 18 miles apart.

In the system being built, frequency reuse is only possible in two cells, about 48 miles apart. Conventional mobile systems usually have a reuse distance greater than 100 miles.
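
Those reuse distances happen to line up with the textbook hexagonal-layout relation D = R * sqrt(3 * N), where R is the cell radius, D the reuse distance, and N the number of cells per cluster. The formula is a gloss we are adding, not something the article states; solving it for N roughly recovers both designs.

def cluster_size(radius_mi, reuse_distance_mi):
    """Cells per cluster implied by D = R * sqrt(3 * N)."""
    return (reuse_distance_mi / radius_mi) ** 2 / 3

print(cluster_size(4, 18))   # ~6.75 -> roughly the classic 7-cell cluster (1971 plan)
print(cluster_size(8, 48))   # 12.0  -> a sparser 12-cell cluster (system as built)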

“We’re authorized to serve 2500 customers,” says James Troe of Bell Laboratories’ telephone service trial department, which is setting up the new system. “We’re sizing the system and channel capacity to accommodate that level,” he said, to explain why AT&T is building a less costly system.

Smaller cells would come later to meet growing demand. Companies can expand cellular systems to serve tens or hundreds of thousands—simply by adding more transmitters, shrinking cell sizes, and reusing the frequencies more often.

While the Illinois Bell system is basically compatible with the Washington-Baltimore setup, there are some differences. A Motorola Dynatac portable, for example, would not function adequately in the initial Chicago system, which lacks sectorized high-gain receive antennas, although a Dynatac car telephone would.

The drive to develop systems that use scarce radio-spectrum space efficiently goes back to 1968, when the FCC began considering what to do about the tremendous demand for mobile telephone service and the lack of frequency space to satisfy that demand. At that time, mobile phones operated in the 35-, 150-, and 450-MHz bands.

In 1974, after extensive hearings and delays, the FCC set aside a slice of UHF frequencies from 806 MHz to 947 MHz. Parts of this so-called 900-MHz band were allocated for private land mobile companies, public service use, and utilities such as telephone companies and Radio Common Carriers (RCC’s) that now operate phone and pocket pager service [PS, July ’77] in the conventional 35-, 150-, and 450-MHz frequency bands.

When the FCC allocated part of the 900-MHz band for mobile telephone use, it also specified that companies interested in using the band would have to design systems to meet growing service demands.

But though the basic decision was made in 1974 and the equipment is ready, no such system is at present operational (the Chicago system is now under limited test, but is not yet available for use by the public). One principal factor blocking final authorization: The 700 small RCC’s that operate a lot of the country’s radiotelephone and paging service don’t want the competition. Almost anybody can go into business and serve a local area as long as he needs only one central transmit and receive location. But the new systems, which would require many base stations plus complex computer control networks, would cost more than most RCC’s could afford. Thus the RCC’s have been protesting vigorously at FCC hearings, filing court cases, and otherwise obstructing movement.

Bell is now operating in Chicago under an experimental license. Motorola, which has signed a contract with a Baltimore RCC, American Radio Telephone Service, has received FCC approval to go ahead and build its proposed Washington-Baltimore system.

Meanwhile, some experts—and some RCC’s—are arguing that cells aren’t really the most efficient way to expand traveling-telephone service. They recommend several alternative concepts, such as the use of digitized voice signals.

Other technologies

A consortium of three RCC’s has filed an application to try this technique in the Washington, D.C. area. This application, of course, is competing before the FCC with the Motorola application. The RCC noncellular concept, which was developed on paper by Harris Corp., requires one extremely powerful (375 kW) transmitter site. Voice signals would be digitized and beamed out as bursts of pulses in packets, each packet coded for a separate mobile.

Yet another concept, known as spread spectrum, is receiving attention among communications experts. Used extensively by the military, spread-spectrum signals are highly immune to jamming and interception. Imagine that each FM station spread its signal across the entire FM band from 88 MHz to 108 MHz. Each station, however, would encode its output so that a special filter in your FM set could decode its signal.
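
A toy direct-sequence example makes the “special filter” idea concrete. In the Python sketch below, each data bit is multiplied by a pseudo-noise chip code, and only a receiver correlating against the same code recovers the bits. The eight-chip code and function names are invented for illustration; real systems spread modulated radio signals, not lists of bits.

# Hypothetical 8-chip pseudo-noise code shared by transmitter and receiver.
CODE = [1, -1, 1, 1, -1, 1, -1, -1]

def spread(bits):
    """Multiply each data bit by every chip of the code."""
    return [bit * chip for bit in bits for chip in CODE]

def despread(chips):
    """Correlate each code-length block against the code to recover bits."""
    n = len(CODE)
    bits = []
    for i in range(0, len(chips), n):
        correlation = sum(c * k for c, k in zip(chips[i:i + n], CODE))
        bits.append(1 if correlation > 0 else -1)
    return bits

data = [1, -1, 1]
assert despread(spread(data)) == data   # the matched code recovers the data

A receiver correlating with the wrong code sees a correlation near zero, which is why many users (or, in the FM analogy above, many stations) could share the same band.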

For mobile telephone communications, recently developed semiconductor and electronic filter technologies might make it possible for everyone in the country to have a unique spread-spectrum decoding circuit for a traveling phone.

While you can expect to hear about various technologies for mobile systems in coming years, AT&T executive vice-president Thomas Nurnberger thinks expansion of the Chicago cellular system concept “will make it possible in the future for virtually anyone on the move to have a telephone in cars or temporary locations.” Nurnberger cites the boom in CB radios as evidence of a pent-up national need for two-way communication that AT&T thinks can be satisfied with cellular technology.

January 1978 cover of Popular Science featuring a cover article on mobile phone technology, waterless toilets, and microelectronics.

Some text has been edited to match contemporary standards and style.

From the archives: The germ theory of disease breaks through https://www.popsci.com/science/germ-theory-of-disease-origin/ Wed, 04 May 2022 11:00:00 +0000 https://www.popsci.com/?p=439579
An excerpt from the famous germ theory of disease lecture
"The germ theory of disease" by H. Gradle, M.D, September 1883. Popular Science

Imperfect but important, the seminal lecture on the origins of disease appeared in the 1883 issue of Popular Science Monthly.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Germs first came into focus, literally, under the microscope of Robert Koch, a doctor practicing in East Prussia in the 19th century. Until then, as Popular Science reported in September 1883, getting sick was attributed to everything from evil spirits to “impurities of the blood.” Koch first eyed Bacillus anthracis, or anthrax, in animal tissue in 1877, an observation that led him to link microbes with disease. But it was his isolation of Mycobacterium tuberculosis—the agent of the disease then known as consumption—in 1881 that set off an avalanche of germ discovery.

In the 1880s alone, Koch and others cataloged a slew of plagues: cholera (1883), salmonella (1884), diphtheria (1884), pneumonia (1886), meningitis (1887), and tetanus (1889). By 1881, Louis Pasteur, a French chemist, had already developed the world’s first laboratory-made vaccine, used on sheep to prevent anthrax.

Henry Gradle, a Chicago physician and the author of Popular Science’s 1883 “The Germ-Theory of Disease” (originally delivered as a lecture to the Chicago Philosophical Society in November 1882), had been a pupil under Koch and brought word of the German and French discoveries to the US and UK. Gradle writes with great flourish, not holding back his disdain for those who disagreed with the new germ theory, likening them to “savages” of human antiquity who saw only “evil spirits in disease”—as vulgar as such phrases are in a modern context.

Although now considered a watershed moment for medicine, germ theory had gaping holes at the time, among them the immune system’s still-unexplained role in disease. Antoine Béchamp, a chemist and Pasteur’s bitter rival, argued that it was not germs but the state of the host (the patient) that caused illness; otherwise, he noted, everyone would be sick all the time. Béchamp had his followers who stood fast against the germ theory.

As Thomas Kuhn, a noted philosopher of science, proposed in his 1962 essay, The Structure of Scientific Revolutions, paradigm shifts like germ theory were “revolutions” because they shook both science and society.

“The germ theory of disease” (H. Gradle, M.D., September 1883)

Diseases, the scourges of the human race, are attributed by savages to the influence of evil spirits. Extremes often meet. What human intelligence suspected in its first dawn has been verified by human intelligence in its highest development. Again, we have come to the belief in evil spirits in disease, but these destroyers have now assumed a tangible shape. Instead of the mere passive, unwitting efforts with which we have hitherto resisted them, we now begin to fight them in their own domain with all the resources of our intellect. For they are no longer invisible creatures of our own imagination, but with that omnipotent instrument, the microscope, we can see and identify them as living beings, of dimensions on the present verge of visibility. The study of these minute foes constitutes the germ theory.

This germ theory of disease is rising to such importance in medical discussions that it cannot be ignored by that part of the laity who aspire to a fair general information. For it has substituted a tangible reality for idle speculation and superstition so current formerly in the branch of medical science treating the causes of disease. Formerly—that is, within a period scarcely over now—the first cause invoked to explain the origin of many diseases was the vague and much-abused bugbear “cold.” When that failed, obscure chemical changes, of which no one knew anything definitely, or “impurities of the blood,” a term of similar accuracy and convenience, were accused, while with regard to contagious diseases medical ignorance concealed itself by the invocation of a “genius epidemicus.” The germ theory, as far as it is applicable, does away with all these obscurities. It points out the way to investigate the causes of disease with the same spirit of inquiry with which we investigate all other occurrences in nature. In the light of the germ theory, disease is a struggle for existence between the parts of the organism and some parasite invading it. From this point of view, diseases become a part of the Darwinian program of nature.

The animal body may be compared to a vast colony, consisting as it does of a mass of cells, the ultimate elements of life. Each tissue, be it bone, muscle, liver, or brain, is made up of cells of its own kind, peculiar to and characteristic of the tissue. Each cell represents an element living by itself, but capable of continuing its life only by the aid it gets from other cells. By means of the blood vessels and the nervous system, the different cells of the body are put into a state of mutual connection and dependence. The animal system resembles in this way a republic, in which each citizen depends upon others for protection, subsistence, and the supply of the requisites of daily life. Accustomed as each citizen is to this mutual interdependence, he could not exist without it. Each citizen of this animal colony, each cell, can thrive only as long as the conditions persist to which it is adapted. These conditions comprise the proper supply of food and oxygen, the necessary removal of the waste products formed by the chemical activity of all parts of the body, the protection against external mechanical forces and temperature, as well as a number of minor details. Any interference with these conditions of life impairs the normal activity of the entire body, or, as the case may be, of the individual cells concerned. But the animal system possesses the means of resisting damaging influences. Death or inactivity of one or a few citizens does not disable the state. The body is not such a rigid piece of mechanism that the breakage of one wheel can arrest the action of the whole. Within certain limits, any damage done to individual groups of cells can be repaired by the compensating powers of the organism. It is only when this compensating faculty fails, when the body can not successfully resist an unfavorable influence, that a disturbance arises which we call disease. This definition enables us to understand how external violence, improper or insufficient food, poisons, and other unaccustomed influences, can produce disease. But modern research has rendered it likely that the diseases due to such causes are not so numerous as the affections produced by invasion of the body by parasites.

Of these a few are known to be animals―for instance, the trichina, and some worms found in the blood in certain rare diseases. But the bulk of the hosts we have to contend with is of vegetable nature, and belongs to the lowest order of fungi, commonly termed bacteria.

Special names have been given to the different subdivisions of this class of microscopic beings―the rod-shaped bacteria being termed bacilli; the granular specimens, micrococci; while the rarer forms, of the shape of a screw, are known as spirilla.

Bacteria surround us from all quarters. The surface of the earth teems with them. No terrestrial waters are free from them. They form a part of the atmospheric dust, and are deposited upon all objects exposed to the air. It is difficult to demonstrate this truth directly with the microscope, for in the dry state bacteria are not readily recognized, especially when few in number. But we can easily detect their presence by their power of multiplication. We need but provide a suitable soil. An infusion of almost any animal or vegetable substance will suffice―meat broth, for instance―though not all bacteria will grow in the same soil. Such a fluid when freshly prepared and filtered, is clear as crystal, and remains so if well boiled and kept in a closed vessel, for boiling destroys any germs that may be present, while the access of others is prevented by closure of the flask. But as soon as we sow in this fluid a single bacterium, it multiplies to such an extent that within a day the fluid is turbid from the presence of myriads of microscopic forms. For this purpose we can throw in any terrestrial object which has not been heated previously, or we can expose the fluid to the dust of the air. Air which has lost its dust by subsidence or filtration through cotton has not the power of starting bacterial life in a soil devoid of germs. Of course, the most certain way of filling our flask with bacteria is to introduce into it a drop from another fluid previously teeming with them.

In a suitable soil each bacterium grows and then divides into two young bacteria, it may be within less than an hour, which progeny continue the work of their ancestor. At this rate a single germ, if not stinted for food, can produce over fifteen million of its kind within twenty-four hours! More astounding even seems the calculation that one microscopic being, some forty billion of which can not weigh over one grain, might grow to the terrific mass of eight hundred tons within three days, were there but room and food for this growth!
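
Gradle’s first figure checks out: one division per hour means 24 doublings in a day, and 2 raised to the 24th power is indeed over fifteen million. A quick verification (our arithmetic, not the lecture’s):

# One division per hour -> 24 doublings in a day.
print(2 ** 24)   # 16777216, indeed "over fifteen million"

# The three-day, 800-ton figure is far more sensitive to the assumed
# division interval and food supply, so we leave it as period hyperbole.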

During their growth the bacteria live upon the fluid, as all other plants do upon their soil. Characteristic, however, of bacteria-growth is the decomposition of any complex organic substances in the fluid to an extent entirely disproportionate to the weight of the bacteria themselves. This destructive action occurs wherever bacteria exist, be it in the experimental fluid, or in the solid animal or vegetable refuse where they are ordinarily found. It constitutes, in fact, rotting or putrefaction. The processes of decomposition of organic substances coming under the head of putrefaction are entirely the effect of bacterial life. Any influence, like heat, which kills the bacteria, arrests the putrefaction, and the latter does not set in again until other living bacteria gain access to the substance in question. Without bacteria, no putrefaction can occur, though bacteria can exist without putrefaction, in case there is no substance on hand which they can decompose.

No error has retarded more the progress of the germ theory than the false belief that the bacteria of putrefaction are identical with the germs of disease. The most contradictory results were obtained in experiments made to demonstrate on animals either the poisonous nature, or, on the other hand, the harmlessness, of the fungi commonly found in rotting refuse. But real contradictions do not exist in science; they are only apparent, because the results in any opposite cases were not obtained under identical conditions. The explanation of the variable effects of common putrefaction-germs upon animals is self-evident as soon as we admit that each parasitic disease is due to a separate species of bacteria, characteristic of the disease, producing only this form and no other affection; while, on the other hand, the same disease can not be caused by any other but its special parasite. It can be affirmed, on the basis of decisive experiments, that the bacteria characteristic of various diseases float in the air, in many localities at least. Hence rotting material, teeming with bacterial life, may or may not contain disease-producing germs, according to whether the latter have settled upon it by accident or not. Even if these disease-producing species were as numerous in the dust as the common bacteria of putrefaction, which we do not know, they would be at a disadvantage, as far as their increase is concerned. For experience has shown that the germs of most diseases require a special soil for their growth, and can not live, like the agents of putrefaction, upon any organic refuse. In some cases, indeed, these microscopic parasites are so fastidious in their demands that they can not grow at all outside of the animal body which they are adapted to invade. Hence, if a decomposing fluid does contain them, they form at least a minority of the inhabitants, being crowded out by the more energetically growing forms. In the microscopic world there occurs as bitter a struggle for existence as is ever witnessed between the most highly organized beings. The species best adapted to the soil crowds out all its competitors.

Though the putrefaction bacteria, or, as Dumas calls them, the agents of corruption, are not identical with disease-producing germs, they are yet not harmless by themselves. Putrid fluids cause grave sickness when introduced into the blood of animals in any quantity. But this is not a bacterial disease proper; it is an instance of poisoning by certain substances produced by the life-agency of the bacteria while decomposing their soil. The latter themselves do not increase in the blood of the animal; they are killed in their struggle with the living animal cells. The putrefaction-bacteria need not be further present in the putrid solution to produce the poisonous effect on animals. They may be killed by boiling, if only the poisonous substances there formed remain.

In order to prove the bacterial origin of a disease two requirements are necessary: First, we must detect the characteristic bacteria in every case of that disease; secondly, we must reproduce a disease in other individuals by means of the isolated bacteria of that disease. Both these demonstrations may be very difficult. Some species of bacteria are so small and so transparent that they can not be easily, if at all, seen in the midst of animal tissues. This difficulty may be lessened by the use of staining agents, which color the bacteria differently from the animal cells. But it often requires long and tedious trials to find the right dye. The obstacles in the way of the second part of the proposition mentioned are no less appalling. Having found a suspected parasite in the blood or flesh of a patient, we can not accuse the parasite with certainty of being the cause of the disease, unless we can separate it entirely from the fluids and cells of the diseased body without depriving it of its virulence. In some cases it is not easy, if even possible, to cultivate the parasite outside of the body; in other instances it can be readily accomplished. Of course, all such attempts require scrupulous care to prevent contamination from other germs that might accidentally be introduced into the same soil. If we can now reproduce the original disease in other animals by infection with these isolated bacteria, the chain of evidence is complete beyond cavil and doubt. But this last step may not be the least difficult, as many diseases of mankind can not be transferred to animals, or only to some few species.

If we apply these rigid requirements, there are not many diseases of man whose bacterial origin is beyond doubt. As the most unequivocal instance, we can mention splenic fever, or anthrax, a disease of domestic animals, which sometimes attacks man, and is then known as malignant pustule. The existence of a parasite in this affection in the form of minute rods and its power of reproducing the disease are among the best-established facts in medicine. It is also known that these rods form seeds, or spores, as they are termed, in their interior, after the death of the patient, which germinate again in proper soil. These spores are the most durable and resisting objects known in animated nature. If kept in the state of spores they possess an absolute immortality; no temperature short of prolonged boiling can destroy them, while they can resist the action of most poisons, even corrosive acids, to a scarcely credible extent.

Another disease, of vastly greater importance to man, has lately been added to the list of scourges of unquestionable bacterial origin. I refer to tuberculosis, or consumption. It is true, this claim is based upon the work of but one investigator―Robert Koch. But whoever reads his original description must admit that no dart of criticism can assail his impenetrable position. Here also a rod-shaped bacillus, extremely minute and delicate, has been found the inevitable companion of the disease. With marvelous patience Koch has succeeded in getting the parasite to grow in pure blood, and freeing it from all associated matter.  It must have been a rare emotion that filled the soul of that indefatigable man, when he beheld for the first time, in its isolated state, the fell destroyer of over one eighth of all mankind! None of the animals experimented upon could withstand the concentrated virulence of the isolated parasite. This bacillus likewise produces spores of a persistent nature, which every consumptive patient spits broadcast into the world.

Relapsing fever is another disease of definitely proved origin. If we mention, furthermore, abscesses, the dependence of which on bacteria has lately been established, we have about exhausted the list of human afflictions about the cause of which there is no longer any doubt. Some diseases peculiar to lower animals belong also to this category. The classical researches of Pasteur have assigned the silkworm disease and chicken cholera to the same rank. Several forms of septicemia and pyemia have also been studied satisfactorily in animals. Indeed, the analogy between these and the kindred forms of blood-poisoning in man is so close that there can be no reasonable doubt as to the similarity of cause. This assumption, next door to certainty, applies equally to the fevers of childbirth. The experimental demonstration of the parasitic nature of leprosy, erysipelas, and diphtheria is not yet complete, though nearly so. Malarial fever also is claimed to belong to the category of known bacterial diseases, but the proofs do not seem as irreproachable to others as they do to their authors.

The entire class of contagious diseases of man can be suspected on just grounds of being of bacterial origin. All analogies, and not a few separate observations, are in favor of this view, while against it no valid argument can be adduced; but it must be admitted that the absolute proof is as yet wanting. Many diseases also, not known to be contagious, like pneumonia, rheumatism, and Bright’s disease, have been found associated with parasites, the role of which is yet uncertain. It is not sophistry to look forward to an application of the germ theory to all such diseases, if only for the reason that we know absolutely no other assignable cause, while the changes found in them resemble those known to be due to parasites. In the expectation of all who are not blinded by prejudice, the field is a vast one, which the germ theory is to cover some day, though progress can only continue if we accept nothing as proved until it is proved.

There can be little doubt that in many, perhaps in most instances, the disease-producing germs enter the body with the air we breathe. At any rate, the organism presents no other gate so accessible to germs as the lungs. Moreover, it has been shown that an air artificially impregnated with living germs can infect animals through the lungs. How far drinking water can be accused of causing sickness as the vehicle of parasites can not be stated with certainty. There is, as yet, very little evidence to the point, and what there is is ambiguous. Thus, exposed from all quarters to the attacks of these merciless invaders, it seems almost strange that we can resist their attacks to the extent that we do. In fact, one of the arguments used against the germ theory―a weak one, it is true―is, that, while it explains why some fall victims to the germs, it does not explain why all others do not share their fate. If all of us are threatened alike by the invisible enemies in the air we breathe, how is it that so many escape? If we expose a hundred flasks of meat-broth to the same atmosphere, they will all become tainted alike, and in the same time. But the animal body is not a dead soil in which bacteria can vegetate without disturbance. Though our blood and juices are the most perfect food the parasites require, though the animal temperature gives them the best conditions of life, they must still struggle for their existence with the cells of the animal body. We do not know yet in what way our tissues defend themselves, but that they do resist, and often successfully, is an inevitable conclusion. We can show this resistance experimentally in some cases. The ordinary putrefaction-bacteria can thrive excellently in dead blood, but if injected into the living blood-vessels they speedily perish. Disease-producing germs, however, are better adapted to the conditions they meet within the body they invade, and hence they can the longer battle with their host, even though they succumb in the end.

The resistance or want of resistance which the body opposes to its invaders is medically referred to as the predisposition to the disease. What the real conditions of this predisposition are, we do not know. Experience has simply shown that different individuals have not an equal power to cope with the parasites. Here, as throughout all nature, the battle ends with the survival of the fittest. The invaders, if they gain a foothold at all, soon secure an advantage by reason of their terrific rate of increase. In some instances they carry on the war by producing poisonous substances, in others they rob the animal cells of food and oxygen. If the organism can withstand these assaults, can keep up its nutrition during the long siege, can ultimately destroy its assailants, it wins the battle. Fortunately for us, victory for once means victory forever, at least in many cases. Most contagious diseases attack an individual but once in his lifetime. The nature of this lucky immunity is unknown. The popular notion, that the disease has taken an alleged “poison” out of the body, has just as little substantial basis as the contrary assumption that the parasites have left in the body a substance destructive to themselves. It is not likely, indeed, that an explanation will ever be given on a purely chemical basis, but in what way the cells have been altered so as to baffle their assailants in a second attempt at invasion is as yet a matter of speculation. Unfortunately for us, there are other diseases of probable bacterial origin, which do not protect against, but directly invite, future attacks.

A question now much agitated is whether each kind of disease germ amounts to a distinct and separate species, or whether the parasite of one disease can be so changed as to produce other affections as well. When investigations on bacteria were first begun, it was taken for granted that all bacterial forms, yeast cells, and mold fungus, were but different stages of one and the same plant. This view has long since been recognized as false. But even yet some botanists claim that all bacteria are but one species, appearing under different forms according to their surroundings, and that these forms are mutually convertible. The question is a difficult one to answer, since bacteria of widely differing powers may resemble each other in form. Hence, if a species cultivated in a flask be contaminated by other germs accidentally introduced, which is very likely to happen, the gravest errors may arise. But the more our methods gain in precision, and the more positive our experience becomes, the more do we drift toward the view that each variety of bacteria represents a species as distinct and characteristic as the separate species among the higher animals. From a medical standpoint this view, indeed, is the only acceptable one.

A disease remains the same in essence, no matter whom it attacks or what its severity be in the individual case. Each contagious disease breeds only its own kind, and no other. When we experiment with an isolated disease-producing germ, it causes, always, one and the same affection, if it takes hold at all.

But evidence is beginning to accumulate that, though we can not change one species into another, we can modify the power and activity, in short, the virulence, of parasites. Pasteur has shown that when the bacteria of chicken cholera are kept in an open vessel, exposed to the air for many months, their power to struggle with the animal cells is gradually enfeebled. Taken at any stage during their decline of virulence, and placed in a fresh soil in which they can grow, be it in the body of an animal or outside, they multiply as before. But the new breed has only the modified virulence of its parents, and transmits the same to its progeny. Though the form of the parasite has been unaltered, its physiological activity has been modified: it produces no longer the fatal form of chicken cholera, but only a light attack, from which the animal recovers. By further enfeeblement of the parasite, the disease it gives to its host can be reduced in severity to almost any extent. These mild attacks, however, protect the animal against repetitions. By passing through the modified disease, the chicken obtains immunity from the fatal form. In the words of Pasteur, the parasite can be transformed into a “vaccine virus” by cultivation under conditions which enfeeble its power. The splendid view is thus opened to us of vaccinating, some day, against all diseases―in which one attack grants immunity against another. Pasteur has succeeded in the same way in another disease of much greater importance, namely, splenic fever. The parasite of this affection has also been modified by him, by special modes of cultivation, so as to produce a mild attack, protecting against the graver form of the disease. Pasteur’s own accounts of his results in vaccinating the stock on French farms against anthrax are dazzling. But a repetition of his experiments in other countries, by his own assistants, has been less conclusive. In Hungary the immunity obtained by vaccination was not absolute, while the protective vaccination itself destroyed some fourteen percent of the herds.

Yet, though much of the enthusiasm generated by Pasteur’s researches may proceed further than the facts warrant, he has at least opened a new path which promises to lead to results of the highest importance to mankind.

The ideal treatment of any parasitic disease would be to administer drugs which have a specific destructive influence upon the parasites, but spare their host, i.e., the cells of the animal body. But no substance of such virtue is known to us. All so-called antiseptics, i.e., chemicals arresting bacterial life, injure the body as much as if not more than the bacteria. For the latter of all living beings are characterized by their resistance to poisons. Some attempts, indeed, have been made to cure bacterial (if not all) diseases by the internal use of carbolic acid, but they display such innocent naivete as not to merit serious consideration. More promising than this search after a new philosopher’s stone is the hope of arresting bacterial invasion of the human body by rendering the conditions unsuitable for the development of the germs, and thus affording the organism a better chance to struggle with them. Let me illustrate this by an instance described by Pasteur. The chicken is almost proof against splenic fever. This protection Pasteur attributes to the high normal temperature of that animal, viz., 42° Cent. At that degree of warmth the anthrax-bacillus can yet develop, but it is enfeebled. The cells of the bird’s body, thriving best at their own temperature, can hence overcome the enfeebled invader. Reduction of the animal’s temperature, however, by means of cold baths, makes it succumb to the disease, though recovery will occur if the normal temperature be restored in due time. In the treatment of human diseases, we have not yet realized any practice of that nature, but research in that direction is steadily continuing.

The most direct outcome of the germ theory, as far as immediate benefits are concerned, is our ability to act more intelligently in limiting the spread of contagious diseases. Knowing the nature of the poison emanated by such patients, and studying the mode of its distribution through nature, we can prevent it from reaching others, and thus spare them the personal struggle with the parasite. In no instance has the benefit derived from a knowledge of the germ theory been more brilliantly exemplified than in the principles of antiseptic surgery inaugurated by Lister. This benefactor of mankind recognized that the great disturbing influence in the healing of wounds is the admission of germs. It had been well known, prior to his day, that wounds heal kindly if undisturbed, and that the fever and other dangers to life are an accidental, not an inevitable, consequence of wounds. But Lister was the first to point out that these accidents were due to the entrance of germs into the wound, and that this dangerous complication could be prevented. By excluding the parasites from the wound, the surgeon spares his patient the unnecessary and risky struggle, giving the wound the chance to heal in the most perfect manner. Only he who has compared the uncertainty of the surgery prior to the antiseptic period, and the misery it was incompetent to prevent, with the ideal results of the modern surgeon, can appreciate what the world owes to Mr. Lister. The amount of suffering avoided and the number of lives annually saved by antiseptic surgery constitute the first practical gain derived from the application of the germ theory in medicine.

The cover of Popular Science containing the September 1883 article on germ theory.

Some text has been edited to match contemporary standards and style.

From the archives: A 1930s adventure inside an active volcano https://www.popsci.com/science/volcano-explorers/ Tue, 03 May 2022 11:00:00 +0000 https://www.popsci.com/?p=439070
An illustration of explorers inside a volcano.
From "800 feet on a fireproof rope: Inside a flaming volcano" (Arpad Kirner, April 1933). Original caption: INSIDE THE BURNING CRATER. Here is our artist's conception of the interior of Stromboli and of the author wearing his steel armor to ward off flying rocks. Popular Science

In the April 1933 Popular Science issue, explorer Arpad Kirner recounted his descent into the mouth of the flaming Stromboli volcano.


To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Despite their tragic ends at Mount Unzen in Japan in 1991, prolific volcanologists Katia and Maurice Krafft were not the first to film hot lava at close range. In April 1933, Popular Science published the dramatic account of scientific explorer Arpad Kirner, who was lowered by asbestos rope 800 feet into the mouth of Stromboli off the coast of Sicily to film its rumbling, fuming vent-hole.

With a 2,000-year eruption streak and nearly constant fountains of lava, Stromboli is one of Earth’s most active volcanoes. The Lighthouse of the Mediterranean has been spewing bright bombs steadily since 1932 and has had several significant eruptions just this century. Its activity is so distinct that when other volcanoes similarly spout lava, they’re dubbed Strombolian.

While Kirner offers scientific insights into the peak’s guts—“timing the rhythm of the explosions” and “gathering samples of gases and minerals”—he’s not humble, embellishing his tale with narrative flair distinctive of its era. “Inside a Volcano” represents a bygone style of exploration and storytelling that made up for its lack of nitty-gritty science with entertainment value.

Today, it might be said that technology has taken some of the adventure out of volcanic exploration. While the most intense footage of eruptions still puts humans in harm’s way, drones fitted with gas sensors and sampling devices may soon offer an alternative. For some volcanologists, though, there’s nothing more exhilarating than sidling up to a lava flow.

“800 feet on a fireproof rope: Inside a flaming volcano” (Arpad Kirner, April 1933)

Dangling at the end of an asbestos rope, the intrepid author is seen, right, during his descent of 800 feet into the heart of the volcano, Stromboli. Below, a rock on a string was thrown over to get the crater’s depth before the descent was begun. At left, Arpad Kirner.

A slender white thread, a rope of asbestos, rose straight above my head to the edge of the cliff. Below me were boiling lava and billowing fumes. Dangling at the end of the rope, I was being lowered 800 feet into the mouth of an active volcano! 

A steel helmet protected my head from flying rocks. My suit, my shoes, my gloves, were all made of asbestos. Strapped to my back were oxygen tanks that enabled me to breathe amid the fumes. I was realizing a scientific adventure which I had planned for years. 

My friends thought I was crazy when I announced my intention to explore the crater of an active volcano, to descend the depths of its enormous pit, to photograph the infernal vent-hole while it fumed and grumbled, to go where explosions rapidly follow one another and where phenomena, still mysterious, constantly occur. 

None of those who had preceded me in volcanic studies had dared a descent into a crater in full activity. They had contented themselves with simple excursions to the mouth of Vesuvius or Etna during quiescent periods. If I succeeded in my plan, I knew I would witness phenomena unseen by anyone before. If I returned into the open air and sunlight after this trip into an inferno, I would bring back specimens, solid and gaseous, of unusual interest. So I determined to make the effort. 

My choice fell upon Stromboli, the volcanic cone rising from the Mediterranean north of Sicily. Why Stromboli? Because it is the only volcano in Europe of uninterrupted activity. Here I risked no dud. In its crater I was sure to find the spectacle I desired. 

For me, this volcano was an old acquaintance. I had studied it many times. I had scaled its slopes, approached its mouth and I knew that, from year to year, the shape of its summit underwent modification. To pick the most favorable spot for my descent, I visited it again. Then I prepared my equipment. All was ready! 

It was with the greatest difficulty that we hauled the equipment up the side of Stromboli, which rises sharply from the water without the slightest beach. At the spot previously selected, I prepared for the test. I was secured to the asbestos rope by means of a heavy leather belt similar to those used by mountain climbers. Control of my descent was handled from the top by means of a windlass set up several yards from the edge of the crater. To prevent the rope from being worn away by scraping against the rocks, a pulley was placed at the crater’s edge. 

Several friends, and some of the island natives chosen for their strength, had accompanied me and worked the windlass to which my rope was attached. As a means of signaling them after my entry into the crater, I carried an electric hand lamp. Wires running down the asbestos rope supplied the current for the powerful little light. 

I realized clearly the danger confronting me as I slipped over the edge of the crater and was lowered slowly into space. I knew my return was problematical. My precautions might prove insufficient. My heart and lungs might not stand the strain of the gases and the terrific heat. Suspended in space, I knew not where I was going nor where I would set down my feet. What awaited me at the end of my descent? Solid rock? Boiling lava? A sheer, slippery ledge with fire below? I could not tell. 

As I sank into the pit, I studied the walls of the crater, black, red, yellow, pierced with holes from which sulphurous vapors poured. I saw beneath me immense openings veiled in smoke. When I raised my eyes, I estimated the distance I had descended and asked myself: 

“Will the rope stand the strain? Can they ever pull me up again?” 

Suddenly, the descent was over. I landed on a ledge 800 feet below the top of the crater. The rock was extremely hot, but firm. I could stand up. I measured the temperature of the rock and found that in some places it was as much as 212 degrees Fahrenheit. The air around me had a temperature of 150 degrees and was saturated with poisonous sulphurous vapors. Thanks to my oxygen outfit, I was able to breathe and so began a tour of the crater bottom. 

Casting off my rope, I set out for the real openings of the volcano—immense vertical pits from ten to thirty feet in diameter. At intervals, with formidable explosions, these mouths threw forth jets of lava. The pits, however, slanted in such a way that the lava always descended on one side. By timing the explosions, I was able to race to the mouths and, in some cases, actually lean over them, between eruptions, looking perpendicularly into the interior as one looks down a well.

What did I see there? Beyond a screen of smoke and strangely-colored vapors, I saw an incandescent sea of liquid lava, agitated, boiling, shaken with convulsions. 

As I watched, this molten sea welled up. The mysterious force which moves it was about to eject it violently. The time had come for the explorer to flee from his post of observation. Scarcely seconds passed before the explosion came, the orifice spewing forth its jet of lava, hurling it hundreds of feet into the air. Great flaming masses fell back into the crater. The rest, thrown farther, rolled and bounded down the flanks of the mountain and plunged into the sea with a hissing of steam. 

Three hours passed while I pursued my explorations, timing the rhythm of the explosions, and gathering samples of gases and minerals, studying the unforgettable sights around me and snapping pictures with my camera. 

Sensing exhaustion near, I gave my friends the prearranged signal with the hand lamp to haul me out. The ascent was painful beyond words. My will, stretched to the breaking point, deserted me. The oxygen reserve was exhausted and I was forced to breathe air charged with the sulphurous fumes. As I was dragged over the crater’s edge into fresh air, my over-taxed lungs gave way and I suffered a severe hemorrhage.

When I recovered, I felt infinitely calm. After so much effort, so much nervous strain, I was happy that I had succeeded in an enterprise thought impossible by everyone. 

Some time later, accompanied by my friend Paul Muster, I had another thrilling adventure on the flank of this same volcano. On one side is a slope, a gigantic inclined plane of cinders more than half a mile wide, known as the “Sciara del Fuoco.” Down it, rocks and slag and enormous blocks of lava roll and bound toward the sea.

No one approaches this slope. Ships that circle the island keep at a safe distance. Nevertheless, Muster and I prepared to make the ascent with motion picture cameras. For the purpose, I had prepared two suits of sheet-steel armor. They would not, of course, protect us from the great blocks of lava, but they would shield us from the small rocks which often fell in showers. 

We began the climb. After hours of painful effort, we reached a spot where we could set up our cameras to take pictures of the rocks being hurled from the fiery crater. 

With our films exhausted, we prepared to descend the slope again. An immense block of lava set deep in the cinders, some distance from the top, gave us temporary shelter. 

Then Muster observed a black rock, fifty feet away, that interested him.

Leaving our shelter, he lay flat on his stomach and wriggled toward the immense cinder. As I was watching his slow advance, admiring his courage, I heard a great clamor rising from the edge of the sea. I swung about. Our friends at the foot of the mountain were crying with terror and motioning toward the crater. I looked up just in time to see a gigantic rock detach itself, describe an immense arc through the air, strike the cinders, throwing them up like an explosion, and bounce again into the air. Horrified, I saw it was headed straight for us. 

It fell again and again. Then, with an infernal sound, it roared forty feet over our heads. The rush of air threw us down.

Hardly had we time to take breath when new trouble assailed us. Stirred by the successive shocks, the bed of cinders, slag, and stones covering the flank of the volcano was beginning to move. Great masses detached themselves, and came sliding toward us. 

Without consulting each other, Muster and I instantly arrived at the same idea. With a single motion, we divested ourselves of our armor, which we allowed to roll down the slope. Then, abandoning ourselves to the laws of gravity, we followed in their wake.

How long that helter-skelter, breakneck slide continued I do not know. By some miracle we neither broke our backs nor fractured our skulls. Torn by jagged cinders and covered with blood, we reached the foot of the volcano. Here our friends took us in hand, dressed our wounds, and congratulated us upon our escape.

April 1933 cover of Popular Science.

Some text has been edited to match contemporary standards and style.



From the archives: When superconductors finally grew up https://www.popsci.com/technology/superconductors-emerging-technology/ Mon, 02 May 2022 15:00:00 +0000 https://www.popsci.com/?p=438126
The Frigid ‘Perpetual Motion’ Machines of Tomorrow by W. Stevenson Bacon appeared in the March 1967 issue of Popular Science.
The Frigid ‘Perpetual Motion’ Machines of Tomorrow by W. Stevenson Bacon appeared in the March 1967 issue. Popular Science

In 1967, half a century after the discovery of superconductive metal, Popular Science covered the emerging field and its potential futures.



To mark our 150th year, we’re revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation—with an added hint of modern context. Explore the entire From the Archives series and check out all our anniversary coverage here.

Before physicists began to grok the laws of thermodynamics in the mid-1800s, inventors, lured by the idea of perpetual motion, sought to exploit the movement of heat. Alongside earnest innovators, hucksters filled the scientific void. Such was the case of Charles Redheffer, a self-proclaimed inventor who posted up in Philadelphia and New York City in 1812 to sell tickets to peep his infinitely moving machine, later revealed to be operated by an old man turning a crank in a hidden loft.

Even after thermodynamics exposed all the frauds, the notion of free work refused to die. In 1911, Heike Kamerlingh Onnes, a Dutch physicist, discovered that electricity would flow indefinitely through mercury chilled to −452° Fahrenheit, near absolute zero. Perpetual motion was back! This time cloaked in superconductive metal.

It would take another 50-plus years for engineering to make any real progress with such materials, as Popular Science reported in March 1967. And yet another half century on, the quantum mechanisms that let electrons flow without resistance are still being worked out, and the hunt for the perfect superconductive substance (i.e., one that works at room temperature) continues.

“The frigid ‘perpetual motion’ machines of tomorrow” (W. Stevenson Bacon, March 1967)

Far down on the temperature scale near absolute zero (−459°F) lies a strange world of “electrical perpetual motion”—or ­superconductivity—where electric currents, once set in motion, flow forever. With new developments in materials and the methods for cooling them, truly fantastic devices are taking shape in laboratories across the country:

• Superconductive motors that operate with greater efficiency than any rotating machine ever built (the energy used to refrigerate them notwithstanding)—because of both resistance-free windings and frictionless superconductive bearings.

• Superconductive generators that put out more power with less weight and volume than anything yet known.

• Superconductive bearings and gyroscopes that “float” in vacuums or liquid helium.

• “Fast-thinking” computer logic elements known as cryotrons. The newest of these from IBM, never before revealed, is based on a phenomenon called electron tunneling, and switches in less than a billionth of a second.

• Tiny threadlike wires 1/100 inch in diameter, made of exotic materials, that carry currents of 300 amperes—without resistance, without heating. A conventional room-temperature conductor would have to be 600 times larger.

• Direct-current transformers, thought to be impossible before supercold techniques.

• Devices known as “flux pumps” that convert small voltages, currents, and magnetic fields to large ones.

• Superconductive magnets and solenoids, tiny in relation to comparable electromagnets, which form fields many times stronger than that of earth and operate forever, given a jolt of starting current. They are the first of the new “perpetual motion” machines to come of age, and one manufacturer (RCA) now makes them on an assembly-line basis.

Seeking the “impossible”

The search for electrical perpetual motion spans 50 years. It is a fascinating story, one full of accidental discoveries, years of frustration, and then the slow, gradual uncovering of new clues that have today brought us to the threshold of an exciting new technology.

Normally, metals have resistance to the flow of electricity, and much of the energy fed into a wire is wasted as heat. Why?

The atoms of copper, for example, are bound together to form molecules, and the molecules to form a highly ordered three-dimensional grillwork or lattice. There are plenty of “free” conduction electrons that can move through the lattice carrying an electrical current. Unfortunately, at any temperature above absolute zero, heat energy causes a great deal of disorder.

The lattice structure is in a constant state of vibration, and it scatters the electrons, generating even more heat, more agitation, and more resistance to the flow of current. Around the turn of the [20th] century, Dutch physicist Kamerlingh Onnes determined to find out how much the resistance of a metal could be reduced by extremely low temperatures. He was able to do so, for he was the first to succeed in liquefying helium. At its incredibly low boiling point of −452°F (4.2 Kelvin), it offered the first practical way to cool a metal down close to absolute zero.

Working with purified mercury, Onnes measured its resistance as the temperature fell. At first, things went as predicted. Then, suddenly, inexplicably, at a temperature of 4.15 Kelvin, the resistance disappeared altogether. Once set flowing in the mercury, a current would flow forever. Dumbfounded, Onnes realized that he had stumbled onto an entirely new state of matter, one in which a kind of perpetual motion or superconductivity was possible.

It remained for German physicist Walther Meissner in 1933, 22 years later, to discover another astonishing fact. Pure superconductors, placed in a magnetic field, force out the magnetic flux. A few of the possibilities: frictionless superconducting bearings that float in a magnetic field, error-free gyroscopes—even a transit train that floats suspended above its superconducting rails by virtue of its magnetic field has been proposed.

Solving the riddle

What was superconductivity and how could it be used? The puzzle vexed scientists for 50 years. The bait—fabulously efficient ways of transmitting and using electricity—was tempting, but the problems were many. Onnes quickly discovered that his superconductors, notably lead wire, had severe limitations. He tried to build a magnet only to find that the lead ceased being superconducting in a magnetic field. A strong flow of current had the same effect.

Theory didn’t help much. It’s easy to understand why resistance gets less as temperature drops. Take away heat and you lessen lattice vibrations and electron scattering. But complete absence of any resistance is something else. To make things worse, superconductivity occurs at temperatures well above absolute zero—at above 18°K in recently discovered compounds.

Then, in 1957, the first workable theory of superconductivity was evolved by three brilliant scientists: J. Bardeen, L.N. Cooper, and J.R. Schrieffer.

Although electrons are of like charge and normally repel each other, in the frigid world close to absolute zero an unprecedented phenomenon called “electron pairing” occurs. Subjected to intense cold, they literally condense—like drops of water on a cold surface—down to a lower energy or quantum level. At this level, tiny attractive forces occur between electrons of opposite spins and equal and opposite momenta. They interact with each other and with the lattice, exchanging with it phonons (quanta of vibrational energy), much like two tuning forks of the same frequency mounted close to each other on the same base. And the electron pairs interact with other pairs in the superconductor in wavelike fashion.

What keeps the electrons from colliding with the lattice and giving up their energy as heat? The answer lies in quantum mechanics, said the scientists. A certain binding force holds the electrons together, reducing their potential energy. If one electron of a pair should be scattered, its potential energy would take a quantum jump upward, more than making up for its loss in velocity.

In other words, it is impossible for the electrons—at their low energy level—to give up energy to the lattice by colliding with it; they only gain energy. Free from energy losses, the electrons become “frictionless” perpetual-motion carriers of any current impressed on them.

The last barrier

With a workable theory, the stage was set for the first of the “perpetual motion” machines. The problem remaining: materials that would take an intense magnetic field and stay superconducting.

Then, in a breakthrough comparable to the discovery of superconductivity itself, J.E. Kunzler of Bell Telephone Laboratories in 1961 found that certain superconducting alloys—combinations of niobium-tin, vanadium-silicon, vanadium-gallium, molybdenum-rhenium, and niobium-zirconium—would withstand magnetic fields as high as 100,000 gauss, 200,000 times as strong as that of the Earth!

The new superconductors were labeled “hard” in contrast to the pure-element superconductors (lead, tantalum, mercury, tin, aluminum, for example), which are ductile and soft. The hard alloys are also known as Type II or filamentary superconductors—a name that hints at why they work.

In contrast to the pure superconductors, the new alloys permit magnetic flux to enter, turning certain areas of the wire normal. Supercurrents continue to flow, however, in tiny, threadlike filaments throughout the wire—because of the very impure composition of the wire itself.

Magnets and machines

Superconductive magnets are often nothing more than a small coil suspended in a gleaming stainless-steel Dewar (insulated container) of liquid helium. Yet their fields compare with those of conventional electromagnets that require the entire output of a small power plant and thousands of gallons of cooling water.

What happens when you scale up a superconductive magnet? I saw the world’s largest at Avco in Boston. Under a 40-foot tower, supported by nonmagnetic aluminum beams at one side of a huge laboratory, sits an enormous Dewar that holds 6,000 liters ($24,000 worth) of liquid helium. For testing, the 10-foot, eight-ton magnet is slowly lowered into the Dewar and helium is added. Its windings, nine strands of niobium-zirconium wire, are embedded in a copper strip to keep the superconductor from developing hot spots and going normal. Thick aluminum cylinders support each layer of windings—to keep the immense forces within the magnet from bursting it with explosive violence.

A DC generator (superconductors do rapidly develop resistance to high AC currents) hums in the background as the magnet begins to charge. The process will take 25 minutes and when it is complete the magnet will hold energy of 5 million joules—equal to 9½ sticks of dynamite.
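For a sense of where a figure like that comes from: a magnet’s stored energy follows from its inductance and charging current. A rough illustration—the inductance and current below are hypothetical values chosen only to be consistent with the 5-million-joule figure the article quotes:

$$E = \tfrac{1}{2}LI^2 = \tfrac{1}{2}\times 10\ \mathrm{H}\times(1{,}000\ \mathrm{A})^2 = 5\times10^{6}\ \mathrm{J}$$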

Five million joules—for what?

I asked my host, Dr. Z.J.J. Stekly, what such huge magnets could be used for.

“MHD power generators are one possibility,” he told me. “Avco has already built a prototype that generates electricity from ionized hot gases passing through the magnetic field. Among other applications are magnets for accelerators, bubble chambers, and other research devices.

“They may be used to create magnetic ‘bottles’ for containing the plasma in generating power from thermonuclear explosions. It has even been suggested that the magnets be used to shield spaceships from the deadly radiation emitted by the sun.

“Avco is studying superconductivity for ship propulsion. A large superconductive electric motor may prove economical.”

How close are we to superconductive transmission lines—non-resistive lines saving millions of watts of power? Dr. John K. Hulm of Westinghouse expresses cautious optimism. “They’re close to being practical,” he told me. “In the near future we’ll reach the point where the economics will be such that we’ll build them.”

Dr. Hulm is in the forefront of researchers working to extend the top temperature at which superconductors operate, currently 18°K.

“We’ll find materials that exhibit superconductivity in the 20s,” he told me. “And then we’ll be able to use inexpensive hydrogen that boils at 20 degrees for cooling. Insulation will be simpler, cheaper. Who knows what we’ll discover—with superconductivity we’re at the stage where Faraday or Tesla were with electricity.”


March 1967 cover stories: driving at night, fixing car dents, traveling the world’s race tracks, and making sense of UFOs.

Some text has been edited to match contemporary standards and style.



Can we make ourselves more empathetic? 100 years of research still has psychologists stumped. https://www.popsci.com/story/science/popsci-archive-empathy-machine/ Tue, 06 Apr 2021 12:04:03 +0000 https://www.popsci.com/story/?p=281226
popsci-1921-emotion-machine
PopSci author P.J. Risdon subjects himself to an emotion-tracking experiment in 1921 London. Popular Science

Polygraphs and other emotional sensors are still imperfect after a century of practice.



P. J. Risdon slides into an armchair in the University of London’s physiological laboratory, where a small army of vacuum tubes surrounds the room, hot coils humming and glowing in the shadows. “Compose yourself and smoke,” the doctor tells Risdon.

A lab assistant moves towards him holding an aluminum disk tethered to a wire. Risdon notices the assistant’s hands are badly scarred. The doctor’s doing, the assistant confirms with an affirmative nod. He dips a swatch of blotting paper in saltwater and places it on Risdon’s left hand. The wires connect to a sprawling tabletop apparatus centered around a galvanometer—a machine to detect current. 

Electricity surges through the electrodes, and a bead of light jumps across a graduated scale on the meter. The scale indicates the strength of the current passing through Risdon’s hand, establishing a baseline for his conductive activity. Without warning, the doctor grabs a pin about four inches long and lunges at Risdon’s right hand but pulls back just before jabbing it. Risdon recoils. The white light spikes. “The passage of an electric current varies according to the emotional condition of a subject,” the doctor explains. The more aroused, the higher the current.

The man behind the pin is the director of the University of London’s physiological laboratory, Augustus Desiré Waller, who’d recently invented the first practical electrocardiogram (Willem Einthoven later won the 1924 Nobel Prize for perfecting the ECG, also known as the EKG). Risdon subjected himself to these experiments voluntarily while reporting an article for the February 1921 issue of Popular Science (“such is the duty to one’s editor”) to see firsthand how Waller’s latest machine accomplished the feat of quantifying human emotions. In his account, he noted that the device was not sophisticated enough to tell the difference between “anger, sorrow, and fright,” but that it could measure their presence and intensity, even if the subject tried to conceal them.
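In modern terms, what Waller was doing amounts to skin-conductance analysis: fix a resting baseline, then flag readings that jump well past it. A minimal sketch of that logic—the readings, window size, and threshold below are all hypothetical:

```python
# A minimal sketch of the galvanometer logic: establish a resting
# baseline of skin conductance, then flag readings that spike well
# above it. All readings and the threshold are hypothetical.

def find_arousal_spikes(readings, baseline_n=10, threshold=4.0):
    """Return indices where conductance exceeds the resting baseline
    by more than `threshold` times its typical variation."""
    baseline = readings[:baseline_n]
    mean = sum(baseline) / len(baseline)
    std = (sum((r - mean) ** 2 for r in baseline) / len(baseline)) ** 0.5
    cutoff = mean + threshold * max(std, 1e-9)
    return [i for i, r in enumerate(readings) if r > cutoff]

# Hypothetical microsiemens trace: calm at first, then a startle
trace = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.1, 2.0, 2.1, 4.8, 5.1, 3.0]
print(find_arousal_spikes(trace))  # -> [10, 11, 12]
```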

Over the course of this experiment, Risdon would also suffer being startled by a loud horn, burned with a match, and threatened with a “red-hot poker.” He was even asked to “think of something (other than red-hot pokers) which had been a cause of worry and anxiety.” Naturally, Risdon “began to wonder what the editor would think of [his] adventure, and the bead of light traveled out of sight.” 

Waller’s unwieldy emotion-measuring apparatus marked the beginning of a wave of technological advances that, in 2021, intersects with a growing push to encourage empathy. “I can learn about emotions by using my sensors as a lens,” says Elliott Hedman, founder of mPath, a design consulting firm that applies biosensors in classrooms, shopping centers, and other real-world settings to measure everything from student engagement to consumer reactions. “But what the sensors are better at doing is communicating people’s emotions to other people.” By quantifying emotions, his sensors encourage empathy.

A century after Waller’s first emotion-tracking attempts, we find ourselves faced with a confluence of crises—a novel pandemic, social unrest, economic distress, widespread disinformation, and climate devastation—whose successful resolutions clamor for us to set aside our differences, gather our resources, and work together. Accomplishing that, however, requires something that’s been increasingly hard to come by: an ability to share one another’s emotions, feel one another’s “anger, sorrow, and fright,” and act on one another’s behalf, especially in times of duress.

A few years after Risdon’s encounter with Waller, American inventor Leonarde Keeler first developed what he called an Emotograph, a machine designed to detect deception. By 1935 the Keeler Polygraph, which monitored blood pressure, pulse, and respiratory rate, secured its legacy in a criminal court case in Portage, Wisconsin, the first time the results were used to obtain a conviction. By the 1950s, it had been enhanced to incorporate skin conductance, a phenomenon in which the dermal layer becomes a better conductor of electricity whenever external or internal stimuli trigger physiological arousal. (Chalk one up for Risdon.)

The polygraph represents perhaps the most widely used application of biosensors designed to detect an emotional state, specifically deception. But its track record has been controversial. According to a 2003 study conducted by the National Academy of Sciences and the National Research Council, “overall, the evidence is scanty and scientifically weak” to support the use of polygraph tests for “security uses.” They based their conclusion on the findings that “the physiological responses measured by the polygraph are not uniquely related to deception.”

Hedman agrees—mostly. “Lie detectors do not detect lies,” he explains, “that is a misnomer, but that doesn’t mean they don’t work.” Emotions are complex, making it nearly impossible for devices like a polygraph to isolate a discrete emotion. Rather, they detect physiological arousal, or what Hedman calls “your reptilian response.”

Today’s emotion sensors—made by companies like mPath, Empatica, and Emotiv—measure everything a traditional polygraph did and more: sweat gland output, body movement, speech patterns, facial expressions, and neurological activity. Plus, they’ve been packed into wearables like wristbands, gloves, glasses, headbands, and jewelry. Companies often apply machine learning to interpret the data and make predictions. Empatica probes the feelings of autistic children, mPath quantifies student engagement, and Emotiv measures employee levels of stress. 

Yet despite their compelling emotional insights, the gadgets remain imprecise. The problem is that physiological arousal alone is not a clear-cut indicator. Anger will elevate heart rate but so will fear. When monitoring someone remotely, how do researchers tell the difference? “Context matters,” Hedman says. “Video plus skin conductance tells a much deeper story.” In fact, emotion-sensing researchers rely on multiple inputs in an approach called emototyping. For his work, Hedman employs a combination of video, eye-tracking glasses, skin conductance—and people.

But if humans are still needed to establish context, then what value does an array of biosensors really add when it comes to measuring emotions? It would seem that even after a century nothing beats the uniquely human ability to determine the mental states of others based on subtle biological cues and their context. Daniel Goleman’s 1995 book, Emotional Intelligence, promoted the decades-old notion that people who readily discern between emotions have a heightened sense of empathy. That enables them to connect with the experiences of others, making them better partners, parents, coworkers, leaders, and friends. 

Through his research, Hedman has demonstrated that when he shows a person’s skin conductance results to other people, they are seven times more likely to believe that the person is having a strong emotional reaction and to empathize with that person. 

“If you see the data of someone else’s stress, it is so much more potent. You really believe it when you put emotions into a quantitative measurement. It actually creates a sense of empathy in people.” But does Hedman believe there’s an opportunity to deploy emotion sensors widely, at a level that our shared crises demand? “These sensors,” he admits, “are bumping into a culture that isn’t really ready to have emotions first and foremost at every piece of the conversation.”

When Risdon eased himself into Waller’s laboratory chair in 1921, he could not have foreseen how the unwieldy apparatus sitting on the table beside him would help inspire an entire emotion-sensing industry a century later. Nor could he have guessed that the most compelling case of all for emotion-sensing devices might be to heighten our collective sense of empathy at a time when it seems in short supply.



How science came to rely on the humble lab rat https://www.popsci.com/science/lab-rat-origins/ Tue, 19 Apr 2022 17:00:00 +0000 https://www.popsci.com/?p=437940
A collage of an article from Popular Science May 1927 issue about the origins of lab rats.
The article as it appeared in the May 1927 issue of Popular Science.

Almost 100 years ago Popular Science reported on the rise of rat use in lab experiments.



Since Aristotle, scientists have vivisected, poked, and prodded live animals in pursuit of knowledge (Pavlov’s dog, anyone?), but at the turn of the 20th century, breeding gutter-dwelling creatures to understand our own physiology was becoming a necessity of experimentation. By the 1920s, lab rats were in such high demand that they powered an entire American industry. In fact, some of today’s popular rodent breeds—Jax mice being a favorite—can trace their roots to the Jazz Age.

Since the passage of the US Animal Welfare Act in 1966, the use of larger critters has steadily dropped—only 800,000 in 2019—but rats and mice continue to be on trend. The US uses more than 100 million of the rodents annually, many genetically engineered for laboratory perfection. This story, penned by H.C. Davis in our May 1927 issue, chronicles the emergence of their furry white ancestors. 

“Rats that go to college” (H.C. Davis, May 1927)

The rat, ancient enemy of humankind, now has been sent to college as the friend of humankind. While the vicious alley rat is hunted and destroyed as a carrier of disease, his favored cousin, the white rat, is being pampered and educated by science in remarkable experiments calculated to make the human race healthier, happier, and wiser.

At Stanford University in California, 500 white rats—carefully bred, fed, and housed—recently have undergone intelligence tests which may lead to valuable discoveries about our mental processes. And in the Crocker Laboratory of Columbia University, some 9,000 pedigreed members of the same rodent family are being studied to learn new secrets of heredity and to gain useful knowledge in combating disease. Indeed, scientific institutions throughout the world today are calling for these long-tailed creatures in such quantities that the raising of well-bred rats on a large scale has been established as an unusual American industry.

In Philadelphia, the Wistar Institute of Anatomy and Biochemistry maintains $60,000 worth of special equipment for rearing thousands of the rodents to serve mankind. From there they are shipped to laboratories in many parts of the world.

The chief reason the white rat has become the chosen friend of scientists is that in structure, growth, and bodily processes he resembles human beings. Therefore his reactions to physical and intelligence tests can be counted on, relatively, to throw light on our mental and physical machinery.

In the study of habit, for example, the Stanford experimenters, under the direction of professor Calvin P. Stone, have tested the ability of rats to acquire new habits and to break old ones. For this purpose ingenious devices are employed. One, called the “problem box,” is a screened enclosure from which a door leads to another box containing food. The only way the rat, imprisoned in the problem box, can reach the food is to step on a small platform at the side of the box. An electric current releases the door. Each rat under examination is put through this test once daily for 20 days, and a record is made of the time required to open the door. The records show the rate at which habit is formed. In addition, the test is repeated after a lapse of 50 days to determine ability to retain the habit.
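Those daily records translate directly into a learning curve. A minimal sketch of how the problem-box data might be summarized—every number below is invented for illustration:

```python
# A minimal sketch of how the Stanford "problem box" records might be
# summarized: daily escape latencies show the rate of habit formation,
# and a retest after a 50-day lapse measures retention.
# All numbers below are invented.

def habit_summary(daily_latencies, retest_latency):
    first, last = daily_latencies[0], daily_latencies[-1]
    improvement = (first - last) / first          # fraction of time shaved off
    # retention: how much of the learned gain survived the lapse
    retention = (first - retest_latency) / (first - last)
    return improvement, retention

# Hypothetical seconds-to-escape over 20 daily trials, then a retest
trials = [300, 240, 190, 150, 120, 95, 80, 66, 55, 47,
          40, 35, 31, 27, 25, 23, 21, 20, 19, 18]
gain, kept = habit_summary(trials, retest_latency=35)
print(f"latency cut by {gain:.0%}; {kept:.0%} of the gain retained")
```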

Another apparatus, “the maze,” consists of a labyrinth containing many blind alleys but only one direct path to the end, where food is placed. In repeated tests, the number of false moves, and the time required to thread the maze, measure ability to learn.

It has been found that the rat develops physically about 30 times as rapidly as a human. … The tests further indicate, according to Stone, that the rat’s mental development will prove to be 50 times as rapid as man’s.

In the study of heredity rats have proved most valuable. To observe four human generations would require the better part of a century. In two years, rats have told the same story, for the laws of heredity governing the rat family are fundamentally the same as those governing human life.

Recently laboratory rats have helped show how science can exterminate their plague-carrying dock and alley kin. A bacterial culture, known as “ratinin,” has been discovered that kills rats but does not harm humans or domestic animals. Placed on bait, it spreads an epidemic among the rodents.

Raised in spotlessly clean surroundings, his hours of sleeping, eating, and exercising as carefully regulated as a baby’s, the rat that goes to college is an aristocrat. He enters a university in the pink of condition for any test. “Preparatory schools” such as Wistar Institute graduate “standardized” rats, each one so like the rest in body and health that one testing laboratory can compare its results directly with another’s.

May 1927 cover story: The perils of toiling in steel-making pits. Image: Popular Science, 1927.

This text has been edited to match contemporary standards and style.



The Brilliant 10: The most innovative up-and-coming minds in science https://www.popsci.com/science/brilliant-scientists-2021/ Mon, 20 Sep 2021 14:00:00 +0000 https://www.popsci.com/?p=396912
a page full of illustrated badges and awards for the Brilliant 10 scientist series
Popular Science's Brilliant 10 is back for 2021. Katie Belloff

These US-based engineers, psychologists, chemists, and more are taking on society's biggest challenges across the world.



FRESH EYES can change the world, and a world stressed by a pandemic, climate change, and inequity is riper for change than any we have experienced before. That’s why, after a five-year break, Popular Science is bringing back the Brilliant 10: an annual roster of early-career scientists and engineers developing ingenious approaches to problems across a range of disciplines. To find those innovators, we embarked on a nationwide search, vetting hundreds of researchers from institutions of all stripes and sizes. These thinkers represent our best hopes for navigating the unprecedented challenges of tomorrow—and today.

Making future forecasts less hazy

a woman with long dark hair on a green background
Allison Wing, Assistant Professor of Meteorology, Florida State University. Nicole Rifkin

Allison Wing sees a hole in the world’s major climate models: The reports published by the Intergovernmental Panel on Climate Change factor in water vapor, but not the way it forms clouds—or, more specifically, the way they cluster in the skies. In fact, says the Florida State University meteorologist, these airborne puffs may be the biggest source of uncertainty in our environmental projections. Wing’s models and simulations could help predict how a hotter planet will reshape clouds and storms and whether these changes will, in turn, exacerbate global warming.

It’s already apparent that cloud patterns can produce distinct local effects. “When clouds are clumped together, rather than being randomly distributed,” Wing explains, “the atmosphere overall is drier and warmer, and there’s actually less cloud coverage overall. And that affects how radiative energy flows through our climate system.”

Wing’s findings, published in the Journal of Advances in Modeling Earth Systems in 2020, suggest that the nuances of cloud behavior may alter notions of what our climate future looks like and perhaps how fast we’ll reach it. “Not just how they’re clustering,” she says, “but everything about them.” She—together with a group of 40 international scientists she leads in running mathematical simulations of the atmosphere—wants to get a better grip on how factors like cloud density, height, and brightness could change as the planet warms. Zeroing in on those details may hone the accuracy of global warming projections.

In the here and now, Wing wants to answer questions about extreme weather events, such as what controls the number of hurricanes we have in a given year and why big storms are getting larger and wetter faster. Her work points at a sort of “cloud greenhouse effect” in which the infrared radiation given off as the sun warms the Earth gets trapped under nascent storms, which makes stronger tempests build more quickly. She hopes observational data from the Jet Propulsion Laboratory’s CloudSat research satellite, which she got access to as part of a 2021 NASA grant, will verify this phenomenon’s existence.

By simulating past hurricanes in vivid detail—a process involving so many variables that Wing runs them on the National Center for Atmospheric Research’s supercomputer in Wyoming—she hopes to render the re-creations more realistically over time. Eventually, though, she wants to tap NASA’s satellite imagery (aka the real world) to make potentially lifesaving predictions.

Turbocharging surgical pathology

a man with glasses on a pink background
Michael Giacomelli, Assistant Professor of Biomedical Engineering and Optics, University of Rochester. Nicole Rifkin

When it comes to speedy biopsy results, nothing beats Mohs surgery. To minimize scarring, pathologists analyze excised skin cancers on site to ensure all dangerous cells are gone. Other common cancer surgeries, such as those for the prostate and breast, still rely on lab work that takes days to confirm clear margins, which can mean repeat procedures are necessary. And it’s all very labor-intensive. Michael Giacomelli, a University of Rochester biomedical engineer, has a microscope that could put even Mohs surgery’s turnaround time to shame—spotting cancerous cells from a variety of tumors in near-real time.

The key is going small. The type of imager he’s built, a two-photon microscope, has been around for decades, but hefty price tags (often $500,000 or more) and sprawling form factors (components are often racked in a space the size of a utility closet) make most models impractical for operating rooms. The scopes spy sick cells with the help of lasers: Tumor cells have characteristically enlarged nuclei, due to their excess of DNA; when soaked in a specialized dye, the oversize organelles fluoresce under the laser light. “They’re able to reach into a wet, bloody, messy mass of tissue and look at what’s inside,” Giacomelli explains.

With a background in optics, he knew that smaller, lighter lasers were being used for welding and on factory floors. The key was to find dyes that operated at their wavelength and that wouldn’t ruin human tissue for more in-depth follow-up in a traditional lab. He identified one suitable hue in a derivative of the ink in pink highlighters. After years of trial and error, which began at MIT, and a few iterations, the laser that sets it alight weighs 5–25 pounds. Combined with a microscope, monitor, CPU, keyboard, and joystick, the system fits on a handcart compact enough to wheel between surgeries. The price tag: around $100,000.

With more than 100,000 breast cancer surgeries and millions of skin cancer procedures each year in the US, the impact could be profound. Since 2019, an earlier version of Giacomelli’s system (one the size of a washing machine) has been in a clinical trial for breast cancer patients at Beth Israel hospital in Boston. And a study on prostate cancer screening published in Modern Pathology found doctors could ID malicious cells just as well with the new system as with traditional methods. Next, Giacomelli wants to trial his new, sleeker setup on Mohs and other skin cancer surgeries. He’s also interested in getting his imaging equipment into rural clinics that don’t have tissue labs nearby for fast answers. And modifying his scope for 3D imaging, which could improve outcomes for complexly shaped cancers like melanoma, could also open doors: Looking at tumors in 2D limits our understanding of what’s going on, he says. “I really think 3D imaging is going to be huge for diagnosis.”

Untangling transgenerational trauma

a Black woman in a yellow blazer on a green background
Bianca Jones Marlin, Assistant Professor of Psychology and Neuroscience, Columbia University. Nicole Rifkin

Bianca Jones Marlin credits her siblings for inspiring her career. All 30-plus of them. That’s not a typo: Her folks took in dozens of foster kids. “My siblings have gone through things you wouldn’t even want to imagine,” she says. That’s why Marlin, a psychologist and neuroscientist at Columbia University who has now fostered children herself, studies a unique sliver of epigenetics, or the impact our environments and behaviors have on our genes. She documents how stress and trauma pass between generations, even when forebears have little or no contact with their descendants.

“The world changes your brain and your body—and also your offspring,” she says. “That has such strong implications for society, for the way we predict what’s going to happen in the future.” Communities that have endured famine, genocide, or any number of other struggles, she points out, may experience heightened anxiety and PTSD in later generations. Revealing the levers by which stress “travels to the future” could open pathways to therapy and prevention—breaking the chain of trauma.

Marlin began her work, which centers on brain development and learning, by identifying one of the mechanisms responsible for a seismic shift in social behavior. In 2015, she showed how the hormone oxytocin sensitizes mouse moms to their pups’ distress calls. And since then, she’s studied the effects of environmental stress and trauma in lab mice.

But how are those changes passed down? “That is the beautiful, essential question that we’re working on,” Marlin says. Until now, scientists have seen such effects only anecdotally: For example, an infamous famine in the Netherlands at the end of World War II increased health issues like diabetes, high blood pressure, and schizophrenia not only in those it affected, but also in their children, suggesting that reproductive cells could convey a memory of the trauma. Through her work on mice, Marlin has demonstrated how a learned behavior (associating the smell of almond with an electric shock) is tied to an increase in olfactory cells that respond to that scent in progeny. “We talk about it in culture,” she notes, “but because we don’t know the mechanism, it’s considered a myth.”

Marlin’s aware that her findings could be used to stigmatize groups of people—even harm them. “I would be disappointed if, 15 years from now, people were able to take the work that we have done and use that as a wall—assuming that because your ancestors went through this, you obviously are going to suffer from this too,” she says. Or worse, she continues, malicious actors could torture or terrorize with the explicit intention of harming future generations.

The positive ramifications are enough to keep her going. “If we can induce negative changes and dramatic changes, we also can induce positive,” Marlin says. “That’s the beauty of epigenetics. It’s not permanent.”

Dusting for digital fingerprints to find deepfakes

a man with glasses on a gray and pink background
Matthew Stamm, Associate Professor of Electrical and Computer Engineering, Drexel University. Nicole Rifkin

“It is impossible for a criminal to act, especially considering the intensity of a crime, without leaving traces of his presence,” wrote Edmond Locard, a 20th-century forensic science pioneer. It’s a quote Matthew Stamm frequently references. The Drexel University computer engineer isn’t after fingerprints or hair strands, however; his tools and techniques instead detect even the most subtle alterations to digital objects: deepfakes.

Since its earliest Reddit days in 2017, deepfaking has graduated from a revolting prank—using AI to put celebrity actors’ faces on porn stars’ bodies—to an alarming online threat that involves all sorts of synthetic multimedia. Since detection is even newer than the act itself, no one has a grasp on how widespread the phenomenon has become. Sensity, an Amsterdam-based security firm, reported that the examples spotted by its homegrown sniffer doubled in the first six months of 2020 alone. But that number is surely low, especially with the release of easy-to-use apps like MyHeritage, Avatarify, and Wombo, which have already been used to animate tens of millions of photos.

“The ability to rapidly produce visually convincing, completely fake media has outpaced our ability to handle it from a technological end. And importantly, from a social end,” notes Stamm. According to a 2021 Congressional Research Service report, the acts pose considerable national security threats. They can be used to spread false propaganda with the intent to blackmail elected officials, radicalize populations, influence elections, and even incite war.

The budding threat has prompted a growing number of companies and researchers—including biggies like Microsoft and Facebook—to develop software that sniffs out AI fakes. But Stamm, who’s funded by DARPA to build automatic deepfake detectors, notes that artificial intelligence is used to make only a small subset of the tampered media we have to worry about. People can use Adobe Photoshop to create so-called cheapfakes or dumbfakes without specialized talent or hardware. In 2019, videos of Nancy Pelosi were altered by slowing soundtracks to make her appear drunk, slurring her words. In 2020, chopped-up videos made then-candidate Joe Biden appear to fall asleep during an interview.

Stamm’s approach to image analysis can catch even simple manipulations, no matter how convincing. “Every processing element, every physical hardware device that’s involved in creating a piece of media, leaves behind a statistical trace,” he notes. He based his algorithms on a concept called forensic similarity, which spots and compares the digital “fingerprints” left behind in different regions. His software breaks images into tiny pieces and runs an analysis that compares every part of the photo with every other part to develop localized evidence of just about any kind of nefarious editing.
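A toy version of that tile-and-compare idea is sketched below. Stamm’s actual system learns its fingerprints with deep networks; here, crude noise statistics stand in for them, and the image, patch size, and threshold are all invented for illustration:

```python
# A toy illustration of forensic similarity: tile an image, give each
# tile a crude "fingerprint," and flag tiles whose fingerprints disagree
# with the rest. Simple noise statistics stand in for the learned
# features a real detector would use; all numbers are invented.
import numpy as np

def patch_fingerprints(gray, patch=32):
    h, w = gray.shape
    fps, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = gray[y:y + patch, x:x + patch].astype(float)
            resid = p - p.mean()   # residual ~ sensor/processing noise
            fps.append([resid.std(), np.abs(np.diff(resid, axis=1)).mean()])
            coords.append((y, x))
    return np.array(fps), coords

def flag_inconsistent(fps, coords, z=3.0):
    center = np.median(fps, axis=0)
    spread = fps.std(axis=0) + 1e-9
    scores = np.abs((fps - center) / spread).max(axis=1)
    return [c for c, s in zip(coords, scores) if s > z]

# Hypothetical: a smooth image with one pasted-in noisy region
rng = np.random.default_rng(0)
img = np.full((128, 128), 120.0) + rng.normal(0, 2, (128, 128))
img[64:96, 64:96] += rng.normal(0, 20, (32, 32))   # the "spliced" patch
fps, coords = patch_fingerprints(img)
print(flag_inconsistent(fps, coords))               # -> [(64, 64)]
```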

Stamm’s latest work focuses on emotional consistency, matching voice patterns (intensity and tone) with facial characterizations (expressions and movements) in video. Inspired by Stamm’s wife, a psychologist, the idea stems from the notion that it’s difficult for video manipulations to sustain emotional consistency over time, especially in voices, he says. These techniques are still in development, but they show promise.

Removing ‘forever chemicals’ from drinking water

a man with a beard wearing goggles on a yellow background
Frank Leibfarth, Assistant Professor of Chemistry, University of North Carolina Chapel Hill. Nicole Rifkin

The Cape Fear River in North Carolina feeds drinking water for much of the southeastern part of the state. But for decades the chemical giant DuPont fed something unsavory into the waterway: PFAS, or per- and polyfluoroalkyl substances, chains of tightly bonded carbon and fluorine with a well-earned rep as “forever chemicals.” A subset of them—PFOA and PFOS—can contribute to elevated cholesterol, thyroid disease, lowered immunity, and cancer. The Centers for Disease Control and Prevention has found them in the bloodstreams of nearly every American it’s screened since 1999. While DuPont (via a division now called Chemours) phased out production in 2013, the remnants of old formulations of household staples like Teflon, Scotchgard, and Gore-Tex linger.

Frank Leibfarth, a chemist at the University of North Carolina at Chapel Hill, has a filter that can remove these toxins—and he’s starting with the Tarheel State’s polluted waterways.

Leibfarth specializes in fluorinated polymers like PFAS. Before the NC Policy Collaboratory funded him to help with the state’s water pollution problem in 2018, he was focused on finding cheap and sustainable alternatives to single-use plastics, whose exteriors are sometimes hardened with fluorine. Leibfarth’s solution took its cue from diapers: “They’re super-absorbent polymers that suck up lots of water,” he says. He developed a fluorine-based resin that’s similar enough in structure to PFAS to attract the compounds and hold on to them. The material filters nearly all of these substances from water, and 100 percent of PFOA and PFOS, according to results his team published in the journal American Chemical Society Central Science in April 2020. The material is cheap and scalable, so municipal water treatment plants can deploy the filters as an additional cost-effective filtration step.

The North Carolina legislature is considering a series of PFAS-remediation bills in 2021, one of which would fund commercializing Leibfarth’s solution, including manufacturing the resin and fitting it to municipal filtration systems. Other locales will surely follow. According to the nonprofit Environmental Working Group, as of January 2021 there are more than 2,000 sites across the US with documented PFAS contamination. Seven states already enforce limits on the chemicals in their drinking water—with more to follow.

Amid all this, the Environmental Protection Agency in March 2021 identified another new PFAS exposure threat: the very same hardened plastic containers that Leibfarth’s initial work aims to make obsolete. “I want to change the field’s thinking,” he says, “about what is needed to develop materials that are both useful and sustainable at the same time.”

Powering electronics without batteries

a man in a pink shirt on a green background
Josiah Hester, Assistant Professor of Computer Science, Computer Engineering, and Electrical Engineering at Northwestern University. Nicole Rifkin

Our love of personal gadgets is causing a major pileup. Based on current trends, humanity’s battery-powered gizmos could number in the trillions by 2030. Josiah Hester, a computer engineer at Northwestern University, hopes to keep those power-hungry devices from overloading landfills with their potentially toxic power cells. His plan is simple and radical: Let these little computers harvest their own juice.

Hester’s team creates arrays of small, smart, battery-free electronics that grab ambient energy. His work is based on a concept known as intermittent computing, an approach that can deal with frequent interruptions to power and internet connectivity—in other words, devices that do their jobs without a constant hum from the grid.

His team assembles circuit boards that combine off-the-shelf processors from companies like Texas Instruments with sensors and circuitry to tap power sources like the sun, radio waves from the environment, thermal gradients, microbes, and impact forces. The team also writes the custom software to keep the sensors running. The most notable feature of these circuit boards? No batteries. Juice flows through capacitors when it’s available, and devices are designed to handle brief power-downs when it’s not.
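The essence of that pattern is checkpoint-and-resume. A minimal sketch, with a file standing in for nonvolatile memory and a coin flip standing in for the capacitor draining—a hypothetical illustration, not Hester’s actual firmware:

```python
# A minimal sketch of intermittent computing: save progress to
# nonvolatile storage whenever energy allows, resume from the last
# checkpoint after a power loss. The file and random brown-outs are
# stand-ins for real hardware; all numbers are invented.
import json, os, random

CHECKPOINT = "state.json"          # stand-in for FRAM/flash on a real board

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)    # resume from the last checkpoint
    return {"next_sample": 0, "total": 0.0}

def save_state(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)        # checkpoint survives a power loss

def read_sensor(i):
    return 20.0 + random.random() # hypothetical temperature reading

def run_burst(n_samples=100):
    """One burst of harvested energy: work until done or power fails."""
    state = load_state()
    while state["next_sample"] < n_samples:
        if random.random() < 0.1:  # capacitor drained: brown-out
            return False
        state["total"] += read_sensor(state["next_sample"])
        state["next_sample"] += 1
        save_state(state)          # checkpoint after every step
    return True

if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)          # start the demo from scratch
while not run_burst():             # each call = one burst of energy
    pass                           # power returns, work resumes
print(load_state()["total"])       # all 100 samples, despite the outages
```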

In 2020, Hester debuted his proof of concept: a handheld gaming device (ENGAGE) modeled after a classic Game Boy. Its power comes from small solar cells framing its screen and from the impacts of button presses, which generate electricity when a magnet drops through a coil. (Shakable Faraday flashlights work in a similar way.) The toy is no match for the energy-gobbling processors in most immersive platforms on the market, but it’s a harbinger of what’s to come. During the pandemic, Hester’s lab developed a “smart mask” prototype decked out with tiny sensors that check vital signs like temperature, heart rhythm, and respiratory rate—all powered by the vibrations from the user’s breaths.

Untethering devices from the electrical grid also makes them more practical for remote applications. Hester has several programs underway, including one to monitor wild rice habitats and avian flocks in the Kakagon Sloughs, a Great Lakes conservation area managed by the Ojibwa people. When the sensors, which harvest energy from soil microbes and sunshine, are deployed later this year, they’ll track water quality and the sounds of crop-ravaging waterfowl. He’s also working with the Nature Conservancy to set up noninvasive, solar-powered cameras on Palmyra Atoll, an island in the heart of the Pacific Ocean surrounded by more than 15,000 acres of coral reef. Once a weather station and monitoring site for nuclear testing, the spot is now perfectly stationed to track migrating birds and, perhaps eventually, the effects of climate change on marine species.

As Hester pushes the limits of intermittent computing to improve device sustainability, he’s guided by a philosophy he attributes to his Native Hawaiian upbringing. It boils down to a simple question: “How do you make decisions now that will have positive impacts seven generations in the future?”

Storing data in chemical soup

a woman with a brown ponytail on an orange background
Brenda Rubenstein, Assistant Professor of Chemistry, Brown University. Nicole Rifkin

According to a recent report, Earth only has enough permanent physical storage space to hold on to some 10 percent of the more than 64 billion terabytes of data humans generated in 2020. Luckily for us, not every meme and tweet needs to live forever. But given that our output has doubled since 2018, it’s reasonable to fear that crucial information like historical archives and precious family photos could find itself homeless in the near future. That’s the problem Brenda Rubenstein, a theoretical chemist at Brown University, hopes to solve. She wants to tap into evolution’s storage designs (read: molecules) to create a radical new type of hard drive—a liquid one. Her chemical computers use tiny dissolved molecules to crunch numbers and store information.

In 2020, she and her colleagues converted a cocktail of small amines, aldehydes, carboxylic acids, and isocyanides into a kind of binary code puree. “The way you can store information in that disordered mixture of molecules floating around is through their presence or absence,” Rubenstein notes. “If the molecule is there, that’s a one, if a molecule is not there, that’s a zero.” The method, published in Nature Communications, successfully stored and retrieved a scan of a painting by Picasso. In 2021, her team used a similar slurry to build a type of AI called a neural network capable of recognizing simple black-and-white images of animals, like kangaroos and starfish.
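
That scheme is simple enough to mock up in a few lines of code. In the Python sketch below, which illustrates only the presence-or-absence idea and not Rubenstein’s published chemistry (the molecule names are placeholders), each bit of a byte maps to one compound in a small library.

```python
# One molecule per bit position: present = 1, absent = 0.
LIBRARY = ["amine_1", "aldehyde_1", "acid_1", "isocyanide_1",
           "amine_2", "aldehyde_2", "acid_2", "isocyanide_2"]

def write_byte(value):
    # Keep only the molecules whose bit is set in this byte.
    return {m for i, m in enumerate(LIBRARY) if value & (1 << i)}

def read_byte(mixture):
    # Reading back is a presence test per molecule (in the lab,
    # a mass-spectrometry peak); order in the mixture is irrelevant.
    return sum(1 << i for i, m in enumerate(LIBRARY) if m in mixture)

for ch in "Hi":
    spot = write_byte(ord(ch))
    assert chr(read_byte(spot)) == ch
    print(f"{ch!r} stored as:", sorted(spot))
```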

Molecular storage is already in the works elsewhere. Experiments with embedding info in DNA and other long-chain molecules date back to the early 2000s, and tech titans like Microsoft and IBM have entered the mix, along with specialty companies and IARPA, the US federal research agency for spies.

But small molecules may have distinct advantages over DNA. Compared to the double helix, their structures are simpler to synthesize (cheaper to manufacture), more durable (less susceptible to degradation), and less error prone (because reading and writing don’t require sequencing or encoding). What’s more, according to Rubenstein’s rough calculations, a flask of small molecules could hold the same amount of data as 200 Empire State Buildings’ worth of terabyte hard drives. When they’re stored as dried crystals, the molecules’ lifespans could outlast even modern storage media—perhaps thousands of years, compared with the 10 to 20 of current hard drives and magnetic tapes. The main trade-off is speed. Rubenstein’s tech would take about six hours to store this article, for example, and you would need specialized equipment like a mass spectrometer to read it back, making the method better suited to archival preservation than daily computing.

Within the last few years, Rubenstein and her colleagues have filed a chemical computing patent, and they are in talks with a venture capital firm to launch a startup focused on harnessing the budding new technology. “What gets me up in the morning,” says Rubenstein, “is the prospect of computing using small molecules.”

Tracking public health with smart sewers

an Asian woman with short hair on a blue background
Fangqiong Ling, Assistant Professor of Energy, Environmental & Chemical Engineering, Washington University in Saint Louis. Nicole Rifkin

The name Beijing often conjures images of skyscrapers, traffic, and crowds. But Fangqiong Ling, who grew up in the city of more than 20 million, thinks of its scenic lakes, which still bear their 17th-century Qing dynasty names: Qianhai, Houhai, and Xihai. Ling studied algae blooms in these pools in high school. She and her classmates used benthic invertebrates (such as crayfish, snails, and worms) to analyze water quality, knowing that different groups of species tend to gather in clean or polluted environments. She’s been turning smaller and smaller biological organisms into sensors ever since.

Ling, an environmental microbiologist and chemical engineer at Washington University in St. Louis, still studies the H2O that flows through urban infrastructure. But she’s transitioned from water quality to wastewater-based epidemiology (WBE) and the use of “smart sewers.”

This concept isn’t new: Public health officials have sampled sewage for years to detect a wide spectrum of biologics and chemicals—including illicit drugs, viruses, bacteria, antibiotics, and prescription medications. But they have lacked tools to accurately account for the number of human sources represented in their samples, making it hard to assess the scope and scale of contamination. If a sewage sample turns up high concentrations of nicotine, for example, the spike could be the result of one toilet flush from a hardcore smoker close to the collection area, or the combined contribution of many smokers across the city. Substitute coronavirus or anthrax, and it’s easy to see how the difference matters.

Ling’s breakthrough was figuring out how to use the relative numbers of people’s gut bacteria in wastewater—revealed by rapidly sequencing their RNA—to estimate the true size of the population that contributed to that sample.
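
A toy calculation shows why that matters. The Python below is a deliberately simplified stand-in for her method: it assumes a single human fecal marker is shed at a roughly known per-person rate, uses it to estimate crowd size, and then scales the target chemical’s reading accordingly. (Ling’s actual approach draws on the whole mix of gut microbes, and all the numbers here are invented.)

```python
def per_capita_signal(target_conc, marker_conc, marker_per_person):
    """Estimate contributors from a human fecal marker, then scale the
    target chemical's concentration by that crowd size."""
    est_population = marker_conc / marker_per_person
    return target_conc / est_population, est_population

# The same raw nicotine reading with two very different crowds behind it:
NICOTINE = 120.0   # raw concentration in the sample (arbitrary units)
SHED_RATE = 5.0    # marker each person adds per unit volume (assumed)
for marker in (50.0, 5000.0):
    per_person, crowd = per_capita_signal(NICOTINE, marker, SHED_RATE)
    print(f"~{crowd:.0f} contributors -> {per_person:.2f} units per person")
```

Double the marker concentration and the estimated crowd doubles, so the same raw nicotine reading implies half as much smoking per person.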

Her field is having a moment. During COVID-19, many cities have turned to WBE, which has exploded from a dozen or so projects to more than 200 worldwide. In 2020, the Centers for Disease Control and Prevention announced a new National Wastewater Surveillance System as a public health tool. With a 2021 National Science Foundation grant, Ling wants to improve population estimates to the point where the comings and goings of commuters, tourists, and other transients don’t skew results. Those tools are a step toward automatic, highly accurate assessments of contaminants and contagions in precise locations. “Microbes really have a very fundamental relationship with humans and our cities,” Ling notes. “I’m just trying to dig out the stories they have to tell.”

Shining light on dark matter

a man with glasses on a pink background
Michael Troxel, Assistant Professor of Physics, Duke University. Nicole Rifkin

The standard cosmological model describes how stars, planets, solar systems, and galaxies—even little-understood objects like black holes—congealed from a raucous cloud of primordial particles. While there’s abundant evidence to support the big bang (such as the expansion of the universe and the background radiation the cosmic event left behind), there are some vexing gaps. Dark matter, for instance. For galaxies to rotate at the speeds we observe, there should be at least five times more mass than we’ve been able to lay eyes on. “We have no evidence that dark matter exists, except that it is necessary for the universe to end up where we are today,” says Michael Troxel, a cosmologist at Duke University. To piece together what’s missing, Troxel builds maps of the universe larger and more precise than any before.

Since 2014, Troxel has worked with the Dark Energy Survey (DES), an ambitious international collaboration of more than 400 scientists, to address critical unknowns in the universe. To scope out distant skies, DES fitted a custom 570-megapixel camera with an image sensor highly attuned to red light—as objects move farther away, their wavelengths appear to stretch, making them look increasingly crimson—and mounted it on a telescope perched high in the Chilean Andes. From that vantage, it can spot some 300 million galaxies.

Now co-chair of the DES Science Committee, Troxel coordinated the analysis of data collected through 2016, and, in doing so, spied dark matter’s myriad fingerprints on celestial bodies across spacetime in exquisite detail. The brightness and redness of objects indicate both their distance and—because the universe is expanding—how long they’ve been traveling. Modeling subtle bends in light (think magnified or stretched waves) called weak gravitational lensing reveals massive objects both seen and unseen. And the makeup of the objects themselves helps fill in the picture even more: Troxel used machine learning to classify patterns in galaxy colors (shades of red and faintness) and mathematical modeling to infer shapes (elliptical, spiral, irregular), netting a catalog of more than 1,000 types of galaxies. Having a reference for what clusters should look like helps efforts to detect distortions that may point to dark matter. “That allows us to reconstruct this 3D picture of not just what the universe looks like now, but how it looked 6 or even 9 billion years ago,” Troxel explains.
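
The distance part of that reconstruction is textbook physics, and easy to try. The Python sketch below uses invented wavelength numbers and assumes the astropy library is installed; it converts a stretched spectral line into a redshift and then into how long ago the light set out.

```python
from astropy.cosmology import Planck18  # assumes astropy is installed

def redshift(observed_nm, rest_nm):
    # Expansion stretches wavelengths: z = (observed - rest) / rest
    return (observed_nm - rest_nm) / rest_nm

# Hydrogen-alpha light is emitted at 656.3 nm; suppose a survey galaxy
# shows that same line shifted out to 985 nm (an invented reading).
z = redshift(985.0, 656.3)
print(f"redshift z = {z:.2f}")
print(f"that light left its galaxy {Planck18.lookback_time(z):.1f} ago")
```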

The findings, announced in May 2021, cover one-eighth of Earth’s sky and more than 100 million galaxies. By the time the results of the full DES data set are published (possibly by 2023), Troxel is hopeful we’ll be able to predict and calculate dark matter. “There’s going to be this watershed moment where we measure the right thing, or we measure the things we’re measuring now with enough precision that we’re going to fundamentally learn where physics is broken,” he says. “We’re almost there.”

Adapting technology for those who need it most

a woman with long brown hair on a green background
Stacy Branham, Assistant Professor of Informatics, University of California, Irvine. Nicole Rifkin

To Stacy Branham, people with disabilities are the original life hackers—and that’s a bad thing. The University of California, Irvine computer scientist doesn’t think anyone should have to be a MacGyver just to get through life. Marginalized groups often adapt apps and gadgets to suit their needs. In the 1950s, for instance, visually impaired people manipulated record players to run at higher speed, allowing them to “skim” audio books for school or work; today, browser extensions that speed up videos have the same effect. Branham wants to use similar insights to design better products from the start. “Innovation is having the right people in the room,” she says.

Branham takes off-the-shelf technologies, like virtual assistants, and puts them together in novel ways to address the needs of underserved communities. One of her projects, nicknamed Jamie, provides step-by-step directions to help the elderly and people with disabilities navigate byzantine airport checkpoints, signs, and corridors. Jamie uses voice assistance, a geolocation system that takes cues from sources like Bluetooth beacons and WiFi signals, “staff-sourcing” (daily reports by airport employees about dynamic changes like repair work), and audio cues or vibrations. COVID-19 derailed plans to pilot the system at Los Angeles International Airport, but Branham expects to resurrect it soon. “It was built from the beginning with input from people who are blind, people who are wheelchair users, and people who are older adults,” she says, but the resulting tech will benefit anyone who gets lost in airports.
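
The beacon piece of a system like Jamie typically leans on a standard radio rule of thumb: signal strength falls off predictably with distance. The Python below is a generic sketch of that log-distance path-loss model, not Branham’s actual code; the calibration constant and path-loss exponent are typical textbook values, not measured ones.

```python
def beacon_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate meters to a Bluetooth
    beacon. tx_power is the beacon's calibrated signal at 1 meter;
    the exponent grows in cluttered spaces like airport terminals."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

for rssi in (-59, -69, -79):  # progressively weaker signals
    print(f"RSSI {rssi} dBm -> roughly {beacon_distance(rssi):.1f} m away")
```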

Next, Branham wants to adapt text-to-speech tech to help blind people read with their children. Her proposed Google Voice–based app will act as an interpreter for e-books, prompting caregivers via earbuds with the right words and descriptions of images so they can have a richer story-time experience with their families.

When modern tools are designed with disabled communities in mind, there’s often a widespread benefit—see, for example, the now-ubiquitous curb cuts that enable passage for those with strollers and luggage as much as those in wheelchairs. Branham also points out how software like hers could help other groups, such as people who speak English as a second language. Ultimately, she measures success differently than most people developing personal electronic gizmos: not by whether she can create flashy new features, but by whether the offerings of innovation and science are accessible to the people who might need them the most.

This story originally ran in the Fall 2021 Youth issue of PopSci. Read more PopSci+ stories.

The post The Brilliant 10: The most innovative up-and-coming minds in science appeared first on Popular Science.


]]>
Can music help animals relax? https://www.popsci.com/science/does-music-soothe-animals/ Fri, 16 Jul 2021 11:00:00 +0000 https://www.popsci.com/?p=380177
band-playing-for-elephants
At the Central Park Zoo, a band tests an elephant's musical taste. Popular Science

There's no telling what songs our furry friends want to hear.

The post Can music help animals relax? appeared first on Popular Science.

]]>

WHEN ENGLISH PLAYWRIGHT William Congreve wrote, “Music hath charms to soothe a savage breast” in his 1697 tragedy The Mourning Bride, any real efforts to measure how nonhuman animals produce and perceive music—what we now call zoomusicology—were far in the future.

Early 20th-century attempts to test the oft-misquoted quip (“beast” rather than “breast”) appeared to prove it quite wrong. In July 1921, Popular Science covered one such interlude at New York City’s Central Park Menagerie—now known as the Central Park Zoo. “The polar bear exhibited astonishment,” and a small tame wolf “ran wildly around, panic-stricken.” The elephant stood out, seeming oddly unfazed.

The purpose of the demonstration, according to an account in The New York Times, was “to gauge more or less scientifically the effect of jungle music on animals.” (“Jungle music” being a racist epithet for 1920s jazz, which was commonly associated with Black performers and progressive counterculture.) “There were some sketchy theories about animal songs and music back then,” says Emily Doolittle (no relation), a composer and zoomusicologist specializing in songbirds.

Scientists ranging from neurologists to veterinarians have since dug into figuring out which tunes our furry, feathered, and flippered pals do—or don’t—want to hear. In 1996, when a team at the Southwest Foundation for Biomedical Research played the radio for baboons, their heart rates slowed. One 2004 study published in Brain Research showed that listening to Mozart reduced systolic blood pressure by 15 percent in some rodents. And in 2008, a music theorist played clarinet for a humpback whale who seemed to change its own tune in response. Doolittle points out that songbirds experience a surge of happy-making chemicals like dopamine when they chirp at dawn.

Acclaimed cellist David Teie, a zoomusicologist whose compositions cater to cats, monkeys, dogs, horses, and (sure) humans, opts to mimic the sounds creatures themselves make when they’re feeling chill and safe. His soothing kitty melodies, for instance, reproduce the time signature of feline heart rhythms and the tones of mama cats’ purrs.

As for 1921’s indifferent Central Park elephant? A 2015 violin performance at the Pairi Daiza Zoo in Belgium managed to charm resident pachyderms, who swayed their trunks. But, Doolittle cautions, without more data to back it up, we shouldn’t take that to mean elephants prefer classical to jazz.

This story originally ran in the Spring 2021 Calm issue of PopSci. Read more PopSci+ stories.

The post Can music help animals relax? appeared first on Popular Science.


]]>
The pandemic could be a long-awaited turning point for telemedicine https://www.popsci.com/health/pandemic-technology-push-remote-heart-monitoring/ Thu, 13 May 2021 11:00:00 +0000 https://www.popsci.com/?p=364246
popsci-phone-stethoscope
In 1921, scientists sent a patient's heart beat through phone lines. Popular Science

100 years ago, doctors sent a heartbeat over a phone line, but devices enabling remote care may have finally found their moment

The post The pandemic could be a long-awaited turning point for telemedicine appeared first on Popular Science.

]]>

The stethoscope was born in 1816 in a fit of self-consciousness. To avoid having to press his ear directly on a patient’s ample chest, French physician René Theophile Hyacinthe Laënnec resorted to a rolled-up sheet of paper to listen to an ailing woman’s heart. Laënnec pioneered the practice of listening to sounds made by internal organs (or auscultation), and that prudent moment also triggered a wave of innovation that’s still underway.

Laënnec would have no doubt welcomed the news reported in Popular Science’s July 1921 issue: “Physicians in New York City may now listen to their patients’ hearts in San Francisco.” A palm-sized transmitter was connected via vacuum tubes and telephone lines to a remote phonograph so that “the heartbeats of a patient may be loud enough to be heard throughout a large auditorium,” a technology that Telephony magazine said was initially developed at the US Army’s Signal Corps lab, presumably to monitor soldiers. In September 1924, PopSci covered a smaller, cart-contained “stethophone” at an American Medical Association convention in Chicago. These demonstrations promised to expand the reach of medicine, democratizing heart healthcare by making critical equipment mobile and delivering it to remote locales. Over the last century, progress toward that promise has been intermittent, but with recent advances—and the help of a pandemic—it’s been gaining traction.

Despite the novelty of these early devices, remote heart monitoring did not really catch on until the Holter monitor—a portable ECG—was commercially released in the early 1960s. The Holter’s form factor, however, precluded it from being practically worn for more than a few days or for anything strenuous. Between tangled wires, taped electrodes, and a cumbersome card-sized recorder, patients would lose patience with the unwieldy device. Even so, it remained the go-to remote monitor for more than five decades.

“Medicine is slow to change and slow to adapt,” notes Francoise Marvel, a Johns Hopkins Cardiology Fellow and CEO of digital monitoring service Corrie Health. For Marvel, a self-proclaimed digital health interventionalist, medicine’s slow pace can be frustrating, even though patient privacy and health safety often dictate the adoption curve. “There had been a lot of stagnation with remote cardiac monitoring,” she says. “And then there came the smartphone.”

[Related: Older adults have a hard time accessing virtual health care.]

Over the past decade, three technological trends have converged to jumpstart efforts to hear distant tickers: significant improvements in wearable sensors, including longer-life batteries; wider availability of high-speed data networks; and the growth of cloud-based analytics, often powered by machine learning, which can deliver results and notifications instantaneously (even from San Francisco to New York). An April 2020 report in the Journal of the American College of Cardiology concluded that remote monitoring technologies for cardiac disorders “are showing great promise for the early detection of life-threatening conditions and critical events through long-term continuous monitoring.”
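
On the analytics side, the simplest version of such a notification is just a rule watching a rolling window of readings. The Python sketch below is a toy illustration, not any vendor’s algorithm; the 100 bpm threshold and six-reading window are arbitrary choices for the demo. It flags a sustained run of elevated heart-rate readings rather than a single noisy spike.

```python
from collections import deque

WINDOW = 6    # consecutive readings considered (say, one per 10 seconds)
LIMIT = 100   # bpm; a common resting-tachycardia threshold

def make_monitor():
    recent = deque(maxlen=WINDOW)
    def check(bpm):
        recent.append(bpm)
        # Alert only on a sustained run, never a single noisy spike.
        if len(recent) == WINDOW and min(recent) > LIMIT:
            return f"ALERT: above {LIMIT} bpm for {WINDOW} straight readings"
        return "ok"
    return check

check = make_monitor()
for bpm in (72, 75, 104, 108, 112, 110, 109, 111):
    print(bpm, check(bpm))
```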

There’s mounting evidence, too, that the practice leads to better outcomes, both medical and financial. A 2015 Mayo Clinic report found that discharged patients who enrolled in a smartphone study that required daily blood pressure and weight recordings experienced a 20 percent hospital readmission rate versus 60 percent for the control group. Marvel’s own Corrie Health trials, both published and in progress, have demonstrated similar results using daily blood pressure monitoring as part of a more comprehensive program. 

It’s not just Holters and blood pressure cuffs that can monitor heart health remotely—the field now includes everything from Apple Watches to smart patches. StethoMe and ThinkLabs offer puck-sized digital stethoscopes that can be used at home to record and share respiratory and heart rhythm readings. AliveCor’s KardiaMobile finger pad ECG reader comes with a smartphone case attachment. iRhythm offers a pendant-sized patch that sticks directly to the chest and can be worn continuously for up to 30 days. And a century after its stethophone premiere, even the US Army is getting on board, running remote trials on soldiers using a ruggedized fitness tracker called the Whoop strap.

The monitoring menu may soon expand even more. Gartner forecasts the market for smart clothing could reach more than $2 billion by next year. Hexoskin shirts, for instance, continuously measure ECG and heart rate. German company Ambiotex offers a heart-rate tracking smart shirt. And in 2019, a Georgia Tech team developed a flexible, stretchable electronic system thin enough to be embedded in apparel and capable of measuring a slew of vitals including ECG, heart rate, respiratory rate, and motion. Even with so many technology stars aligning in favor of remote heart monitoring—and mounting evidence that people who are younger or just tech savvy are embracing smartwatches and other biosensors—doctors, seniors, and many insurance companies have generally resisted adoption because the technology can be too expensive, unproven, or difficult to master.  

Enter the pandemic. 

For all the loss, suffering, and setbacks caused by COVID-19, doctors and patients have learned to lean on telemedicine. In just the second quarter of 2020, the American Medical Association reported that primary care telemedicine visits increased nearly ten times from the year before to 35 million, while office-based visits decreased by 50 percent. A July 2020 report by health data company IQVIA indicates that doctors expect to rely on telehealth for 25 percent of their patient visits after the pandemic, up from 6 percent. And a 2021 Pew Research Center report predicts that life will be “far more tech-driven” in the wake of COVID-19, including “the emergence of…an ‘Internet of Medical Things’ with sensors and devices that allow for new kinds of patient monitoring.” Companies like MedWand (physical exams), Butterfly (ultrasounds), and PocDoc (blood tests) already offer digital tools that enable telehealth checkups whether they’re conducted at home, at work, or at a remote care facility.

[Related: Can we make ourselves more empathetic? 100 years of research still has psychologists stumped.]

A century after Popular Science described how the sound of human heartbeats could be transmitted cross country to an awaiting gaggle of white-coated and be-stethoscoped men gathered in a concert hall to catch their first heart concert (not that Heart), remote heart monitoring looks much different now. “I think that COVID-19 hit the gas pedal and showed very quickly, in an emergent setting, that telemedicine could be responsive and could have a benefit,” says Marvel. “And so, I think the door is open, and we need to walk through it and take full advantage of advancing this technology.”

The post The pandemic could be a long-awaited turning point for telemedicine appeared first on Popular Science.


]]>