Army ants could teach robots a thing or two https://www.popsci.com/technology/robot-swarm-army-ants/ Wed, 22 Nov 2023
Ants' tiny brains can still coordinate to build complex structures using their own bodies. Credit: Isabella Muratore

Army ants use their bodies to build bridges. Robots could soon take a cue from the tiny insects’ ability to collaborate.

Apart from their nasty stings, army ants are known for stunning, intricate architectural feats built from their own bodies. When worker ant hunting parties encounter obstacles such as fallen tree branches, gaps in foliage, or small streams, the tiny insects join forces to form a bridge for their colony mates to traverse. It’s as impressive as it is somewhat disconcerting—these are living, crawling buildings, after all. But one research team isn’t studying the coordination of these minuscule bugs to benefit future construction projects; it is looking into how army ant teamwork could be mimicked by robots.

“Army ants create structures using decentralized collective intelligence processes,” Isabella Muratore, a postdoctoral researcher at the New Jersey Institute of Technology specializing in army ant building techniques, explains to PopSci over email. “This means that each ant follows a set of rules about how to behave based on sensory input and this leads to the creation of architectural forms without the need for any prior planning or commands from a leader.”
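
To make the quoted idea concrete, consider the toy simulation below: a minimal, hypothetical sketch of decentralized rule-following, not the researchers’ actual model. Every simulated ant applies the same local rule with no leader: on reaching the leading edge of a partial bridge with a gap still ahead, it locks in place as a structural element; once the span is complete, ants simply walk across. The gap width, join probability, and `simulate` helper are all illustrative assumptions.

```python
import random

GAP_WIDTH = 10.0   # gap to span, in ant body lengths (assumed unit)
ANT_LENGTH = 1.0   # length one locked-in ant adds to the bridge

def simulate(num_ants: int, join_prob: float = 0.9, seed: int = 1):
    """Return (ants locked into the bridge, ants that walked across)."""
    rng = random.Random(seed)
    bridge_len, locked, crossed = 0.0, 0, 0
    for _ in range(num_ants):
        if bridge_len < GAP_WIDTH:
            # Local rule: at the bridge's edge with a gap ahead, join the
            # structure -- no commands from a leader required.
            if rng.random() < join_prob:
                bridge_len += ANT_LENGTH
                locked += 1
        else:
            crossed += 1   # the span is complete; traffic simply flows
    return locked, crossed

locked, crossed = simulate(200)
print(f"Bridge built from {locked} ants; {crossed} ants crossed the gap")
```

A complex-looking outcome, a bridge sized to the gap, emerges even though no individual ant knows the gap’s width; that is the essence of the decentralized process Muratore describes.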

[Related: These robots reached a team consensus like a swarm of bees.]

Along with engineers from NJIT and Northwestern University, Muratore and her entomologist colleagues developed a series of tests meant to gauge army ant workers’ reactions and logistical responses to environmental impediments. After placing obstacles in the ants’ forest paths, Muratore filmed and later analyzed the colonies’ subsequent adaptations as they continued along their routes. Utilizing prior modeling work, the team also tested whether the ant bridges could withstand sudden, small changes in obstacle length using an adjustable spacing device.

Muratore and others recently presented their findings at this year’s annual Entomological Society of America conference. According to their observations, army ants generally choose to construct bridges in the most efficient locations—places wide enough to necessitate a building project while requiring the fewest ants possible. The number of bridges needed along a foraging route also influences the ants’ collective decisions on resource allocation.

David Hu, a Georgia Institute of Technology engineering professor who studies the rafts fire ants build during floods, recently likened the insects to neurons in one big, creepy-crawly brain while speaking to NPR on the subject. Rather than individual ants determining bridge dimensions and locations, each ant contributes to the decision in its own small way.

[Related: Robot jellyfish swarms could soon help clean the oceans of plastic.]

Muratore and her collaborators believe army ants’ collaborative capabilities could soon help engineers program swarms of robots based on the insects’ behavioral principles and brains. Ant brains vary across species, but even at a volume of roughly 1.1 microliters, they pack a surprising amount of information.

Replicating that brainpower requires relatively low energy costs. Scaling it across a multitude of robots could remain comparatively cheap, while exponentially increasing their functionality. This could allow them to “flexibly adapt to a variety of challenges, such as linking together to form bridges over gaps of different lengths in the most efficient manner possible,” Muratore writes to PopSci.
Robotic teamwork will be crucial for deploying the machines across a number of industries and scenarios, from outer space exploration, to ocean cleanup projects, to search-and-rescue efforts in areas too dangerous for humans to access. In these instances, coordinating quickly and efficiently not only saves time and energy, it could also save lives.

Hyundai’s robot-heavy EV factory in Singapore is fully operational https://www.popsci.com/technology/hyundai-singapore-factory/ Tue, 21 Nov 2023
Over 200 robots will work alongside human employees at the new facility. Hyundai

The seven-story facility includes a rooftop test track and ‘Smart Farm.’

After three years of construction and limited operations, the next-generation Hyundai Motor Group Innovation Center Singapore (HMGICS) production facility is officially online and fully functioning. Announced on November 20, the 935,380-square-foot, seven-story facility relies on 200 robots to handle over 60 percent of all “repetitive and laborious” responsibilities, allowing human employees to focus on “more creative and productive duties,” according to the company.

In a key departure from traditional conveyor-belt factories, HMGICS centers on what the South Korean vehicle manufacturer calls a “cell-based production system” alongside a “digital twin Meta-Factory.” Instead of siloed responsibilities for automated machinery and human workers, the two often cooperate using technology such as virtual and augmented reality. As Hyundai explains, while employees simulate production tasks in a digital space using VR/AR, for example, robots will physically move, inspect, and assemble various vehicle components.

[Related: Everything we love about Hyundai’s newest EV.]

By combining robotics, AI, and the Internet of Things, Hyundai believes the HMGICS can offer a “human-centric manufacturing innovation system,” Alpesh Patel, VP and head of the factory’s Technology Innovation Group, said in Monday’s announcement.

Atop the HMGICS building sits a vehicle test track over 2,000 feet long, as well as a robotically assisted “Smart Farm” capable of growing up to nine different crops. While a car factory vegetable garden may sound somewhat odd, it actually complements the Singapore government’s ongoing “30 by 30” initiative.

Due to the region’s rocky geology, Singapore can only utilize about one percent of its land for agriculture—an estimated 90 percent of all food in the area must be imported. Announced in 2022, Singapore’s 30 by 30 program aims to boost local self-sufficiency by increasing domestic yields to 30 percent of all consumables by the decade’s end using a combination of sustainable urban growth methods. According to Hyundai’s announcement, the HMGICS Smart Farm is meant to showcase farm productivity within compact settings—while also offering visitors some of its harvested crops. The rest of the produce will be donated to local communities, as well as featured on the menu at a new Smart Farm-to-table restaurant scheduled to open at the HMGICS in spring 2024.

[Related: Controversial ‘robotaxi’ startup loses CEO.]

HMGICS is expected to produce up to 30,000 electric vehicles annually, and currently focuses on the IONIQ 5, as well as its autonomous robotaxi variant. Beginning in 2024, the facility will also produce Hyundai’s IONIQ 6. If all goes according to plan, the HMGICS will be just one of multiple cell-based production system centers.

This 3D-printed soft robotic hand has ‘bones,’ ‘ligaments,’ and ‘tendons’ https://www.popsci.com/technology/3d-printed-soft-robot-hand/ Wed, 15 Nov 2023
Researchers 3D-printed a robotic hand, a six-legged robot, a 'heart' pump, and a metamaterial cube. ETH Zurich / Thomas Buchner

3D-printed designs are usually limited to fast-curing polymers, but a new method enables wild, soft robotic possibilities.

To call soft robotic hands “complex” is a bit of an understatement. These designs must account for a number of engineering factors, including the elasticity and durability of materials, and production usually entails separate 3D-printing processes for each component, often with multiple plastics and polymers. Now, however, engineers from ETH Zurich and the MIT spin-off Inkbit can create extremely intricate products with a 3D printer that pairs a laser scanner with feedback learning. The researchers’ impressive results already include a six-legged gripper robot, an artificial “heart” pump, sturdy metamaterials, and an articulating soft robotic hand complete with artificial tendons, ligaments, and bones.

[Related: Watch a robot hand only use its ‘skin’ to feel and grab objects.]

Traditional 3D printers use fast-curing polyacrylate plastics. In this process, UV lamps quickly harden a malleable plastic gel as it is layered via the printer nozzle, while a scraping tool removes surface imperfections along the way. While effective, the rapid solidification can limit a product’s form, function, and flexibility. But swapping the fast-curing plastic for slow-curing polymers like epoxies and thiolenes mucks up the machinery, meaning many soft robotic components require separate manufacturing methods.

Knowing this, the designers wondered whether adding scanning technology alongside rapid printing adjustments could clear the slow-curing hurdle. As detailed in a paper published in Nature, the resulting system not only solves the problem, but demonstrates slow-curing polymers’ potential across a number of 3D-printed designs.

Instead of scraping away imperfections layer-by-layer, three-dimensional scanning offers near-instantaneous information on surface irregularities. This data is sent to the printer’s feedback mechanism, which then adjusts the necessary material amount “in real time and with pinpoint accuracy,” Wojciech Matusik, an electrical engineering and computer science professor at MIT and study co-author, said in a recent project profile from ETH Zurich.
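
As a rough illustration of that scan-and-correct loop, here is a minimal sketch in Python: deposit material only where a simulated scanner reports a shortfall. This is an invented toy, not Inkbit’s control code; the layer thickness, noise level, and `scan` stand-in are all assumptions.

```python
import numpy as np

LAYER = 0.2        # nominal layer thickness in mm (assumed)
MAX_DEPOSIT = 0.3  # most material the nozzle can add per pass (assumed)

def scan(surface):
    """Stand-in for the laser scanner: a height map plus measurement noise."""
    return surface + np.random.normal(0.0, 0.01, surface.shape)

def print_layer(surface, goal_height):
    """Deposit only where the scan says material is missing; no scraping."""
    measured = scan(surface)
    deposit = np.clip(goal_height - measured, 0.0, MAX_DEPOSIT)
    return surface + deposit

surface = np.zeros((64, 64))        # empty build plate
for layer_idx in range(1, 51):      # build 50 layers
    surface = print_layer(surface, goal_height=layer_idx * LAYER)

print(f"Mean height error: {np.abs(surface - 50 * LAYER).mean():.4f} mm")
```

Because each pass corrects the previous one’s errors, a slow-curing material that sags or spreads slightly can still converge on the target geometry, which is the advantage the scanner-plus-feedback approach offers over blind layering.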

To demonstrate the new method’s potential, researchers created a quartet of diverse 3D-printed projects using slow-curing polymers: a resilient metamaterial cube, a heart-like fluid pump capable of transporting liquids through its system, a six-legged robot topped with a sensor-informed two-pronged gripper, and an articulating hand capable of grasping objects using embedded sensor pads.
While refinements to production methods, polymers’ chemical compositions, and lifespan are still needed, the team believes the comparatively fast and adaptable 3D-printing method could one day lead to a host of novel industrial, architectural, and robotic designs. Soft robots, for example, offer less risk of injury when working alongside humans, and can handle fragile goods better than their standard, metal robot counterparts. Already, however, the existing advances have produced designs once impossible for 3D printers.

These robots reached a team consensus like a swarm of bees https://www.popsci.com/technology/bee-robot-communication/ Wed, 08 Nov 2023
The tiny robots communicate using multicolored LED lights. Credit: Unsplash / University of Barcelona / PopSci

Scout bees vote for new hive locations with a 'dance.' These bots use blinking lights.

Bees are extremely adept at communicating, even though their brains weigh just two milligrams. They’re so efficient at reaching a consensus, in fact, that researchers created a mini-robot team inspired by their ‘conversations.’

In the search for a new nesting spot, scout bees conduct tiny “waggle dances” to indicate their preferred hive location—slowly winning over swarmmates to join in the process. The moves are small but complex, involving figure-eight patterns performed while the bees shake their bodies at high speed. The bees whose dance proves most popular earn the final say on where to build. While the three-centimeter-wide “kilobots” under the watch of a team at Spain’s University of Barcelona can’t shimmy and shake just yet, they do signal to one another much like bees.

[Related: Bee brains could teach robots to make split-second decisions.]

As detailed in their preprint paper submitted in late October, the team first attached a colored LED light, along with an infrared receiver and emitter, atop each of 35 kilobots. They then programmed the bots with a modified version of a previously designed mathematical model of scout bee behavior. From there, the team placed varying numbers of kilobots within an enclosure and let them jitter through their new environment on their trios of toothpick-like legs. Across more than 70 tests, researchers directed certain bot clusters to advertise their preferred nesting location “opinion” by signaling with their LED lights’ red, blue, and green hues.

Every kilobot team achieved a group consensus within roughly 30 minutes, no matter the team size or environmental density. Such reliable decision making—even in machines capable of transmitting just 9 bytes of information at a time—could one day prove invaluable across a number of industries.
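
For a feel of how blinking-light consensus can emerge without a leader, here is a deliberately simplified voter-model sketch. It is far cruder than the team’s bee-inspired mathematical model, and every parameter is invented: each bot repeatedly observes one random peer’s LED color and adopts it, until a single hue wins out.

```python
import random

COLORS = ["red", "green", "blue"]  # the three LED "opinions"

def run(num_bots: int = 35, seed: int = 0):
    """Repeat pairwise copying until all bots share one opinion."""
    rng = random.Random(seed)
    opinions = [rng.choice(COLORS) for _ in range(num_bots)]
    steps = 0
    while len(set(opinions)) > 1:
        i, j = rng.randrange(num_bots), rng.randrange(num_bots)
        if i != j:
            opinions[i] = opinions[j]  # copy an observed neighbor's LED color
        steps += 1
    return opinions[0], steps

winner, steps = run()
print(f"35 bots converged on {winner} after {steps} pairwise observations")
```

Even this bare-bones dynamic reliably reaches agreement, hinting at why such small message sizes (a few bytes signaling a color) can be enough for group decisions.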

[Related: Bat-like echolocation could help these robots find lost people.]

“We believe that in the near future there are going to be simple robots that will do jobs that we don’t want to do, and it will be very important that they make decisions in a decentralized, autonomous manner,” Carmen Miguel, one of the study’s co-authors, explained to New Scientist on November 7.

During invasive medical procedures, for instance, tiny robots could maneuver within a patient’s body, communicating with one another without the need for complex electronics. Similarly, cheap bots could coordinate with one another while deployed during search-and-rescue missions. In such scenarios, the environmental dangers often prevent the use of expensive robots due to risk of damage or destruction.

Above it all, however, the University of Barcelona team believes their work draws attention to often underappreciated aspects of everyday existence. The team’s paper abstract concludes: “By shedding light on this crucial layer of complexity… we emphasize the significance of factors typically overlooked but essential to living systems and life itself.”

Why electric knifefish ‘shimmy’ https://www.popsci.com/environment/electric-knifefish-shimmy/ Thu, 26 Oct 2023
Knifefish, like the black ghost knifefish, are known for their shimmying motions and electrical pulses, and live in freshwater lakes and rivers in Central and South America. Deposit Photos

Quick movements heighten animal senses—even in humans.

Animals have a wide range of ways to make sense of the world around them. Dogs sniff the air. Dolphins use echolocation. Humans glance at each other. And for the electric knifefish, “shimmying” through the water like a tadpole helps it make sense of its watery world. But knifefish are not the only ones that wiggle with purpose. In a study published October 26 in the journal Nature Machine Intelligence, scientists describe a wide range of organisms that perform these same wavy patterns of movement to feel out their environments.

[Related: Five animals that can sense things you can’t.]

The team behind this study was interested in what the nervous system does when animals move to improve their perception of the world, and if that behavior could be translated to robotic control systems.

“Amoeba don’t even have a nervous system, and yet they adopt behavior that has a lot in common with a human’s postural balance or fish hiding in a tube,” study co-author and Johns Hopkins University mechanical engineer Noah Cowan said in a statement. “These organisms [knifefish and amoebas] are quite far apart from each other in the tree of life, suggesting that evolution converged on the same solution through very different underlying mechanisms.”

An observation tank illuminated by infrared shows electric knifefish behavior with the lights on (top) and lights off (bottom). CREDIT: Johns Hopkins University.

Shimmying in the dark

Knifefish are blade-shaped fish found in freshwater lakes and rivers in Central and South America. They can reach three feet long and eat insects, crustaceans, and other fish. In the wild, they are hardwired to hide from predators: they send out weak electric discharges to sense a predator’s location, and wiggling around rapidly helps them actively map their surroundings while searching for shelter.

While watching electric knifefish in an observation tank, the team noticed that when it was dark, the fish shimmied back and forth significantly more frequently. The fish swayed more gently with occasional bursts of quick movements when the lights were on. 

“We found that the best strategy is to briefly switch into explore mode when uncertainty is too high, and then switch back to exploit mode when uncertainty is back down,” co-author and Johns Hopkins computational cell biologist and neuroethologist Debojyoti Biswas said in a statement. When a predator could be nearby, the knifefish quickly searches for somewhere to hide; once it feels safe, it can return to a more normal, less wiggly state and focus on finding food.
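
In pseudocode terms, the strategy Biswas describes is a threshold-triggered switch between two modes. The sketch below is a toy illustration with invented thresholds and update rules, not the study’s actual equations: uncertainty drifts upward while the fish sits still, a burst of active “exploring” pays it back down, and the animal then settles again.

```python
import random

HIGH, LOW = 1.0, 0.3  # invented switch-up / switch-down uncertainty thresholds

def step(uncertainty: float, mode: str):
    if mode == "exploit":
        uncertainty += random.uniform(0.0, 0.15)  # estimate degrades while still
        if uncertainty > HIGH:
            mode = "explore"                      # burst of rapid shimmying
    else:
        uncertainty *= 0.5                        # active sensing reduces doubt
        if uncertainty < LOW:
            mode = "exploit"                      # calm down and resume feeding
    return uncertainty, mode

uncertainty, mode = 0.0, "exploit"
for t in range(30):
    uncertainty, mode = step(uncertainty, mode)
    print(f"t={t:2d} mode={mode:7s} uncertainty={uncertainty:.2f}")
```

Run long enough, the output settles into a pattern loosely like the one the researchers describe: long quiet stretches punctuated by short, energetic bouts of exploration.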

Exciting the senses

In the study, the team created a model that simulates the key sensing behaviors of the fish. They used work from other labs and spotted these same sensory-dependent movements in other organisms including amoeba, moths, cockroaches, moles, bats, mice, and even humans.

According to the authors, this is the first time scientists have deciphered this mode-switching strategy in fish and linked the behavior across species. They believe that all organisms have a brain computation that manages uncertainty in their environment.

[Related: How cats and dogs see the world.]

“If you go to a grocery store, you’ll notice people standing in line will change between being stationary and moving around while waiting,” Cowan said. “We think that’s the same thing going on, that to maintain a stable balance you actually have to occasionally move around and excite your sensors like the knifefish. We found the statistical characteristics of those movements are ubiquitous across a wide range of animals, including humans.”

Understanding these sensory mechanisms and their nuances could be used to improve search and rescue drones, space rovers, and other autonomous robots. These same characteristics for looking around could be built into future robots to help them perceive the space around them. The team also plans to explore how these mechanisms work in living things—even in plants.

The Marines used a ‘robotic goat’ to fire a rocket-propelled grenade https://www.popsci.com/technology/marines-robotic-goat-fires-weapon/ Tue, 24 Oct 2023
The robotic goat with the M72 Light Anti-Tank Weapon on Sept. 9, 2023. Justin J. Marty / US Marines

Here's why the US military put a light anti-tank weapon on the back of a robotic quadruped.

On September 9, Marines at Twentynine Palms, California, strapped a rocket launcher to the back of a commercially available robotic goat as part of a tactical training exercise. In a video of the test, the robotic goat is set up for safety on a firing range within a little sandbagged shelter, cleared to fire, and then the rocket-propelled grenade launches off the goat’s back. (While most quadrupedal robots of this size are referred to as robot dogs, the Marine Corps referred to the robot in question as a robotic goat.) The test, one of several demonstrations of new and autonomy-adjacent technologies that day, offers a glimpse into what robot-assisted combat of the present and future could look like.

The test was conducted by the Tactical Training and Exercise Control group, together with the Office of Naval Research, and it took place at the Marine Air Ground Task Force Training Command, which is the largest Marine Corps base. The rocket-propelled grenade launcher used was an M72 Light Anti-tank Weapon (or LAW). The weapon is a NATO standard, and thousands of the weapons have been delivered to Ukraine since it was invaded by Russia in February 2022.

The M72 LAW has been in service with US forces since 1963. Weighing just 5.5 pounds, the weapon is light, cheap enough to discard after firing, and dead simple to use. A Marine Corps guide notes that it is a standard tool of infantry battalions (each comprising roughly 800 Marines). The weapon is also not specific to any line of service and “can be fired by any Marine with basic infantry skills.”

[Related: The US military’s tiniest drone feels like it flew straight out of a sci-fi film]

The rockets fired by the launcher can travel up to 3,280 feet, but are most effective at a range of 650 feet. That’s a dangerously close distance to be near a tank, as it places the person trying to destroy the tank within range of not just the tank’s big cannon but also any machine guns it may have for self-defense. This danger is exacerbated for armies fighting in open fields, but the M72 was designed for the density and obstructions of urban combat. All of those features, from simplicity to disposability to close-range firing, make it an ideal weapon to mount on a remote-controlled robot shooter.

“Instead of having a Marine handle the weapon system, manipulate the safeties, we could put a remote trigger mechanism on it that allowed it to all be done remotely,” said Aaron Safadi, in a release on the test. Safadi is the officer in charge of the emerging technology integration section of the Tactical Training and Exercise Control group. “The Marine could be behind cover and concealment, the weapon system could go forward, and the Marine could manipulate the safeties from a safe place while allowing that weapon system to get closer to its target.”

The robot goat on which the Marines tested the M72 is, as a Marine emphatically explains in the video, a tool for testing and not the intended robot for carrying it into combat. As reported by The War Zone, “the underlying quadrupedal robot is a Chinese-made Unitree Go1, which is readily available for purchase online, including through Amazon.” (The War Zone is owned by PopSci’s parent company, Recurrent Ventures.)

In the past, security concerns about using off-the-shelf robotics and drones made in China have led to the Department of Defense banning their use without explicit waivers for permission. That’s left the Pentagon in a sometimes tricky spot, as the overwhelming majority of commercial manufacture of such robots is in China, to the point that even models branded Made in USA have Chinese components.

Both Ukraine and Russia have adopted off-the-shelf commercial robots for use in their war against each other. The low price point of the Go1 goat robot suggests it could follow a similar pattern, should it prove useful as a remote-control firing platform. The Marine Corps, should it want a different mount for the M72, could turn to a platform like the Ghost Robotics Q-UGV. This four-legged robotic dog has already seen use patrolling an Air Force base in Florida, and in 2021 Ghost demonstrated a version of the Q-UGV with a gun mounted on its back at a defense technology exposition.

To mount the M72 on the robot goat, the robot first dons a metal box with firing controls and safety switches on its back. After firing, the box can be opened, the spent launcher discarded, and the robot is ready to take on a new round. It is easy to see the appeal of such a system in combat. With the M72 designed to punch through armor or defenses at short range, the driver could use a video-game-like controller to scout ahead, watching through the robot’s cameras as eyes. Sensors on the side of the robot help it avoid other obstacles. Once it’s in position, the robot’s rocket could be launched, and if the robot survives the encounter, it could let the Marine witness the destruction before advancing.

Bringing tanks or other armored vehicles into cities is already a fraught decision, as urban combat sharply limits what crews can perceive. Cities, even ones reduced to rubble, can hide all sorts of waiting unpleasantness. For urban defenders and assaulters alike, the ability to mount weapons on robotic scouts, even and especially disposable robots with disposable weapons, offers a way to dip a first toe into urban combat without exposing troops to excess danger.

Watch a video of the robot goat, and the other items tested in the training exercise, below:

This seafaring robot ‘eats’ stinky seaweed and dumps it in deep water https://www.popsci.com/technology/algaray-seaweed-robot/ Tue, 24 Oct 2023
After gathering the seaweed, AlgaRay can dive below the surface to deposit its cargo near the ocean floor. Seaweed Generation/University of Exeter

The AlgaRay scoops up invasive sargassum seaweed before it washes onto shores. It could even alleviate CO2 pollution in the process.

If you’ve ever spent time on a beach in the Gulf of Mexico or the Caribbean, there is a solid chance you stumbled across a slimy mass of stinky, sulfurous-smelling seaweed. The culprit in those gross encounters is likely sargassum—while helpful for absorbing CO2, sargassum is also incredibly invasive, and can wreak havoc on both shoreline and ocean ecosystems. Cleanup efforts can cost tens of thousands of dollars while disrupting both the tourism and fishing industries, but an aquatic robot project is showing immense promise in alleviating sargassum stress. In fact, the AlgaRay’s recent successes have earned it a spot on Time’s Best Inventions of 2023.

Co-designed by Seaweed Generation, a nonprofit organization dedicated to using the versatile seaweed to help mitigate and remove carbon emissions, an AlgaRay prototype is currently patrolling off the coast of Antigua. There, the roughly 9-foot-wide robot scoops up clumps of sargassum until its storage capacity is full, at which point the autonomous bot dives 200 meters below the surface.

[Related: Rocks may be able to release carbon dioxide as well as store it.]

At this depth, the air pockets that make sargassum leaves so buoyant are so compressed by the water pressure that the seaweed simply can’t float anymore. Once released by AlgaRay, it sinks to the ocean floor. According to a new writeup by Seaweed Generation’s partners at the University of Exeter, the robot can repeat this process four to six times every hour. And thanks to a combination of solar panels, lithium batteries, and navigational tools connected to Starlink’s satellite internet constellation, AlgaRay will “ultimately be able to work almost non-stop,” the university reports.
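
A back-of-the-envelope calculation shows why 200 meters is deep enough. Treating a sargassum air bladder as an ideal gas pocket and applying Boyle’s law (pressure times volume stays constant at a fixed temperature), with rough textbook constants rather than Seaweed Generation’s engineering figures:

```python
P_ATM = 101_325       # surface atmospheric pressure, Pa
RHO_SEAWATER = 1_025  # seawater density, kg/m^3 (approximate)
G = 9.81              # gravitational acceleration, m/s^2
DEPTH = 200           # release depth, m

# Hydrostatic pressure: surface pressure plus the weight of the water column.
pressure = P_ATM + RHO_SEAWATER * G * DEPTH

# Boyle's law: V2 / V1 = P1 / P2 for a trapped gas pocket.
volume_fraction = P_ATM / pressure

print(f"Pressure at {DEPTH} m: about {pressure / P_ATM:.0f} atmospheres")
print(f"Air pockets shrink to roughly {volume_fraction:.0%} of surface volume")
```

The pockets end up around one-twentieth of their surface volume, stripping away nearly all of the buoyancy that keeps the seaweed afloat.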

Of course, ocean ecosystems are complex and delicate balancing acts at any depth. AlgaRay’s designers are well aware of this, and say that any additional ocean-floor CO2 deposits won’t be carried out recklessly. They also note that sargassum blooms—exacerbated by human ecological disruption—are already causing major issues across the world.

“Sargassum inundations… cause environmental, social and economic disruption across the Caribbean, Central US and West African regions,” Seaweed Generation CEO Paddy Estridge and Chief of Staff Blythe Taylor, explain on the organization’s website. “Massive influxes of seaweed wash ashore and rot, releasing not just the absorbed CO2 but hydrogen sulfide gasses, decimating fragile coastal ecosystems including mangroves and seagrass meadows and killing countless marine animals.”

[Related: The US is investing more than $1 billion in carbon capture, but big oil is still involved.]

Estridge and Taylor write that humans “need to tread carefully” when it comes to depositing biomass within the deep ocean to ensure there are no “negative impacts or implications on the surrounding environment and organisms.” At the same time, researchers already know sargassum naturally dies and sinks to the bottom of the ocean.

Still, “we can’t assume either a positive or negative impact to sinking sargassum, so a cautious pathway and detailed monitoring has been built into our approach,” Estridge and Taylor write. “The scale of our operations are such that we can measure any change to the ocean environment on the surface, mid or deep ocean. Right now, and for the next few years our operations are literally a drop in the ocean (or a teaspoon of Sargassum per m2).”

As the name might imply, the AlgaRay is inspired by manta rays, which glide through ocean waters while using their mouths to filter and eat algae. In time, future iterations of the robot could even rival manta rays’ massive sizes. A nearly 33-foot-wide version is in the works to collect upwards of 16 metric tons of seaweed at a time—equal to around two metric tons of CO2. With careful monitoring of deep sea repositories, fleets of AlgaRay robots could soon offer an efficient, creative means to remove CO2 from the atmosphere.

“The [Intergovernmental Panel on Climate Change] has been very clear that we need to be able to remove (not offset, remove) 10 billion [metric tons] of carbon a year from the atmosphere by 2050 to have a hope of avoiding utter catastrophe for all people and all earth life,” write Estridge and Taylor. Knowing this, AlgaRay bots may be a key ally in meeting that goal. If nothing else, perhaps some beaches will be a little less overrun with rotting seaweed every year.

Watch what happens when AI teaches a robot ‘hand’ to twirl a pen https://www.popsci.com/technology/nvidia-eureka-ai-training/ Fri, 20 Oct 2023
You don't even need humans to help train some AI programs now. NVIDIA Research

The results are better than what most humans can manage.

Researchers are training robots to perform an ever-growing number of tasks through trial-and-error reinforcement learning, which is often laborious and time-consuming. To help out, humans are now enlisting large language model AI to speed up the training process. In a recent experiment, this resulted in some incredibly dexterous albeit simulated robots.

A team at NVIDIA Research directed an AI protocol powered by OpenAI’s GPT-4 to teach a simulation of a robotic hand nearly 30 complex tasks, including tossing a ball, pushing blocks, pressing switches, and some seriously impressive pen-twirling abilities.

[Related: These AI-powered robot arms are delicate enough to pick up Pringles chips.]

NVIDIA’s new Eureka “AI agent” utilizes GPT-4 by asking the large language model (LLM) to write its own reward-based reinforcement learning software code. According to the company, Eureka doesn’t need intricate prompting or even pre-written templates; instead, it simply begins honing a program, then adheres to any subsequent external human feedback.

In the company’s announcement, Linxi “Jim” Fan, a senior research scientist at NVIDIA, described Eureka as a “unique combination” of LLMs and GPU-accelerated simulation programming. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists,” Fan added.

Judging from NVIDIA’s demonstration video, a Eureka-trained robotic hand can pull off pen-spinning tricks to rival, if not beat, extremely dexterous humans.

After testing its training protocol within an advanced simulation program, Eureka then analyzes its collected data and directs the LLM to further improve upon its design. The end result is a virtually self-iterative AI protocol capable of successfully encoding a variety of robotic hand designs to manipulate scissors, twirl pens, and open cabinets within a physics-accurate simulated environment.
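
In outline, that self-improving loop looks something like the sketch below. Every name here (`ask_llm`, `run_simulation`, the prompt format) is a hypothetical stand-in meant to show the shape of the idea, not NVIDIA’s or OpenAI’s actual APIs.

```python
import random

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 call that returns reward code."""
    return "def reward(state): return -state['pen_angle_error']"

def run_simulation(reward_code: str):
    """Hypothetical stand-in for GPU-accelerated policy training."""
    score = random.random()
    return score, {"success_rate": round(score, 2)}

def eureka_style_loop(task: str, iterations: int = 5) -> str:
    prompt = f"Write a Python reward function for this task: {task}"
    best_code, best_score = "", float("-inf")
    for _ in range(iterations):
        reward_code = ask_llm(prompt)               # the LLM authors the reward
        score, stats = run_simulation(reward_code)  # train a policy against it
        if score > best_score:
            best_code, best_score = reward_code, score
        # Feed training statistics back so the model can refine its own code.
        prompt = (f"Task: {task}\nPrevious reward function:\n{reward_code}\n"
                  f"Training results: {stats}\nImprove the reward function.")
    return best_code

print(eureka_style_loop("make a robotic hand twirl a pen"))
```

The key design choice is that the simulator’s statistics, not a human engineer, supply the critique for the next round of reward writing.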

Eureka’s alternatives to human-written trial-and-error learning programs aren’t just effective—in most cases, they’re actually better. According to the team’s open-source research paper, Eureka-designed reward programs outperformed human-authored code in over 80 percent of the tasks, amounting to an average performance improvement of over 50 percent in the robotic simulations.

[Related: How researchers trained a budget robot dog to do tricks.]

“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” Anima Anandkumar, NVIDIA’s senior director of AI research and one of the Eureka paper’s co-authors, said in the company’s announcement. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”

This weird-looking British ship will keep an eye out for sabotage beneath the surface https://www.popsci.com/technology/british-ship-proteus-surveillance/ Fri, 20 Oct 2023
The Proteus. Ministry of Defence

It's called the Proteus, and it's a surveillance vessel.

On October 10, the Royal Fleet Auxiliary dedicated a ship called the Proteus in a ceremony on the River Thames. The vessel, which looks like someone started building a ship and then stopped halfway through, is the first in the fleet’s Multi-Role Ocean Surveillance program, and is a conversion from a civilian vessel. 

In its new role, the Proteus will keep a protective eye on underwater infrastructure deemed vitally important, and will command underwater robots as part of that task. Before being converted to military use, the RFA Proteus was the Norwegian-built MV Topaz Tangaroa, and it was used to support oil platforms.

Underwater infrastructure, especially pipelines and communications cables, makes the United Kingdom inextricably connected to the world around it. While these structures are hard to get to, resting as they do on the seafloor, they are not impossible to reach. Commercial vessels, like the oil rig tenders the Proteus was adapted from, can reach below the surface with cranes and see below it through remotely operated submarines. Dedicated military submarines can also access seafloor cables. By keeping an eye on underwater infrastructure, the Proteus increases the chance that saboteurs will be caught, and, more importantly, improves the odds that damage can be found and repaired quickly.

“Proteus will serve as a testbed for advancing science and technological development enabling the UK to maintain the competitive edge beneath the waves,” reads the Royal Navy’s announcement of the ship’s dedication.

The conversion from Topaz Tangaroa to Proteus took just 11 months from purchase to dedication, with work completed in September. The 6,600-ton vessel is operated by a crew of just 26 from the Royal Fleet Auxiliary, while the surveillance, survey, and warfare systems on the Proteus are crewed by 60 specialists from the Royal Navy. As the Topaz Tangaroa, the vessel was equipped for subsea construction, installation, light maintenance, and inspection work, as well as survey and remotely operated vehicle operations. The Proteus retains its forward-mounted helipad, which looks like a hexagonal brim worn above the bow of the ship.

Most striking about the Proteus is its large, flat rear deck, which features a massive crane as well as 10,700 square feet of working space, as much as five tennis courts. Helpful to the ship’s role as a home base for robot submersibles is a covered “moon pool” in the deck that, when uncovered, lets the ship launch submarines into the ocean directly beneath it.

“This is an entirely new mission for the Royal Fleet Auxiliary – and one we relish,” Commodore David Eagles RFA, the head of the Royal Fleet Auxiliary, said upon announcement of the vessel in January.

Proteus is named for one of the sons of the sea god Poseidon in Greek mythology, with Proteus having domain over rivers and the changing nature of the sea. While dedicated on a river, the ship is designed for deep-sea operation, with a ballast system providing stability as it works in the high seas. 

“Primarily for reasons of operational security, the [Royal Navy] has so far said little about the [Multi-Role Ocean Surveillance] concept of operations and the areas where Proteus will be employed,” notes the independent analysis site Navy Lookout in an in-depth guide to the ship. “It is unclear if she is primarily intended to be a reactive asset, to respond to suspicious activity and potentially be involved in repairs if damage occurs. The more plausible alternative is that she will initially be employed in more of a deterrent role, deploying a series of UUVs [Uncrewed Underwater Vehicles] and sensors that monitor vulnerable sites and send periodic reports back to the ship or headquarters ashore. Part of the task will be about handling large amounts of sensor data looking for anomalies that may indicate preparations for attacks or non-kinetic malign activity.”

In the background of the UK’s push for underwater surveillance are actual attacks and sabotage on underwater pipelines. In September 2022, an explosion caused damage and leaks in the Nord Stream gas pipeline between Russia and Germany. While active transfer of gas had been halted for diplomatic reasons following Russia’s February 2022 invasion of Ukraine, the pipeline still held gas in it at the time of the explosion. While theories abound for possible culprits, there is not yet a conclusive account of which nation was both capable and interested enough to cause such destruction.

The Proteus is just the first of two ships with this task. “The first of two dedicated subsea surveillance ships will join the fleet this Summer, bolstering our capabilities and security against threats posed now and into the future,” UK Defence Secretary Ben Wallace said in January. “It is paramount at a time when we face Putin’s illegal invasion of Ukraine, that we prioritise capabilities that will protect our critical national infrastructure.”

While the Proteus is unlikely to fully deter such acts, having it in place will make it easier for the Royal Navy to identify signs of sabotage. Watch a video of the Proteus below:

AI design for a ‘walking’ robot is a squishy purple glob https://www.popsci.com/technology/ai-robot-blob/ Fri, 13 Oct 2023
They may not look like much, but they skipped past billions of years of evolution to get those little legs. Northwestern University

During testing, the creation could walk half its body length per second—roughly half as fast as the average human stride.

Sam Kriegman and his colleagues made headlines a few years back with their “xenobots”—synthetic robots designed by AI and built from biological tissue samples. While experts continue to debate how best to classify such creations, Kriegman’s team at Northwestern University has been hard at work on a similarly mind-bending project meshing artificial intelligence, evolutionary design, and robotics.

[Related: Meet xenobots, tiny machines made out of living parts.]

As detailed in a new paper published earlier this month in the Proceedings of the National Academy of Sciences, researchers recently tasked an AI model with a seemingly straightforward prompt: design a robot capable of walking across a flat surface. Although the program delivered original, working examples within literal seconds, the new robots “[look] nothing like any animal that has ever walked the earth,” Kriegman said in Northwestern’s October 3 writeup.

And judging from video footage of the purple multi-“legged” blob-bots, it’s hard to disagree:

After offering their prompt to the AI program, the researchers simply watched it analyze and iterate upon a total of nine designs. Within just 26 seconds, the artificial intelligence managed to fast-forward past billions of years of natural evolution, settling on legged movement as the most effective method of mobility. From there, Kriegman’s team imported the final schematics into a 3D printer, which molded a jiggly, soap-bar-sized block of silicone imbued with pneumatically actuated musculature and three “legs.” Repeatedly pumping air in and out of the musculature made the robot’s limbs expand and contract, producing movement. During testing, the robot could walk half its body length per second—roughly half as fast as the average human stride.

“It’s interesting because we didn’t tell the AI that a robot should have legs,” Kriegman said. “It rediscovered that legs are a good way to move around on land. Legged locomotion is, in fact, the most efficient form of terrestrial movement.”

[Related: Disney’s new bipedal robot could have waddled out of a cartoon.]

If all this weren’t impressive enough, the process—dubbed “instant evolution” by Kriegman and colleagues—took place entirely on a “lightweight personal computer,” not a massive, energy-intensive supercomputer requiring huge datasets. According to Kriegman, previous AI-generated evolutionary bot designs could take weeks of trial and error on high-powered computing systems.
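
The underlying recipe (propose a design, simulate it, keep improvements) can be caricatured in a few lines. In this invented toy, a “design” is just a vector of leg lengths and the fitness function is a stand-in for a physics simulator; nothing below reflects the team’s actual geometry, simulator, or algorithm.

```python
import random

def fitness(design):
    """Toy stand-in for a physics simulator scoring walking speed."""
    evenness = min(design) - max(design)            # penalize lopsided legs
    sizing = -abs(sum(design) / len(design) - 0.8)  # prefer medium-length legs
    return evenness + sizing

def mutate(design, rng):
    """Randomly nudge one leg length to create a new candidate design."""
    child = design[:]
    i = rng.randrange(len(child))
    child[i] = max(0.0, child[i] + rng.uniform(-0.2, 0.2))
    return child

rng = random.Random(42)
design = [rng.random() for _ in range(3)]  # three candidate "legs"
for _ in range(9):                         # echoing the paper's nine iterations
    child = mutate(design, rng)
    if fitness(child) > fitness(design):   # keep whichever design "walks" better
        design = child

print("Evolved leg lengths:", [round(leg, 2) for leg in design])
```

Because each candidate is evaluated in simulation rather than in hardware, iterations take fractions of a second, which is what makes “instant evolution” feasible on an ordinary PC.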

“If combined with automated fabrication and scaled up to more challenging tasks, this advance promises near-instantaneous design, manufacture, and deployment of unique and useful machines for medical, environmental, vehicular, and space-based tasks,” Kriegman and co-authors wrote in their abstract.

“When people look at this robot, they might see a useless gadget,” Kriegman said. “I see the birth of a brand-new organism.”

Futuristic aircraft and robotic loaders dazzled at a Dallas tech summit https://www.popsci.com/technology/up-summit-2023-aircraft-equipment/ Thu, 12 Oct 2023
This bizarre-looking flying machine is an ultralight aircraft called the Black Fly, and it holds precisely one person. The company that makes it, Pivotal, recently changed their name from Opener. They plan to start selling a similar model to this one, called Helix, which will cost $190,000. The operator doesn’t need to be a pilot, and the small aircraft also has an emergency parachute. The eight propellers and two wings allow it to fly, and it can travel for about 20 miles or 20 minutes. Rob Verger

Check out these photos of cargo drones, electric flying machines, Army gear, and remote-controlled construction equipment at a Texas event.

Last week at a ranch outside Dallas, Texas, hundreds of people gathered to hobnob and discuss topics like transportation, aviation, and drones. Some were clad in cowboy hats. The event, called the UP.Summit, included investors, politicians, business leaders, and representatives from large companies like Airbus, Bell, and Boeing, as well as relatively newer players like Beta Technologies and Joby Aviation that are working on electric aircraft.

On display were gear and hardware from companies like Wisk, Zipline, and Jedsy.

Take a look at some of the flying machines and other gadgets and equipment that were at the event, which is put on by investment firm UP.Partners.

This helicopter-like prototype aircraft is called a Volocopter, and it holds one person. Up top are 18 all-electric propellers mounted on a ring that’s about 26 feet in diameter. It can fly for about 20 minutes and has a range of about 11 or 12 miles. Rob Verger
The CEO of Bulgaria-based Dronamics, Svilen Rangelov, tells PopSci that this aircraft is basically a “flying delivery van.” The drone has a wingspan of about 50 feet, measures about 25 feet long, and is called the Black Swan, even though it’s white. Rangelov says that it can carry about 770 pounds of packages a distance of some 1,550 miles, and that ground-based pilots operate or oversee the aircraft as it flies. The company plans to start operating delivery flights in Greece early next year. (The aircraft in the photo is a replica and can’t actually fly.) Rob Verger
This piece of construction equipment is a John Deere wheel loader, but on top of the cab is special gear from a company called Teleo that allows the machine to be remotely operated from large distances. Popular Science had the chance to control a piece of construction equipment called a compact track loader in California from a base station in Texas, and observed a Teleo employee at the same Texas station operate a different large construction vehicle—a Komatsu WA500-8 wheel loader—in Oulu, Finland. Rob Verger
This small robotic helicopter is roughly 22 feet long, 7.5 feet high, and is called the Mosquito. It’s a development aircraft for a company called Rain that’s working on software to snuff out wildfires early. “We’re building technology to stop wildfires before they grow out of control, when they’re the size of a single tree, not when they’re the size of a warzone,” says Maxwell Brodie, the CEO of Rain. They’re collaborating with Sikorsky, which has already developed the tech for a Black Hawk helicopter to be able to fly itself. Brodie says their plan is to eventually pre-position autonomous, uncrewed helicopters (big ones like Black Hawks, not this Mosquito) with their software so they can tackle wildfires with a quickness when they’re small. Rob Verger
The goggle-like pieces of gear on top of the backpacks are the latest iteration—version 1.2—of the Army’s IVAS (Integrated Visual Augmentation System), which has been a challenging technology to get right and has a history of causing issues like nausea. The goal is to give a soldier a head-up display that can show a compass heading, map, or other information right in front of their eyes. Think of them as augmented reality goggles for soldiers that continue to be a work in progress; they’re made by Microsoft. Rob Verger
This is the tail rotor of an Airbus H160 helicopter. Notice how it’s tilted, or canted, ever so slightly? The 10-degree tilt gives the helicopter a tiny bit of lift—about 1 percent. (The vast majority comes from the main rotor, up top.) While some tail rotors just have blades that spin freely in the air, the ones that are enclosed like this are called Fenestrons. Rob Verger
Like the uncrewed flying machine from Dronamics, this drone’s sole purpose is to carry cargo. But unlike the Dronamics vehicle, it can take off and land vertically by using eight electric motors and propellers. (It has another four props for forward flight.) It’s also hybrid electric—an onboard engine and generator create the electricity the system needs. “Jet fuel goes in, 700 volts of electric power comes out, and that electrical power drives the propulsion, and charges the onboard battery,” explains David Merrill, the CEO and cofounder of the company. The drone, called the Chaparral, carries cargo in the canoe-like container below it. Merrill says that its range is about 300 miles with a 300-pound payload. They’re working with the Air Force and FedEx. (The drone in the photograph is a full-sized replica of the real thing.) Rob Verger

Disney’s new bipedal robot could have waddled out of a cartoon https://www.popsci.com/technology/disney-robot-cute-animation/ Tue, 10 Oct 2023 18:00:00 +0000 https://www.popsci.com/?p=578352
Creating real-world robots that have the same magnetism as our favorite animated characters is no simple task. Walt Disney Imagineering/YouTube

Its only job (for now) is to be absolutely adorable.

Some robots are cuter than others—but Disney may have just revealed a contender for the most adorable yet. Last week at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Detroit, a team of researchers from Disney Research Studios in Zurich revealed a charismatic, child-sized bipedal bot that looks like a cross between a cleaned-up WALL-E and a baby chick. With stubby legs, a box-like head, and wiggly antennae, it doesn’t need to do much to look loveable.

But this little robot packs plenty of personality into the way it moves—that little boxy head has four degrees of freedom, according to IEEE Spectrum, meaning it can look up, down, and around, and tilt in a perplexed manner. Its five-degree-of-freedom legs and hips allow it to balance and waddle around indoors or out, and even catch itself when given a playful shove.

“Most roboticists are focused on getting their bipedal robots to reliably walk,” Disney research scientist Morgan Pope tells IEEE Spectrum. “At Disney, that might not be enough—our robots may have to strut, prance, sneak, trot, or meander to convey the emotion that we need them to.”

[Related: Why humans feel bad for awkward robots.]

While Disney has long been one of the biggest names in animation, creating real-world characters that have the same magnetism as our favorite movie characters is complicated—after all, animation tools don’t always play fair with the laws of physics, team lead and research scientist Moritz Bächer added.

Enter a reinforcement learning-based pipeline that helps bring together animation magic and real-world physicality. The system is highly tunable, and can reportedly train the robot on a new behavior using a single PC. These behaviors can be tweaked, and essentially allow the mostly 3D-printed robot to handle itself in public and stay in character. Additionally, this process opens up a whole new world of possibilities for making new robotic characters with different personalities, legs, arms, or other components.

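Disney hasn’t released the pipeline’s code, but the basic idea—reward the policy for matching an animator’s reference motion while staying balanced—can be sketched in a few lines. The snippet below is a minimal illustration under those assumptions; the joint count, reward weights, and stand-in “animation clip” are all hypothetical, not Disney’s actual system.

```python
# Hypothetical sketch of an animation-tracking RL reward, NOT Disney's actual code.
# Assumes a simulator that exposes joint positions and a reference animation clip.
import numpy as np

def imitation_reward(joint_pos, ref_pos, torso_tilt, w_track=0.8, w_balance=0.2):
    """Reward = stay close to the animator's pose while staying upright."""
    tracking = np.exp(-np.sum((joint_pos - ref_pos) ** 2))  # 1.0 = perfect match
    balance = np.exp(-torso_tilt ** 2)                      # 1.0 = perfectly level
    return w_track * tracking + w_balance * balance

# Toy rollout showing how the reward would accumulate during training.
rng = np.random.default_rng(0)
total = 0.0
for step in range(100):
    ref = np.sin(np.linspace(0, np.pi, 9) + 0.05 * step)   # stand-in animation clip
    joints = ref + rng.normal(scale=0.1, size=9)           # noisy policy output
    total += imitation_reward(joints, ref, torso_tilt=rng.normal(scale=0.1))
print(f"episode return: {total:.1f}")
```
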
[Related: Robotic ‘Super Monster Wolves’ are guarding Japanese towns against bears.]

These kinds of developments are not only fun, but could also prove useful as humans and robots increasingly find themselves in closer quarters. Amazon has been playing around with automation for over a decade, and robots are finding their way into healthcare, conservation, and even into our burrito bowls. The team at Disney argues that a robot that can show you a little bit of emotion or intent can go a long way in bridging the gap between people and potential new robot friends.

College students invented an easy device for cerebral palsy patients to drink on their own https://www.popsci.com/technology/robocup-cerebral-palsy/ Mon, 09 Oct 2023 16:00:00 +0000 https://www.popsci.com/?p=577668
Gary Lynn demonstrates the RoboCup. Brandon Martin/Rice University

Two undergraduates worked alongside disability advocate Gary Lynn to create the open source 'RoboCup.'

“Are you drinking enough water?”

The question is so ubiquitous that it’s become meme canon in recent years. But what may be an annoying reminder to one person is often a logistical challenge for people dealing with mobility issues like cerebral palsy (CP). After learning about the potential physical hurdles involved in staying hydrated, two undergraduate engineering students at Rice University set out to design a robotic tool to help disabled users easily access their drinks as needed. The result, appropriately dubbed “RoboCup,” is not only a simple, relatively easy-to-construct device—it’s one whose plans are already available to anyone online for free.

According to a recent university profile, Thomas Kutcher and Rafe Neathery began work on their invention after being approached by Gary Lynn, a local Houstonian living with CP who oversees a nonprofit dedicated to raising awareness for the condition. According to Kutcher, a bioengineering major, their RoboCup will hopefully remove the need for additional caregiver aid and thus “grant users greater freedom.”

[Related: How much water should you drink in a day?]

RoboCup was by no means perfect from the outset, and the undergraduates reportedly went through numerous iterations before settling on their current design. To optimize the tool to help as many people as possible, Kutcher and Neathery spoke with numerous caregiving and research professionals about how best to improve their schematics.

“They really liked our project and confirmed its potential, but they also pointed out that in order to reach as many people as possible, we needed to incorporate more options for building the device, such as different types of sensors, valves and mechanisms for mounting the device on different wheelchair types,” Kutcher said in their October 6 profile.

The biggest challenge, according to the duo, was balancing simplicity against functionality and durability. In the end, the pair swapped out an early camelback-style version for a mounted cup-and-straw design, which is reportedly both more aesthetically pleasing to users and less intrusive.

In a demonstration video, Lynn is shown activating a small sensor near his left hand, which automatically pivots an adjustable straw towards his mouth. He can then drink as much as he wants, then alert the sensor again to swivel the straw back to a neutral position.

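The article doesn’t include the RoboCup’s control code, but the behavior in the demonstration amounts to a simple toggle: each sensor pulse swings the straw between a parked position and the user’s mouth. Here is a minimal sketch of that logic, assuming a hypothetical servo interface—the angles and class names are placeholders, not the students’ actual firmware.

```python
# Minimal sketch of the straw-toggle behavior, with hypothetical hardware hooks.
import time

NEUTRAL_ANGLE = 0   # straw parked out of the way (placeholder value)
DRINK_ANGLE = 75    # straw pivoted toward the user's mouth (placeholder value)

class StrawController:
    def __init__(self, servo):
        self.servo = servo
        self.at_mouth = False

    def on_sensor_pulse(self):
        """Each pulse from the hand sensor toggles the straw position."""
        self.at_mouth = not self.at_mouth
        self.servo.set_angle(DRINK_ANGLE if self.at_mouth else NEUTRAL_ANGLE)

class FakeServo:  # stand-in so the sketch runs without hardware
    def set_angle(self, angle):
        print(f"servo -> {angle} degrees")

controller = StrawController(FakeServo())
controller.on_sensor_pulse()  # straw swings to the mouth
time.sleep(0.1)
controller.on_sensor_pulse()  # straw returns to neutral
```
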
Lynn, who tested the various versions of RoboCup, endorsed the device’s ability to offer disabled users more independence in their daily lives, and believes that “getting to do this little task by themselves will enhance the confidence of the person using the device.”

Initially intended to be just a single-semester project, Kutcher and Neathery now plan to continue refining the RoboCup, including investigating ways it could be adapted for people dealing with other forms of mobility issues. In the meantime, the RoboCup is entered in World Cerebral Palsy Day’s “Remarkable Designa-thon,” which promotes new products and services meant to help those with CP. And, as it just so happens, voting is open to the public from October 6-13.

Watch robot dogs train on obstacle courses to avoid tripping https://www.popsci.com/technology/dog-robot-vine-course/ Fri, 06 Oct 2023 18:00:00 +0000 https://www.popsci.com/?p=577508
Better navigation of complex environments could help robots walk in the wild. Carnegie Mellon University

Four-legged robots have a tough time traipsing through heavy vegetation, but a new stride pattern could help.

Four-legged robots can pull off a lot of complex tasks, but there’s a reason you don’t often see them navigating “busy” environments like forests or vine-laden overgrowth. Despite all their abilities, most on-board AI systems remain pretty bad at responding to all those physical variables in real-time. It might feel like second nature to us, but it only takes the slightest misstep in such situations to send a quadrupedal robot tumbling.

After subjecting their own dog bot to a barrage of obstacle course runs, however, a team at Carnegie Mellon University’s College of Engineering is now offering a solid step forward, so to speak, for robots deployed in the wild. According to the researchers, teaching a quadrupedal robot to reactively retract its legs while walking provides the best gait for both navigating around obstacles and untangling from them.

[Related: How researchers trained a budget robot dog to do tricks.]

“Real-world obstacles might be stiff like a rock or soft like a vine, and we want robots to have strategies that prevent tripping on either,” Justin Yim, a University of Illinois Urbana-Champaign engineering professor and project collaborator, said in CMU’s recent highlight.

The engineers compared multiple stride strategies on a quadrupedal robot while it tried to walk across a short distance interrupted by multiple, low-hanging ropes. The robot quickly entangled itself while high-stepping, or walking with its knees angled forward, but retracting its limbs immediately after detecting an obstacle allowed it to smoothly cross the stretch of floor.

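CMU hasn’t published the controller itself, but the retraction strategy can be expressed as a swing-phase reflex: if the leg meets unexpected resistance mid-swing, pull it in and carry on. The following is a simplified sketch of that idea with stand-in hardware interfaces—the threshold and all class names are assumptions, not the team’s actual code.

```python
# Simplified sketch of a reactive leg-retraction reflex, not CMU's controller.

CONTACT_THRESHOLD = 5.0  # newtons; hypothetical force that signals a snag

def swing_leg(leg, foot_force_sensor):
    """Advance the leg; retract and retry if it snags on an obstacle."""
    for phase in range(10):                      # discretized swing trajectory
        leg.step_forward(phase)
        if foot_force_sensor.read() > CONTACT_THRESHOLD:
            leg.retract()                        # pull the limb in toward the body
            leg.step_forward(phase)              # resume the swing from the same phase
    leg.plant()

class FakeLeg:  # stand-ins so the sketch runs without a robot
    def step_forward(self, phase): print(f"swing phase {phase}")
    def retract(self): print("  snag detected -> retracting")
    def plant(self): print("foot planted")

class FakeSensor:
    def __init__(self): self.t = 0
    def read(self):
        self.t += 1
        return 8.0 if self.t == 4 else 1.0       # simulate one rope snag mid-swing

swing_leg(FakeLeg(), FakeSensor())
```
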
“When you take robots outdoors, the entire problem of interacting with the environment becomes exponentially more difficult because you have to be more deliberate in everything that you do,” David Ologan, a mechanical engineering master’s student, told CMU. “Your system has to be robust enough to handle any unforeseen circumstances or obstructions that you might encounter. It’s interesting to tackle that problem that hasn’t necessarily been solved yet.”

[Related: This robot dog learned a new trick—balancing like a cat.]

Although wheeled robots may still prove more suited for urban environments, where the ground is generally flatter and infrastructures such as ramps are more common, walking bots could hypothetically prove much more useful in outdoor settings. Researchers believe integrating their reactive retraction response into existing AI navigation systems could help robots during outdoor search-and-rescue missions. The newly designed daintiness might also help quadrupedal robots conduct environmental surveying without damaging their surroundings.

“The potential for legged robots in outdoor, vegetation-based environments is interesting to see,” said Ologan. “If you live in a city, a wheeled platform is probably a better option… There is a trade-off between being able to do more complex actions and being efficient with your movements.”

How researchers trained a budget robot dog to do tricks https://www.popsci.com/technology/parkour-algorithm-robodog/ Thu, 05 Oct 2023 22:00:00 +0000 https://www.popsci.com/?p=577333
A robot dog doing parkour. Zipeng Fu / YouTube

A new 'parkour algorithm' teaches robodogs in virtual settings first.

While bipedal human-like androids are a staple of sci-fi movies, for many potential real world tasks, like rescuing people from burning buildings, flooded streets, or the freezing wilds, four-legged “robodogs” are better. In a new paper due to be presented at the Conference on Robot Learning (CoRL) next month in Atlanta, researchers at Stanford University and Shanghai Qi Zhi Institute have proposed a novel, simplified machine learning technique that allows them to train a vision-based algorithm that enables (relatively) cheap, off-the-shelf robots to climb, leap, crawl, and run around the real world. As the researchers claim, they can do “parkour” all by themselves.

Traditionally, teaching robots to navigate the world has been an expensive challenge. Boston Dynamics’ Atlas robots can dance, throw things, and parkour their way around complex environments, but they are the result of more than a decade of DARPA-funded research. As the researchers explain in the paper, “the massive engineering efforts needed for modeling the robot and its surrounding environments for predictive control and the high hardware cost prevent people from reproducing parkour behaviors given a reasonable budget.” However, recent advances in artificial intelligence have demonstrated that training an algorithm in a computer simulation and then installing it in a robot can be a cost-effective way to train them to walk, climb stairs, and mimic animals, so the researchers set out to do the same for parkour on low-cost hardware.

The researchers used two-stage reinforcement learning to train the parkour algorithm. In the first “soft dynamics” step, the virtual robots were allowed to penetrate and collide with the simulated objects but were encouraged—using a simple reward mechanism—to minimize penetrations as well as the mechanical energy necessary to clear each obstacle and move forward. The virtual robots weren’t given any instructions—they had to puzzle out how best to move forward for themselves, which is how the algorithm learns what does and doesn’t work.

In the second “hard dynamics” fine-tuning stage, the same reward mechanism was used but the robots were no longer allowed to collide with obstacles. Again, the virtual robots had to figure out what techniques worked best to proceed forward while minimizing energy expenditure. All this training allowed the researchers to develop a “single vision-based parkour policy” for each skill that could be deployed in real robots.

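The paper describes the reward in words rather than code, but the two-stage scheme can be sketched as a single reward function whose penetration penalty only applies during the “soft dynamics” stage. The weights below are placeholders for illustration, not the authors’ values; in the actual setup the simulator, not the reward, is what forbids collisions in the second stage.

```python
# Schematic of the two-stage "soft/hard dynamics" reward, with placeholder weights.

def parkour_reward(forward_progress, mech_energy, penetration_depth, stage):
    """Stage 1 ("soft"): obstacle penetration is allowed but penalized.
    Stage 2 ("hard"): the simulator disallows collisions entirely,
    so only the progress and energy terms remain."""
    reward = 1.0 * forward_progress - 0.01 * mech_energy
    if stage == "soft":
        reward -= 0.5 * penetration_depth  # discourage passing through obstacles
    return reward

# Stage 1: the virtual robot clips through a ledge but still learns to move on.
print(parkour_reward(forward_progress=0.4, mech_energy=2.0,
                     penetration_depth=0.3, stage="soft"))
# Stage 2: the same motion, fine-tuned with collisions fully enforced.
print(parkour_reward(forward_progress=0.4, mech_energy=2.0,
                     penetration_depth=0.0, stage="hard"))
```
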
And the results were incredibly effective. Although the team was working with small robots that stand just over 10 inches tall, their relative performance was pretty impressive—especially given the simple reward system and virtual training program. The off-the-shelf robots were able to scale objects up to 15.75 inches high (1.53x their height), leap over gaps 23.6 inches wide (1.5x their length), crawl beneath barriers as low as 7.9 inches (0.76x their height), and tilt so they could squeeze through gaps a fraction of an inch narrower than their width.

According to an interview with the researchers in Stanford News, the biggest advance is that the new training technique enables the robodogs to act autonomously using just their onboard computer and camera. In other words, there’s no human with a remote control. The robots are assessing the obstacle they have to clear, selecting the most appropriate approach from their repertoire of skills, and executing it—and if they fail, they try again.

The researchers noted that the biggest limitation with their training method is that the simulated environments have to be manually designed. So, going forward, the team hopes to explore “advances in 3D-vision and graphics to construct diverse simulation environments automatically from large-scale real-world data.” That could enable them to train even more adventurous robodogs.

Of course, this Stanford team isn’t the only research group exploring robodogs. In the past year or two, we’ve seen quadrupedal robots of varying shapes and sizes that can paw open doors, climb walls and ceilings, sprint on sand, and balance along beams. But for all that, we’re still a while away from seeing rescue robodogs out in the wild. It seems labradors aren’t out of a job just yet.

See them in action below:

An ‘electronic tongue’ could help robots taste food like humans https://www.popsci.com/technology/electronic-tongue-ai-robot/ Wed, 04 Oct 2023 20:00:00 +0000 https://www.popsci.com/?p=577156
The sensor could one day help AI develop their own versions of taste palates. Das Research Lab/Penn State

A combination of ultra-thin sensors marks the first step in machines being able to mimic our tastes.

AI programs can already respond to sensory stimuli like touch, sight, smell, and sound—so why not taste? Engineering researchers at Penn State hope to one day accomplish just that, and in the process have designed an “electronic tongue” capable of detecting gas and chemical molecules with components that are only a few atoms thick. Although the device isn’t capable of “craving” a late-night snack just yet, the team is hopeful their new design could one day pair with robots to help create AI-influenced diets, curate restaurant menus, and even train people to broaden their own palates.

Unfortunately, human eating habits aren’t based solely on what we nutritionally require; they are also determined by flavor preferences. This comes in handy when our taste buds tell our brains to avoid foul-tasting, potentially poisonous foods, but it also is the reason you sometimes can’t stop yourself from grabbing that extra donut or slice of cake. This push-and-pull requires a certain amount of psychological cognition and development—something robots currently lack.

[Related: A new artificial skin could be more sensitive than the real thing]

“Human behavior is easy to observe but difficult to measure, and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that,” Saptarshi Das, an associate professor of engineering science and mechanics, said in an October 4 statement. Das is a corresponding author of the team’s findings, which were published last month in the journal Nature Communications, and helped design the robotic system capable of “tasting” molecules.

To create their flat, square “electronic gustatory complex,” the team combined chemitransistors—graphene-based sensors that detect gas and chemical molecules—with molybdenum disulfide memtransistors capable of simulating neurons. The two components worked in tandem, capitalizing on their respective strengths to simulate the ability to “taste” molecular inputs.

“Graphene is an excellent chemical sensor, [but] it is not great for circuitry and logic, which is needed to mimic the brain circuit,” said Andrew Pannone, an engineering science and mechanics grad student and study co-author, in a press release this week. “For that reason, we used molybdenum disulfide… By combining these nanomaterials, we have taken the strengths from each of them to create the circuit that mimics the gustatory system.”

When analyzing salt, for example, the electronic tongue detected the presence of sodium ions, thereby “tasting” the sodium chloride input. The design is reportedly flexible enough to apply to all five major taste profiles: salty, sour, bitter, sweet, and umami. Hypothetically, researchers could arrange similar graphene device arrays that mirror the approximately 10,000 different taste receptors located on a human tongue.

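The study describes hardware rather than software, but the final step—mapping an array’s readings onto one of the five taste profiles—can be illustrated with a toy nearest-centroid classifier. All reference values here are invented for demonstration; real chemitransistor outputs would be calibrated per device.

```python
# Toy illustration of mapping sensor-array readings to taste categories.
# The reference vectors are invented for demonstration, not measured data.
import numpy as np

TASTE_PROFILES = {          # hypothetical mean responses of a 3-sensor array
    "salty":  np.array([0.9, 0.1, 0.2]),
    "sour":   np.array([0.2, 0.8, 0.1]),
    "sweet":  np.array([0.1, 0.2, 0.9]),
    "bitter": np.array([0.5, 0.6, 0.3]),
    "umami":  np.array([0.4, 0.3, 0.6]),
}

def classify(reading):
    """Return the taste profile whose reference response is nearest."""
    return min(TASTE_PROFILES,
               key=lambda t: np.linalg.norm(reading - TASTE_PROFILES[t]))

print(classify(np.array([0.85, 0.15, 0.25])))  # -> "salty" (e.g., sodium chloride)
```
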
[Related: How to enhance your senses of smell and taste]

“The example I think of is people who train their tongue and become a wine taster. Perhaps in the future we can have an AI system that you can train to be an even better wine taster,” Das said in the statement.

This robot trio mimics the life cycle of a frog https://www.popsci.com/environment/frog-robot-trio-video/ Wed, 04 Oct 2023 14:00:00 +0000 https://www.popsci.com/?p=577051
The robots are inspired by frogs' multiple life stages. Colorado State University

Search-and-rescue operations could one day feature a fleet of frog-bots to help save the day.

New quadrupedal robots, based on years of research alongside some amphibian inspiration, could one day crawl and shimmy their way into search-and-rescue operations. As detailed in a new paper recently published in Nature Communications, the robotic trio developed by a team at Colorado State University can swim, walk, and crawl depending on their environments’ obstacles—thanks in large part to lightweight artificial muscles that don’t require heavy onboard power sources.

[Related: Four-legged dog robots could one day explore the moon.]

The new systems, which have been in development since 2017, were designed by a team led by CSU Department of Mechanical Engineering professor Jianguo Zhao, and rely on materials that change rigidity depending on temperature.

“Our embedded morphing scheme uses a lightweight artificial muscle similar to a human muscle, and it contracts when electricity is applied,” Zhao explained in the project’s October 2 announcement. “By embedding these artificial muscles in the spine of the robot or in its skin, we can achieve a variety of shape-types. Altogether, this approach offers a promising path towards developing robots that can navigate and work in difficult environments.”

Aside from the electrical properties, the robots owe their movements in large part to frogs—or, rather, frogs’ multiple life stages. “They start as tadpoles with tails for swimming before developing legs that let them jump, crawl or swim,” Zhao continued. “We take inspiration from those transformations, but achieving animal-like embedded shape morphing in robots remains challenging and is something we hope this work will continue to address.”

Judging from the video montage, it’s easy to see the frog analogy. Depending on its surroundings and terrain, the robots can curve their limbs to “swim,” then adjust them accordingly to scale a rocky hurdle that mimics a shoreline. On dry land, Zhao’s robots can “hop” along by repeatedly rotating their limbs 360 degrees to push forward. A third version of the robot can flatten itself to skitter through small openings, as well as hang onto a ledge to help transition across gaps.

For now, however, the robots require remote control, but future iterations could rely on sensor- and camera-based analysis of their environments for navigation, and even morph as needed to handle their surroundings.

Robotic ‘Super Monster Wolves’ are guarding Japanese towns against bears https://www.popsci.com/technology/robot-wolves-guard-bear/ Tue, 03 Oct 2023 21:00:00 +0000 https://www.popsci.com/?p=576879
It may not look like a real wolf to you, but it does the trick against boars and bears. Wolf Kamuy

First introduced to combat invasive wild boars, the robo-wolf could now help deter wandering black and brown bears, experts believe.

Stories about solar-powered robotic wolves first surfaced back in 2017 after Japanese researchers began testing prototypes to combat wild boars’ devastating encroachment into farmlands. Since then, a company called Wolf Kamuy has expanded sales of its sentry products featuring menacing fangs, fur, flashing red LED “eyes,” and a head capable of shaking side to side while emitting a 90-decibel howl. But boars aren’t the only problem plaguing rural Japanese communities. According to recent reports, Wolf Kamuy is now offering many of its faux wolves as bear deterrents.

[Related: How to watch Alaska’s fat bears.]

It turns out the “Super Monster Wolf” isn’t just effective at protecting farmers’ crops—it’s also pretty good at protecting the farmers themselves. As reported October 1 via the BBC, bears are an increasingly difficult, sometimes even deadly nuisance in many areas of Japan thanks to a combination of serious factors, including climate change, deforestation, and urban expansion. What’s more, bear populations in regions such as Hokkaido appear to actually be increasing as Japan faces an aging population and declining birth rates. According to the BBC, some researchers estimate there are over 22,000 bears around Hokkaido. Because of all this, the region has recorded at least 150 bear attacks over the past six decades—with four fatalities in 2021 alone. Meanwhile, bears continue to wander into more crowded towns and cities bordering wildlife areas.

Enter: the Super Monster Wolf. By installing the guard bots in urban locales, experts hope to deter bears from wandering into populated areas where they could harm both humans and themselves. Researchers previously estimated that a robo-wolf’s howls effectively deter bears across an area of roughly one square kilometer (about 0.39 square miles) around its installation—arguably better than many electric fence perimeters. With strategic placement, Super Monster Wolves could help elderly communities, and protect the bears.

Of course, humanity cannot solely rely on an army of robot wolves to protect us from bear attacks. Bears (not to mention countless other species) face immense existential threats in the face of ongoing climate change calamities, and it’s not the bears’ fault they are increasingly desperate to find food sources. The best remedy, therefore, is to continue focusing on climate solutions like conservation, renewable energy, and sustainable urban planning, rather than stopgaps like the (admittedly rad) Super Monster Wolf.

Watch Chipotle’s latest robot prototype plunk ingredients into a burrito bowl https://www.popsci.com/technology/chipotle-burrito-bowl-salad-robot/ Tue, 03 Oct 2023 12:00:00 +0000 https://www.popsci.com/?p=576646
Chipotle also announced an avocado-pitting robot earlier this year. Chipotle

Human workers will still have to add the guacamole.

Back in July, Chipotle revealed the “Autocado”—an AI-guided avocado-pitting robot prototype meant to help handle America’s insatiable guacamole habit while simultaneously reducing food waste. Today, the fast casual chain announced its next automated endeavor—a prep station capable of assembling entrees on its own.

[Related: Chipotle is testing an avocado-pitting, -cutting, and -scooping robot.]

According to the company’s official reveal this morning, its newest robotic prototype—a collaboration with the food-service automation startup Hyphen—creates virtually any combination of available base ingredients for Chipotle’s burrito bowls and salads underneath human employees’ workspace. Meanwhile, staff are reportedly freed to focus on making other, presumably more structurally complex and involved dishes such as burritos, quesadillas, tacos, and kid’s meals. Watch the robot prototype plop food into little piles in the bowl under the workspace here:

As orders arrive via Chipotle’s website, app, or a third-party service like UberEats, burrito bowls and salads are automatically routed within the makeline, where an assembly system passes dishes beneath the various ingredient containers. Precise portions are then doled out accordingly, after which the customer’s order surfaces via a small elevator system on the machine’s left side. Chipotle employees can then add any additional chips, salsas, and guacamole, as well as an entree lid, before sending off the orders for delivery.

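Chipotle hasn’t detailed the control software, but the workflow described above—route digital bowl and salad orders down the line, dispense each base ingredient in turn, then raise the finished dish to staff—maps onto a simple queue-driven pipeline. A toy sketch, with every station name assumed:

```python
# Toy sketch of the digital-makeline flow described above; stations are assumed.
from collections import deque

STATIONS = ["rice", "beans", "protein", "salsa_base"]  # hypothetical dispenser order

def run_makeline(orders):
    queue = deque(orders)
    while queue:
        order = queue.popleft()
        if order["type"] not in ("bowl", "salad"):
            print(f"{order['id']}: routed to the human line")  # burritos, tacos, etc.
            continue
        for station in STATIONS:
            if station in order["ingredients"]:
                print(f"{order['id']}: dispensing {station}")
        print(f"{order['id']}: raised to staff for lids, chips, and guac")

run_makeline([
    {"id": "A1", "type": "bowl", "ingredients": {"rice", "protein"}},
    {"id": "A2", "type": "burrito", "ingredients": {"rice", "beans"}},
])
```
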
[Related: What robots can and can’t do for a restaurant.]

Chipotle estimates around 65 percent of all its digital orders are salads and burrito bowls, so its so-called “cobot” (“collaborative” plus “robot”) could hypothetically handle a huge portion of existing kitchen prep. The automated process may also offer more accurate orders, the company states.

Labor advocates frequently voice concern about automation and its effect on human jobs. And Chipotle isn’t the only chain in question—companies like Wendy’s and Panera continue to experiment with their own automation plans. Curt Garner, Chipotle’s Chief Customer and Technology Officer, described the company’s long-term goal of having the automated digital makeline “be the centerpiece of all our restaurants’ digital kitchens.”

For now, however, the new burrito bowl bot can only be found at the Chipotle Cultivate Center in Irvine, California—presumably alongside the Autocado.

This gigantic mech suit can be yours for $3 million https://www.popsci.com/technology/archax-mech-suit-robot/ Mon, 02 Oct 2023 15:00:00 +0000 https://www.popsci.com/?p=576477
The Archax has two transport modes, and is named after the archaeopteryx. YouTube

The 15-foot-tall Archax is first and foremost meant to be very 'cool.'

Five mech suits capable of morphing between robotic and vehicular modes are now available for pre-order from a Japanese startup overseen by 25-year-old inventor Ryo Yoshida. At nearly 15 feet tall and weighing in at around 3.5 tons, one of Tsubame Industries’ “Archax” joyrides can be all yours—if you happen to have an extra $3 million burning a hole in your pocket.

News of the production update came courtesy of Reuters on Monday, which spoke with Yoshida about his thought process behind constructing the futuristic colossus, which gets its name from the famous winged dinosaur archaeopteryx.

[Related: Robotic exoskeletons are storming out of sci-fi and onto your squishy human body.]

“Japan is very good at animation, games, robots and automobiles so I thought it would be great if I could create a product that compressed all these elements into one,” he said at the time. “I wanted to create something that says, ‘This is Japan.’”

To pilot the steel- and iron-framed Archax, individuals must first climb a small ladder and enter a cockpit situated within the robot’s chest. Once sealed inside, a system of nine cameras connected to four view screens allows riders to see the world around them alongside information such as battery life, speed, tilt angle, and positioning. Depending on the user’s desire, Archax can travel upwards of 6 mph in one of two setups—a four-wheeled upright robotic mode, and a more streamlined vehicle mode in which the cockpit reclines 17 degrees while the chair remains upright. Meanwhile, a set of joysticks alongside two floor pedals controls the mech suit’s movement, as well as its arms and hands.

Unlike countless other robotic creations on the market, however, Archax isn’t designed for rigorous real-world encounters. It’s currently meant to be, per the company’s own description, “cool.”

But that doesn’t mean Yoshida and his team at Tsubame don’t hope to build future Archax models better equipped for real-world uses. According to the inventor, such pilotable robotic suits could find applications within search-and-rescue operations, disaster relief, and even the space industry. For now, however, Tsubame sounds perfectly satisfied with the machine’s luxury toy status.

“Archax is not just a big robot that you can ride inside. A person can climb into the cockpit and control the vehicle at will. Each part moves with sufficient speed, rigidity, and power,” reads the product’s description.

“And it’s cool,” Tsubame Industries reiterates.

A new drone might help cops stop high-speed car chases https://www.popsci.com/technology/skydio-x10-cop-drone/ Tue, 26 Sep 2023 17:00:00 +0000 https://www.popsci.com/?p=574631
Skydio's newest drone is designed specifically to act as a remote controlled first responder. Skydio

Skydio wants its 'intelligent flying machines' to become part of law enforcement's 'basic infrastructure.' Little regulation stands in their way.

A new high-tech surveillance drone developed by California-based startup Skydio includes infrared sensors, cameras capable of reading license plates as far as 800 feet away, and the ability to reach top speeds of 45 mph. Skydio hopes “intelligent flying machines”—like its new X10 drone—will become part of the “basic infrastructure” supporting law enforcement, government organizations, and private businesses. Such an infrastructure is already developing across the country. Meanwhile, critics are renewing their privacy and civil liberties concerns about what they believe remains a dangerously unregulated industry.

Skydio first unveiled its new X10 on September 20, which Wired detailed in a new rundown on Tuesday. The company’s latest model is part of a push to “get drones everywhere they can be useful in public safety,” according to CEO Adam Bry during last week’s launch event. Prior to the X10’s release, Skydio had reportedly sold over 40,000 other “intelligent flying machines” to more than 1,500 clients over the past decade, including the US Army Rangers and the UK’s Ministry of Defense. Skydio execs, however, openly express their desire to continue expanding drone adoption even further via a self-explanatory concept deemed “drone as first responder” (DFR).

[Related: The Army skips off-the-shelf drones for a new custom quadcopter.]

In such scenarios, drones like the X10 can be deployed in less than 40 seconds by on-the-scene patrol officers from within a backpack or car trunk. From there, however, the drones can be piloted via onboard 5G connectivity by operators at remote facilities and command centers. Skydio believes drones like its X10 are equipped with enough cutting edge tools to potentially even aid in stopping high-speed car chases.

To allow for this kind of support, however, drone operators are increasingly required to obtain clearance from the FAA for what are known as beyond-visual-line-of-sight (BVLOS) flights. Such a greenlight allows drone pilots to control fleets from centralized locations instead of needing to remain onsite. BVLOS clearances are currently major goals for retail companies like Walmart and Amazon, as well as shipping giants like UPS, which will need such certifications to deliver to customers at logistically necessary distances. According to Skydio, the company has already supported customers in “getting over 20 waivers” for BVLOS flight, although its X10 announcement does not provide specifics as to how.

Man in combat gear holding an X10 drone at night. Credit: Skydio

Drone usage continues to rise across countless industries, both commercial and law enforcement related. As the ACLU explains, drones’ uses in scientific research, mapping, and search-and-rescue missions are undeniable, but “deployed without proper regulation, drones capable of monitoring personal conversations would cause unprecedented invasions of our privacy rights.”

Meanwhile, civil rights advocates continue to warn that there is very little such oversight for the use of drones around the public during events such as political demonstrations and protests, or even simply large gatherings and music festivals.

“Any adoption of drones, regardless of the time of day or visibility conditions when deployed, should include robust policies, consideration of community privacy rights, auditable paper trails recording the reasons for deployment and the information captured, and transparency around the other equipment being deployed as part of the drone,” Beryl Lipton, an investigative researcher for the Electronic Frontier Foundation, tells PopSci.

“The addition of night vision capabilities to drones can enable multiple kinds of 24-hour police surveillance,” Lipton adds.

Despite Skydio’s stated goals, critics continue to push back against claims that such technology benefits the public, arguing that it instead violates privacy rights while disproportionately targeting marginalized communities. Organizations such as the New York Civil Liberties Union cite police drones deployed at protests across 15 cities in the wake of the 2020 murder of George Floyd.

[Related: Here is what a Tesla Cybertruck cop car could look like]

Skydio has stated in the past it does not support weaponized drones, although as Wired reports, the company maintains an active partnership with Axon, makers of police tech like Tasers. Currently, Skydio is only integrating its drone fleets with Axon software sold to law enforcement for evidence management and incident responses.

Last year, Axon announced plans to develop a line of Taser-armed drones shortly after the Uvalde school shooting massacre. The news prompted near-immediate backlash, causing Axon to backtrack less than a week later—but not before the majority of the company’s AI Ethics board resigned in protest.

Update 09/26/23 1:25pm: This article has been updated to include a response from the Electronic Frontier Foundation.

This massive armored vehicle has a giant plow for clearing Russian mines https://www.popsci.com/technology/mine-clearing-tank/ Fri, 22 Sep 2023 13:36:50 +0000 https://www.popsci.com/?p=573451
This is a Mine-Clearing Tank. Pearson Engineering

Eight machines like this one are already in Ukraine to do the dangerous work of dealing with minefields.

At the DSEI international arms show held in London earlier this month, German defense company FFG showed off a tank-like vehicle it had already sent to Ukraine. The Mine Clearing Tank, or MCT, is a tracked and armored vehicle, based on the WISENT 1 armored platform, designed specifically to clear minefields and protect the vehicle’s crew while doing so. As Russia’s February 2022 invasion of Ukraine continues well into its second year, vehicles like this one show both what is needed there now and what tools may ultimately be required for Ukraine to reclaim Russian-occupied territory.

The current shape of the war in Ukraine is largely determined by minefields, trenches, and artillery. Russia holds long defensive lines, where mines guard the approaches to trenches, and trenches protect soldiers as they shoot at people and vehicles. Artillery, in turn, allows Russian forces to strike at Ukrainian forces from behind these defensive lines, making both assault and getting ready for assault difficult. This style of fortification is hardly unique; it’s been a feature of modern trench warfare since at least World War I. 

Getting through defensive positions is a hard task. On September 20, the German Ministry of Defense posted a list of the equipment it has so far sent to Ukraine. The section on “Military Engineering Capabilities” covers an extensive range of tools designed to clear minefields. It includes eight mine-clearing tanks of the WISENT 1 variety, 11 mine plows that can go on Ukraine’s Soviet-pattern T-72 tanks, three remote-controlled mine-clearing robots, 12 Ahlmann backhoe loaders designed for mine clearing, and the material needed for explosive ordnance disposal.

The MCT WISENT 1 weighs 44.5 tons, a weight that includes its heavy armor, crew protection features, and the powerful engines it needs to lift and move the vehicle’s mine-clearing plow. The plow alone weighs 3.5 tons and is wider than the vehicle itself.

“During the clearing operation, the mines are lifted out of the ground and diverted via the mine clearing shield to both sides of the lane, where they are later neutralized by EOD forces. If mines explode, ‘only’ the mine clearance equipment will be damaged. If mines slip through and detonate under the vehicle, the crew is protected from serious injuries,” reports Gerhard Heiming for European Security & Technology.

One of the protections for the crew is anti-mine seats, designed to divert the energy from blasts away from the occupants. The role of a mine-clearing vehicle is, after all, to drive a path through a minefield, dislodging explosives explicitly placed to prevent this from happening. As the MCT WISENT 1 clears a path, it can also mark the lane it has cleared.

Enemy mine

Mines as a weapon are designed to make passage difficult, but not impossible. What makes mines so effective is that many of the techniques to clear them, and do so thoroughly, are slow, tedious, time-consuming tasks, often undertaken by soldiers with hand tools. 

“The dragon’s teeth of this war are land mines, sometimes rated the most devilish defense weapons man ever devised,” opens How Axis Land Mines Work, a story from the April 1944 issue of Popular Science. “Cheap to make, light to transport, and easy to install, it is as hard to find as a sniper, as dangerous to disarm as a commando. To cope with it, the Army Engineers have developed a corps of specialists who have one of the most nerve-wracking assignments in the book.”

The story goes on to detail anti-tank and anti-personnel mines, which are the two categories broadly in use today. With different explosive payloads and pressure triggers, the work of mine-clearing is about ensuring all the mines are swept aside, so dismounted soldiers and troops in trucks alike can have safe passage through a cleared route.

The MCT WISENT 1 builds upon lessons and technologies for mine-clearing first developed and used at scale in World War II. Even before the 2022 invasion by Russia, Ukraine had a massive mine-clearing operation, working on disposing of explosives left from World War II through to the 2014-2022 Donbass war. The peacetime work of mine clearing can be thorough and slow.

For an army on the move, and looking to break through enemy lines and attack the less-well-defended points beyond the front, the ability of an armored mine-sweeper to clear a lane can be enough to shift the tide of battle, and with it perhaps a stalled front.

Why humans feel bad for awkward robots https://www.popsci.com/technology/human-robot-embarrassment/ Wed, 20 Sep 2023 18:30:00 +0000 https://www.popsci.com/?p=572966
An awkward, grimacing smiley face. Bernard Hermant / Unsplash

Secondhand embarrassment is related to empathy.

When someone does something cringey, it’s only human nature to feel embarrassed for them. If a friend slips and falls on a wet floor, it makes sense to feel self-conscious on their behalf. It’s a sign of empathy, according to science, and it determines how people cooperate, connect, and treat one another. What happens, though, when the second person in this situation is replaced with a robot?

Experiencing secondhand embarrassment lights up areas in the human brain associated with pain and the recognition of emotions. In that vein, social anxiety is linked to heightened empathy, but also comes with a reduced capacity to actually understand the other person’s emotions, known as cognitive empathy. And of course, the more socially close and invested a person is in another, the more acutely they’ll feel this bystander discomfort. 

Interestingly, new research from Toyohashi University of Technology in Japan found that humans can have the same sort of secondhand embarrassment when they see a robot commit a social faux pas. A detailed report was published in the journal Scientific Reports last week. 

To test this phenomenon, human subjects were immersed in a virtual environment where both human and robot avatars were present. The researchers then put these avatars, both the ones representing humans and the ones depicting bots, through awkward situations like stumbling in a crowd, running into a sliding door, or dancing clumsily in public. 

Researchers then measured skin conductance, or the electrical activity of the sweat glands, of the subjects. This correlates to arousal signals like stress, or other states of high emotion. Participants also filled out a questionnaire about their emotional responses to each virtual social situation. 

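The paper doesn’t publish its analysis code, but a common way to quantify signals like these is to count upward deflections that cross a threshold above the signal’s baseline. Here is a generic, simplified sketch of that idea—not the authors’ actual pipeline, which would use a proper tonic/phasic decomposition:

```python
# Generic sketch of counting skin-conductance responses; not the study's pipeline.
import numpy as np

def count_scr_events(signal, threshold=0.05):
    """Count deflections exceeding `threshold` microsiemens above the tonic level."""
    baseline = np.median(signal)            # crude estimate of the tonic level
    above = (signal - baseline) > threshold
    # An "event" is each transition from below- to above-threshold.
    return int(np.sum(above[1:] & ~above[:-1]))

rng = np.random.default_rng(1)
sig = 2.0 + 0.01 * rng.standard_normal(1000)  # synthetic resting-level signal
sig[300:330] += 0.2    # simulated arousal response during an awkward scene
sig[700:740] += 0.15   # and another
print(count_scr_events(sig))  # -> 2
```
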
[Related: Do we trust robots enough to put them in charge?]

The data indicates that humans felt secondhand embarrassment for both the human and robot avatars when they were in a socially awkward scenario, although they perceived the situation as more “real” for the human avatar compared to the robot.

Still, the team says that the results show that “humans can empathize with robots in embarrassing situations, suggesting that humans assume the robots can be aware of being witnessed and have some degree of self-consciousness based on self-reflection and self-evaluation,” they wrote in the paper. But it also matters what the robot looks like: “The appearance of the robot may affect the empathic embarrassment because humans empathize more strongly with more human-looking robots and less with more mechanical-looking robots when they are mistreated by humans.”

Previous research into this area has turned up similar themes. Last year, a study out of France found that humans would unconsciously sync their movements with those of humanoid robots in a bid to fit in socially. And imbuing robot speech with more emotional undertones makes robots more acceptable to humans.

Despite the interesting findings in this recent study, the team from Toyohashi University of Technology acknowledges that a larger sample size, as well as real-world humans and robots, would make the conclusions more convincing. 

“Our study provides valuable insights into the evolving nature of human-robot relationships. As technology continues to integrate into our daily lives, understanding the emotional responses we have towards robots is crucial,” Harin Hapuarachchi, the lead researcher on the project, said in a press release. “This research opens up new avenues for exploring the boundaries of human empathy and the potential challenges and benefits of human-robot interactions.”

What’s in the US military’s historic lost and found: nukes, jets, and drones https://www.popsci.com/technology/lost-military-f35-drones-nuclear-weapons/ Wed, 20 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=572760
An F-35B seen in South Carolina on Aug. 17, 2023. Kyle Baskin / US Marine Corps

The F-35 in South Carolina is not the first important asset to go missing for a spell.

For roughly 24 hours, between the afternoon of September 17 and the evening of September 18, the United States Marine Corps couldn’t find one of its F-35B stealth fighter jets. The pilot had ejected, but it took the military a spell to find the jet, and in the process it put out a call for the public to keep their eyes peeled for the plane. Joint Base Charleston confirmed Monday evening that a debris field was found two hours northeast of the base, believed to be the crashed plane. 

So how does the military lose a stealth jet? That’s the $100-million question. F-35 unit prices vary by model and the lot in which they are purchased; recent F-35B purchases have cost a high of $108 million per jet and a low of $78.3 million. On the other hand, F-35A models, which the Air Force flies, cost around $69.9 million now, though older lots cost up to $89.2 million.

The nature of stealth helps explain how it’s possible, in 2023, for the Department of Defense to lose track of one of its own jets, prompting a call for citizens to help search. Stealth is a technology designed to hide planes from radar, so that stealth fighters and bombers can attack buildings, ships, vehicles, and other targets in war with less fear of getting detected and shot down by enemy aircraft and anti-air missiles. To achieve this sort of radar-invisibility, stealth planes have physical shapes that reduce radar signature, along with special coatings that dampen the reflectivity of radio waves.

Because the stealth characteristics are built into jets like the F-35 series, as well as the F-22 fighter, and the B-2 and B-21 bombers, they are just harder for radars to track. One way to keep track of where planes are is a transponder, which sends out a signal announcing the aircraft’s location. Transponders are useful for commercial and military aircraft, and required for almost all flights in US skies, as they allow aircraft to avoid each other. The Washington Post reported that the F-35B’s transponder was not working at the time the pilot ejected, leading the military to ask the public for help locating the plane.

Another way to make stealth jets more visible, and to conceal the true ability of their radar-avoiding shape, is to include high-radar-visibility augmentation, as is sometimes done at air shows. The military sometimes augments the F-35's cross-section during public or semi-public flights so the jets will look different on a radar from how they would during an actual combat mission, retired Air Force General Hawk Carlisle told Defense News.

Public transponder records, as reported by the War Zone (which is owned by PopSci’s parent company, Recurrent), show the search pattern the Air Force used to try to locate the lost F-35B before finding the debris field. If other techniques were used to find the plane beyond visual search, it is likely the military will want to keep those secret, as details about how to find a stealth plane could undermine the massive investment already put into stealth jets.

Even if it briefly created a flurry of media attention, the case of the temporarily missing F-35B is just the latest incident of the US military losing control of something powerful and important. Here are several others.

Lost drones

For as long as the military has operated drones, some of those drones have gotten lost. The two instances below each bear some similarity to this week’s wild F-35 hunt.

A plane called the Kettering Bug was built during World War I as an “aerial torpedo,” or a flying uncrewed bomb that would, in the fixed trench combat of the time, travel a set distance and then shed its wings to crash into an enemy position with explosive force. The war ended before the Bug could see action, but this predecessor of both drones and cruise missiles was tested as a secret weapon in the United States. 

On October 4, 1918, the biplane bomb took off, and then flew off track. The US Army searched the area near its Dayton, Ohio launch site, asking the public if they had seen a missing plane. Several of the witnesses reported what appeared to be a plane with a drunk pilot, and the Army went along with those stories, saying the pilot had jumped out and was being treated. The plane, as an uncrewed weapon, had no human pilot on board. Rather than reveal the secret weapon, the Army let witnesses believe they had seen something other than the aerial torpedo. The Army found the wreckage of the Bug, recovered its reusable mechanical parts, and burned the wrecked fuselage on the spot.

Almost a century later in 2017, the US Army lost an RQ-7B Shadow drone, which was launched from a base in southern Arizona on January 31, then discovered over a week later on February 9, having crashed into a tree outside of Denver. The Shadow drone has a stated range of under 80 miles, though that range reflects how far it can fly while remaining in contact with the ground station used by human operators. Shadow drones can also fly for nine hours at a cruising speed of 81 mph—roughly 730 miles of flying—so the 630-mile journey was within the distance the drone could technically cover. While drones like the Shadow are programmed to search for lost communications signals, autonomous flight features mean that a failure to connect can lead to unusual journeys, like the one the Shadow took.

Lost jets

The F-35B that went missing in South Carolina is just the latest such plane to crash and require search and recovery. In November 2021, a British F-35B operating from the HMS Queen Elizabeth crashed into the Mediterranean. The pilot ejected safely, but the sunken stealth jet, once found, required a maritime salvage operation. 

Then, in January 2022, the US Navy lost an F-35C in the South China Sea. The plane approached too low on a landing, skidded across the deck, and then fell off the deck’s edge into the ocean after the pilot had ejected. The incident injured seven sailors, including the pilot.  The sunken stealth jet had to be recovered from a depth of 12,400 feet, using a specialized remotely operated vessel.

In both cases, witnesses in the general vicinity knew where the lost planes ended up, but the recoveries took on a similar sense of importance: even a crashed and sunken jet could reveal crucial details of the aircraft’s design and operation to another country, had one gotten there first.

Lost nukes

While jets are often the most expensive piece of hardware lost in a crash, there’s also the cargo to consider. In February 1958, the US Air Force lost a Mark 15 thermonuclear bomb off the coast of Tybee Island, Georgia, following a mid-air collision with an F-86 fighter jet. To date, the bomb has not been found in its watery resting place, despite extensive searching by the US Navy in the months after the incident.

In January 1961, a B-52 bomber transporting two nuclear bombs started to fall apart in the sky above North Carolina. The two bombs crashed into the ground, either as part of the plane or released independently (accounts vary), and neither bomb detonated. But both bombs did come close to detonation, as several safety triggers were activated in the fall, and the whole incident prompted a change to how easy it was to arm US nuclear bombs.

The incident over North Carolina was just one of several nuclear near-misses involving the transport of US nuclear bombs and failures of the systems around them. In January 1966, a US bomber collided with the tanker refueling it above the village of Palomares in Spain, releasing one nuclear weapon into the sea and three onto land, where two of them cracked open and dispersed their plutonium into the wind. The three bombs on land were found and recovered quickly, and the fourth bomb was recovered from the sea after an extensive underwater salvage operation. Cleanup work on the site where the bombs scattered plutonium continued into the 2010s.

The post What’s in the US military’s historic lost and found: nukes, jets, and drones appeared first on Popular Science.

Mini explosions give this little robot a big bounce https://www.popsci.com/technology/explosive-power-robot/ Fri, 15 Sep 2023 19:00:00 +0000 https://www.popsci.com/?p=570862
Tiny robot standing on perch
Miniature internal combustion engines power this small robot. YouTube

The bug-inspired bot can carry 22 times its own weight and leap almost as high as hopping insects.

The post Mini explosions give this little robot a big bounce appeared first on Popular Science.

Electrical power and battery arrays remain the go-to routes for juicing up robots, but sometimes old-school combustion can still do the trick. A team at Cornell University recently demonstrated just that idea via a new tiny robot whose small-scale actuators are fueled by miniature internal-combustion engines. Even at this minuscule scale, the bug-sized quadrupedal bot’s design allows it to launch nearly as high as many leaping insects, while also carrying and walking with a load 22 times its own weight.

As detailed in a paper published on September 14 in Science, researchers created a propulsion unit by assembling a 3D-printed combustion chamber with an inflatable elastomeric membrane, electrodes, and tiny fuel-injection tubing. When the electrodes introduce a small spark, the membrane balloons outward in just half a millisecond with 9.5 newtons of force. The process can then be repeated as quickly as 100 times per second.

“The high frequencies, speeds, and strengths allow [the] actuators to provide microrobots with locomotion capabilities that were previously available only to much larger robots,” writes Northwestern University Assistant Professor of Materials Science and Engineering Ryan Truby in a related essay within Science.

[Related: This small, squishy robot is cuter than its cockroach inspiration.]

But as IEEE Spectrum explains, even the smallest explosions can wear down or damage materials over time. Knowing this, the engineering team built the elastic membrane from flame-resistant material and added an integrated flame arrestor to control the timing and size of each little kaboom. The result is an extremely durable propulsion unit that the team estimates can operate continuously for over 750,000 cycles (roughly 8.5 hours for the robot) before any noticeable performance degradation. In video demonstrations, the team showcased their 29 mm long, 1.5 g robot vertically leaping 59 centimeters, even while carrying comparably massive amounts of weight. To “walk,” the robot fires its actuators at breakneck speed, and it turns by selectively engaging the same engines.
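
Those published figures hang together, which is worth a quick check. The sketch below runs my own arithmetic on the jump physics and the rated actuator life, with air drag neglected as a simplifying assumption; it is not the Cornell team’s analysis:

```python
import math

# Illustrative numbers pulled from the article; a rough plausibility check.
MASS_KG = 1.5e-3       # 1.5 g robot
JUMP_HEIGHT_M = 0.59   # 59 cm vertical leap
G = 9.81               # gravitational acceleration, m/s^2

# Simple ballistics (drag neglected, an assumption at this scale):
takeoff_speed = math.sqrt(2 * G * JUMP_HEIGHT_M)    # ~3.4 m/s
jump_energy_mj = MASS_KG * G * JUMP_HEIGHT_M * 1e3  # ~8.7 mJ per leap

# Rated life: 750,000 cycles over roughly 8.5 hours of operation.
avg_rate_hz = 750_000 / (8.5 * 3600)                # ~25 Hz average firing rate

print(f"Takeoff speed: {takeoff_speed:.1f} m/s")
print(f"Energy per jump: {jump_energy_mj:.1f} mJ")
print(f"Average actuation rate over rated life: {avg_rate_hz:.0f} Hz")
```

That average rate sits comfortably below the actuators’ 100 Hz maximum, consistent with the durability claim.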

Moving forward (so to speak), the team wants to hone the bot’s ability to throttle its actuators for more precise movement, as well as give it the ability to “run.” The robot is also currently tethered via power cables, so creating wireless iterations would be integral to deploying the device in real-world scenarios, such as disaster zones or other hard-to-reach environments.

“One idea we want to explore in the future is using aggregates of these small and powerful actuators as large, variable recruitment musculature in large robots,” Robert F. Shepherd, head of Cornell’s Organic Robotics Lab and study co-author, told IEEE Spectrum. “Putting thousands of these actuators in bundles over a rigid endoskeleton could allow for dexterous and fast land-based hybrid robots.”

Explosive robot muscles—what could go wrong?

The post Mini explosions give this little robot a big bounce appeared first on Popular Science.

Microflier robots use the science of origami to fall like leaves https://www.popsci.com/technology/microflier-origami-robots/ Wed, 13 Sep 2023 19:00:00 +0000 https://www.popsci.com/?p=570105
Robotic origami microflier
Researchers at the University of Washington developed small robotic devices that can change how they move through the air by 'snapping' into a folded position during their descent. Mark Stone/University of Washington

The newest origami robots can change shape within milliseconds after dropping from drones.

The post Microflier robots use the science of origami to fall like leaves appeared first on Popular Science.

Origami has inspired yet another robot—in this case, one that dynamically changes its shape after dropping from drones in order to glide through the air while collecting environmental data. As detailed via a new study published in Science Robotics, researchers at the University of Washington relied on the traditional Miura-ori folding method (itself inspired by leaves’ geometric patterns) to underpin their new “microfliers.”

According to study co-senior author Vikram Iyer, a UW assistant professor of computer science and engineering, the microfliers first fall “chaotically” from drones in an unfolded, flat state, much akin to an elm leaf’s descent. Using tiny onboard pressure sensors to measure altitude, alongside timers and Bluetooth signals, the robots then morph midair to change how airflow acts on their new structure. This allows a more stable descent, like that of a falling maple leaf.

[Related: Foldable robots with intricate transistors can squeeze into extreme situations.]

“Using origami opens up a new design space for microfliers,” Iyer said in the University of Washington’s announcement. “This highly energy efficient method allows us to have battery-free control over microflier descent, which was not possible before.”

Because of the microfliers’ light weight—about 400 milligrams, or roughly half as heavy as a nail—the robots can already travel the length of a football field when dropped from just 40 meters (131 feet) in the air. Battery-free, solar-fueled actuators kick into gear at customizable times to control how and when the robots’ shapes interact with the surrounding air, thus steering their descents. Researchers believe unfurling the bots at different times will allow for greater areas of distribution, and at just 25 milliseconds to initiate folding, the timing can be extremely precise. Although the current robots only transition in a single direction, researchers hope future versions will do so in both directions, allowing for more precise landings during turbulent weather.
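
To make the altitude-triggered fold concrete, here is a hypothetical sketch of the trigger logic. The 12-pascals-per-meter conversion is standard near-surface atmospheric physics, but the threshold, names, and decision rule are illustrative assumptions, not the UW team’s firmware:

```python
# Hypothetical fold trigger for an origami microflier. The article notes the
# real devices can fold on pressure-sensed altitude, timers, or a Bluetooth
# command; this sketch combines all three. Names and thresholds are assumed.
PA_PER_METER = 12.0        # near the surface, pressure rises ~12 Pa per meter fallen
FOLD_AFTER_DROP_M = 20.0   # assumed: snap folded partway through a 40 m drop

def altitude_drop_m(p_now_pa: float, p_release_pa: float) -> float:
    """Estimate meters fallen since release from the rise in air pressure."""
    return (p_now_pa - p_release_pa) / PA_PER_METER

def should_fold(p_now_pa: float, p_release_pa: float,
                timer_expired: bool = False, ble_command: bool = False) -> bool:
    return (altitude_drop_m(p_now_pa, p_release_pa) >= FOLD_AFTER_DROP_M
            or timer_expired or ble_command)

# Released at 100,000 Pa; reading 100,250 Pa now means ~21 m fallen -> fold.
print(should_fold(p_now_pa=100_250.0, p_release_pa=100_000.0))  # True
```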

Time lapse image of origami microflier changing shape during descent

The team believes such microfliers could be easily deployed as useful sensors during environmental and atmospheric surveying. The current models can transmit air temperature and pressure data via Bluetooth signals as far as 60 meters (196 feet) away, but researchers think both their reach and capabilities could be expanded in the future.

Origami is increasingly inspiring new, creative robots. Earlier this year, researchers at UCLA developed flexible “mechanobots” that can squeeze their way into incredibly narrow environments. Meanwhile, the folding art’s principles are showing immense potential within engineering and building advancements, such as MIT’s recent developments in origami-inspired plate lattice designs for cars, planes, and spacecraft.

The post Microflier robots use the science of origami to fall like leaves appeared first on Popular Science.

The Ascento Guard patrol robot puts a cartoonish spin on security enforcement https://www.popsci.com/technology/ascento-guard-robot/ Tue, 12 Sep 2023 18:00:00 +0000 https://www.popsci.com/?p=569688
Ascento Guard robot
The new robot literally puts a friendly face on perimeter surveillance. Ascento

A startup's new security guard bot boasts two wheels—and eyebrows.

The post The Ascento Guard patrol robot puts a cartoonish spin on security enforcement appeared first on Popular Science.

Multiple companies around the world now offer robotic security guards for property and event surveillance, but Ascento appears to be the only one, at least currently, to sell mechanical patrollers boasting eyebrows. On September 12, the Swiss-based startup announced the launch of its latest autonomous outdoor security robot, the Ascento Guard, which puts a cartoon-esque spin on security enforcement.

[Related: Meet Garmi, a robot nurse and companion for Germany’s elderly population.]

The robot’s central chassis includes a pair of circular “eye” stand-ins that blink, along with rectangular, orange hazard lights positioned as eyebrows. When charging, for example, an Ascento Guard’s eyes are “closed” to mimic sleeping, but they open as the robot takes up its patrol responsibilities. Perhaps the most distinctive design choice, though, is its agile “wheel-leg” setup, which seemingly allows for more precise movements across a variety of terrains. Showcase footage accompanying the announcement highlights the robot’s various features for patrolling “large, outdoor, private properties.” Per the company’s announcement, it already counts manufacturing facilities, data centers, pharmaceutical production centers, and warehouses as clients.

According to Ascento co-founder and CEO, Alessandro Morra, the global security industry currently faces a staff turnover rate as high as 47 percent each year. “Labor shortages mean a lack of qualified personnel available to do the work which involves long shifts, during anti-social hours or in bad weather,” Morra said via the company’s September 12 announcement. “The traditional approach is to use either people or fixed installed cameras… The Ascento Guard provides the best of both worlds.”

Each Ascento Guard reportedly requires only a few hours’ worth of setup time before becoming virtually autonomous via programmable patrol schedules. During working hours, the all-weather robot is equipped to survey perimeters at a walking speed of approximately 2.8 mph, as well as monitor for fires and break-ins via thermal and infrared cameras. Onboard speakers and microphones also allow for end-to-end encrypted two-way communications, while its video cameras can “control parking lots,” per Ascento’s announcement—video footage shows an Ascento Guard scanning car license plates, for example.

While robot security guards are nothing new by now, the Ascento Guard’s decidedly anthropomorphic design, of a sort typically reserved for elder-care and assistance robots, is certainly a new way to combat potential public skepticism, not to mention the labor and privacy concerns experts have raised about similar automated creations. Ascento’s reveal follows a new funding round backed by a host of industry heavyweights, including the European Space Agency incubator ESA BIC and Tim Kentley-Klay, founder of the autonomous taxi company Zoox.

The post The Ascento Guard patrol robot puts a cartoonish spin on security enforcement appeared first on Popular Science.

The US military’s tiniest drone feels like it flew straight out of a sci-fi film https://www.popsci.com/technology/black-hornet-drone/ Tue, 12 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=569223
the black hornet drone
The Black Hornet in flight. The wire hanging down is the aircraft's antenna. Teledyne FLIR

The Black Hornet reconnaissance drone is minuscule and highly maneuverable—and even explored the collapsed parking garage in New York City in April.

The post The US military’s tiniest drone feels like it flew straight out of a sci-fi film appeared first on Popular Science.

On April 18 in New York City, a parking garage in lower Manhattan collapsed, killing one person—the garage’s manager, Willis Moore. Much of the media coverage surrounding that event focused on a robotic dog that the New York City Fire Department used on the scene, a mechanical quadruped painted like a dalmatian and named Bergh. But another robot explored the collapsed structure that spring day—an exceptionally tiny and quiet drone flown by militaries that looks exactly like a little helicopter.

It’s called the Black Hornet. It weighs less than 1.2 ounces, takes off from its operator’s hand, and streams back video to a screen so people can see what the drone sees and make decisions before approaching a structure that might have hostile forces or other hazards inside it. 

Here’s how this 6.6-inch-long drone works, what it’s like to fly it, and how it was used that April day following the deadly structural collapse. 

black hornet drone
The drone is small enough to take off—and then finish its flight—in an operator’s hand. Rob Verger

Restaurant reconnaissance

Popular Science received a demonstration of the drone on August 10, and had the chance to fly it, in a space on the ground floor of a New York City hotel near Central Park. 

Rob Laskovich, a former Navy SEAL and the lead trainer for the Black Hornet with Teledyne FLIR, the company that makes the diminutive drone, explains that the drone’s low “noise signature” makes it virtually undetectable when it’s more than 10 feet away from people and 10 feet in the air. “It almost disappears,” he says. “And the size of this thing—it’s able to get into very tight corners.” 

Because it’s so quiet and so maneuverable, the itty bitty drone offers a way to gather information about what’s in a space a mile away or more and stream that video (at a resolution of 640 by 480 pixels) over an encrypted radio link back to the base station. This latest version of the Black Hornet also doesn’t need access to GPS to fly, meaning it can operate inside a building or in other “GPS-denied” spaces. It carries no weapons.

Laskovich removes one of the toy-sized Black Hornets from a case; there are three of them in this kit, meaning two can be charging while another one is flying. The drone has a nearly invisible wire antenna that hangs down off the back after a flick of the finger. The Black Hornet, he says, is “almost like a mini Black Hawk helicopter.” It is indeed just like a miniature helicopter; it has a top rotor to give it lift and a tail rotor to prevent it from spinning around in circles—the anti-torque system.

Mission control for the little bird involves a small non-touchscreen display and a button-filled controller designed to be used with one hand. Laskovich selects “indoor mode” for the flight. “To start it, it’s a simple twist,” he says, giving the Black Hornet a little lateral twist back and forth with his left hand. Suddenly, the top rotor starts spinning. Then he spins the tiny chopper around a bit more, “to kind of let it know where it’s at,” he says. He moves the aircraft up and down. 

“What it’s doing, it’s reading the environment right now,” he adds. “Once it’s got a good read on where it’s at, the tail rotor is going to start spinning, and the aircraft will take off.” And that’s exactly what happens. The wee whirlybird departs from his hand, and then it’s airborne in the room. The sound it makes is a bit like a mosquito. 

On the screen on the table in front of us is the view from the drone’s cameras, complete with the space’s black and white tiled floor; two employees walk past it, captured on video. A few moments later he turns it so it’s looking at us at our spot in a corner booth, and on the screen I see the drone’s view of me, Laskovich, and Chris Skrocki, a senior regional sales manager with Teledyne FLIR, standing by the table. 

Laskovich says this is the smallest drone in use by the US Department of Defense; Teledyne FLIR says that the US Army, Navy, Marines, and Air Force have the drone on hand. Earlier this summer, the company announced that it would produce 1,000 of these itty bitty aircraft for the Norwegian Ministry of Defense, which would send them to Ukraine, adding to 300 that had already been sent. Skrocki notes that a kit of three drones and other equipment can cost “in the neighborhood of about $85,000.”

Eventually Laskovich pilots the chopper back to him and grabs it out of the air from the bottom, as if he was a gentle King Kong grabbing a full-sized helicopter out of the sky, and uses the hand controller to turn it off. 

Kitchen confidential 

The demonstration that Laskovich had conducted was with a Black Hornet model that uses cameras to see the world like a typical camera sensor does. Then he demonstrates an aircraft that has thermal vision. (That’s different from night vision, by the way.) On the base station’s screen, the hot things the drone sees can be depicted in different ways: with white showing the hot spots, black showing the heat, or two different “fuse” modes, the second of which is highly colorful, with oranges and reds and purples. That one, with its bright colors, Laskovich calls “Predator mode,” he says, “because it looks like the old movie Predator.”

Laskovich launches the thermal drone with a whir and he flies it away from our booth, up towards a red EXIT sign hanging from a high ceiling and then off towards an open kitchen. I watch to see what the drone sees via the screen on the table in front of me. He gets it closer and closer to the kitchen area and eventually puts it into “Predator mode.” 

A figure is clearly visible on the drone’s feed, working in the general kitchen area. “And the cool part about it, they have no idea there’s a drone overhead right now,” he says. He toggles through the different thermal settings again: in one of the drone’s modes, a body looks black, then in another, white. He descends a bit to clear a screen-type installation that hangs from the ceiling over the kitchen area and pushes further into the cooking space. At one point, the drone, via the screen in front of me, reveals plates on metal shelving. 

“There’s your serving station right there,” he says. “We’re right in the kitchen right now.” He notes that thanks to “ambient noise,” any people nearby likely can’t detect the aircraft. He flies the drone back to us and I can see the black and white tile floor, and then the drone’s view of me and Laskovich sitting at our table. He cycles through the different thermal settings once more, landing on Predator mode again, revealing both me and Laskovich in bright orange and yellow. 

In a military context, the drone’s ideal use case, Laskovich explains, is to provide operators a way to see, from some distance away, what’s going on in a specific place, like a house that might be sheltering hostile forces. “It’s the ability to have real-time information of what’s going on on a target, without compromising your unit,” he says.

One of the thermal views is colloquially called “Predator mode.” In the image above, the author is on the left and Rob Laskovich is on the right. courtesy Teledyne FLIR

Flight lessons

Eventually, it’s my turn to learn to fly this little helo. The action is all controlled by a small gray hand unit with an antenna that enables communication with the drone. On the front of the control stick are a bunch of buttons, and on the back are two more. Some of them control what the camera does. Others control the flight of the machine itself. One of them is a “stop and hover” button. Two of the buttons are for yaw, which makes the helicopter pivot to the left or right. The two on the back tell the helicopter to ascend or descend—the altitude control. The trick in flying it, Laskovich says, is to look at the screen while you’re operating the drone, not the drone itself.
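
As a mental model of that control scheme (and only that; this is not Teledyne FLIR’s software), the one-handed controller maps onto a handful of discrete commands, sketched here with invented names and step sizes:

```python
from enum import Enum, auto

# Toy model of the one-handed controller described above. Button names,
# step sizes, and the command set are illustrative assumptions.
class Button(Enum):
    YAW_LEFT = auto()
    YAW_RIGHT = auto()
    ASCEND = auto()          # one of the two rear buttons
    DESCEND = auto()         # the other rear button
    STOP_AND_HOVER = auto()

def apply(button: Button, yaw_deg: float, alt_m: float) -> tuple[float, float]:
    if button is Button.YAW_LEFT:
        yaw_deg = (yaw_deg - 5.0) % 360
    elif button is Button.YAW_RIGHT:
        yaw_deg = (yaw_deg + 5.0) % 360
    elif button is Button.ASCEND:
        alt_m += 0.5
    elif button is Button.DESCEND:
        alt_m = max(0.0, alt_m - 0.5)
    # STOP_AND_HOVER: the aircraft holds a stable hover on its own
    return yaw_deg, alt_m

print(apply(Button.YAW_LEFT, yaw_deg=2.0, alt_m=3.0))  # (357.0, 3.0)
```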

I hold the helicopter in my left hand, and after I put the system in “indoor mode,” Laskovich tells me, “you’re ready to fly.” 

I twist the Black Hornet back and forth and the top rotor starts spinning with a whir. After some more calibration moves, the tail rotor starts spinning, too. I let it go and it zips up out of my hand. “You’re flying,” says Laskovich, who then proceeds to tell me which buttons to press to make the drone do different things.

launching a black hornet drone
After the top rotor and the tail rotor begin spinning, the next step is just to let the drone go. Teledyne FLIR / Popular Science

I fly it for a bit around the space, and after about seven minutes, I use my left hand to grab onto the bottom part of the machine and then hit three buttons simultaneously on the controller to kill the chopper’s power. Suddenly, the top and tail rotors stop spinning. The aircraft remains in my left hand, a tiny little flying machine that feels a bit like it flew out of a science fiction movie.

Flying this aircraft, which will hold a stable hover all on its own, is much easier than managing the controls of a real helicopter, which I, a non-pilot, once very briefly had the chance to try under the watchful tutelage of an actual aviator and former Coast Guard commander. 

black hornet drone
The drone can terminate its flight in the pilot’s hand. Teledyne FLIR / Popular Science

The garage collapse

On April 18, Skrocki was in New York City on business when he heard via text message that the parking garage had collapsed. He had the Black Hornet on hand, and contacted the New York Police Department and offered the drone’s use. They said yes, and he headed down to the scene of the collapse, and eventually sent the drone into the collapsed structure “under coordination with the guys there on scene,” Skrocki says. 

He recalls what he saw in there, via the Black Hornet. “There were some vehicles that were vertically stacked, a very busy scene,” he says. “It just absolutely appeared unstable.” When the flight was over, as Skrocki notes in a post on LinkedIn that includes a bit of video, he landed the drone in a hat. The Black Hornet drone doesn’t store the video it records locally on the device itself, but the base station does, and Skrocki noted on LinkedIn that “Mission data including the stills/video was provided to FDNY.”

Besides the robotic dog, the FDNY has DJI drones, and they said that they used one specific DJI model, an Avata, that day for recon in the garage. As for the Black Hornet, the FDNY said in an emailed statement to PopSci: “It was used after we were already done surveying the building. The DJI Avata did most if not all of the imagery inside the building. The black hornet was used as we had the device present and wanted to see its capabilities. We continue to use the DJI Avata for interior missions.” The FDNY does not have its own Black Hornet. 

Beyond military uses, Skrocki says that the Black Hornet can help in a public safety context or with police departments, giving first responders an eye on a situation where an armed suspect might be suicidal or have a hostage, for example. The drone could provide a way for watchers to know exactly when to try to move in.

In New York state, the Erie County Sheriff’s Office has a Black Hornet set that includes three small aircraft. And Teledyne FLIR says that the Connecticut State Police has the drone, although via email a spokesperson for that police force said: “We cannot confirm we have Black Hornet Drones.” 

The New York City Police Department has controversially obtained two robotic dogs, a fact that spurred the executive director of the New York Civil Liberties Union to tell The New York Times in April: “And all we’re left with is Digidog running around town as this dystopian surveillance machine of questionable value and quite potentially serious privacy consequences.” 

Stuart Schrader, an associate research professor at Johns Hopkins University’s Center for Africana Studies, highlights the potential for military-level technology in civilian hands to experience a type of “mission creep.”

“It seems quite sensible to not put humans or [real] dogs in danger to do the [parking garage] search, and use a drone instead,” Schrader says. “But I think that the reality is what we see with various types of surveillance technologies—and other technologies that are dual-use technologies where they have military origins—it’s just that most police departments or emergency departments have very infrequent cause to use them.” And that’s where the mission creep can come in. 

In the absence of a parking garage collapse or other actual disaster, departments may feel the need to use the expensive tools they already have in other more general situations. From there, the tech could be deployed, Schrader says, “in really kind of mundane circumstances that might not warrant it, because it’s not a crisis or emergency situation, but actually it’s just used to potentiate the power of police to gain access for surveillance.”

The post The US military’s tiniest drone feels like it flew straight out of a sci-fi film appeared first on Popular Science.

The newest moon-bound robot will roll around like a tennis ball https://www.popsci.com/technology/japan-lunar-ball-robot/ Mon, 11 Sep 2023 17:00:00 +0000 https://www.popsci.com/?p=569255
JAXA LEV-2 lunar probe on sand
This lunar probe was inspired by children's toys. JAXA/TOMY/Sony/Doshisha University

Japan's LEV-2 lunar probe is inspired by children's toys, and could make history by the end of the year.

The post The newest moon-bound robot will roll around like a tennis ball appeared first on Popular Science.

If all goes according to plan, a tennis ball-sized robot modeled after a children’s toy will soon briefly explore the moon’s surface as part of Japan’s first soft lunar landing. As recently highlighted by Space.com, the Japanese space agency, JAXA, is currently overseeing its Smart Lander for Investigating Moon (SLIM) probe mission, which launched on September 6 alongside the country’s XRISM X-ray satellite payload. Unlike more powerful launches, it will take the less-than-9-foot-wide SLIM between three and four months to reach lunar orbit, after which it will survey the roughly 1,000-foot-wide Shioli Crater landing site from afar for about another month.

Afterwards, however, the lander will descend towards the moon and deploy the Lunar Excursion Vehicle 2 (LEV-2) once it reaches around six feet above the surface. The probe’s sphere-shaped casing will then divide into two halves on either side of a small camera system. From there, LEV-2 will begin hobbling around the SLIM landing site and the surrounding area for around two hours, until its battery reserve is depleted.

[Related: India’s successful moon landing makes lunar history.]

Per JAXA’s description, LEV-2 was developed by its Space Exploration Innovation Hub Center associate senior researcher Hirano Daichi. Daichi collaborated with a team from Doshisha University as well as the toy manufacturer TOMY to create the tiny space explorer. Meanwhile, Sony provided the two cameras that will survey the moon. According to Daichi, the team turned to children’s toys for their “robust and safe design… which reduced the number of components used in the vehicle as much as possible and increased its reliability.”

“This robot was developed successfully within the limited size and mass using the downsizing and weight reduction technologies and the shape changing mechanism developed for toys by TOMY,” continued Daichi.

If successful, JAXA engineers hope the soft lunar landing method can be adapted to larger craft in the future, including those piloted by human astronauts. “By creating the SLIM lander humans will make a qualitative shift towards being able to land where we want and not just where it is easy to land, as had been the case before,” reads JAXA’s project description. “By achieving this, it will become possible to land on planets even more resource scarce than the moon.”

Beyond just this project, it’s been an active time for lunar exploration. In August, India completed the first successful lunar landing at the moon’s south pole via its Chandrayaan-3 probe. Last year, NASA’s Artemis-1 rocket also kickstarted the space agency’s long-standing goal of establishing a permanent moon base.

The post The newest moon-bound robot will roll around like a tennis ball appeared first on Popular Science.

This wormy robot can wriggle its way around a jet engine https://www.popsci.com/technology/ge-aerospace-sensiworm-robot/ Sat, 09 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=568999
an inchworm robot climbing up a smooth surface
Sensiworm can crawl around a jet engine. GE Aerospace

It's soft enough to squeeze into tight spaces.

The post This wormy robot can wriggle its way around a jet engine appeared first on Popular Science.

A new wormy robot could help with jet engine inspections at GE Aerospace, according to an announcement this week. Sensiworm, short for “Soft ElectroNics Skin-Innervated Robotic Worm,” is the newest outgrowth in GE’s line of worm robots, which includes a “giant earthworm” for tunneling and the “Pipeworm” for pipeline inspection. 

Jet engines are complex devices made up of many moving parts. They have to withstand factors like high heat, plenty of movement, and varying degrees of pressure. Because they need to perform at their best, they often need to undergo routine cleaning and inspection. Typically, this is done with human eyes and with a device like a borescope, which is a skinny tube with a camera that’s snaked into the engine (technically known as a turbofan). But with Sensiworm, GE promises to make this process less tedious, and says it could happen “on wing,” meaning the turbofan doesn’t need to be removed from the wing for the inspection. 

Like an inchworm, Sensiworm moves forward on its own, using two sticky, suction-like parts on its underside to squish into crevices and scrunch around the curves of the engine to find areas where there are cracks or corrosion, or to check whether the heat-protecting thermal barrier coatings are as thick as they should be. 
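
GE hasn’t published its control scheme, but an inchworm gait is, at heart, a simple alternating cycle of anchoring and stretching. The sketch below illustrates the idea with made-up phase names and an assumed stride length:

```python
# Illustrative two-anchor inchworm gait, loosely inspired by Sensiworm's
# described locomotion. Phases, stride length, and timing are assumptions.
GAIT_CYCLE = (
    "anchor rear foot, release front foot",
    "extend body (front foot slides forward)",  # advances half a stride
    "anchor front foot, release rear foot",
    "contract body (rear foot catches up)",     # advances the other half
)

def crawl(cycles: int, stride_cm: float = 1.5) -> float:
    """Advance one full (assumed) stride per gait cycle."""
    position_cm = 0.0
    for _ in range(cycles):
        for phase in GAIT_CYCLE:
            # A real controller would switch suction and extension here.
            if "body" in phase:
                position_cm += stride_cm / 2.0
    return position_cm

print(f"After 20 cycles: {crawl(20):.1f} cm of travel")
```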

It comes with cameras and sensors onboard, and is attached through a long, thin wire. In a demo video, this robot showed that it can navigate around obstacles, hang on to a spinning turbine, and sniff out gas leaks. 

These “mini-robot companions” could add an extra pair of eyes and ears, expanding the inspection capabilities of human service operators for on-wing inspections without having to take anything apart. “With their soft, compliant design, they could inspect every inch of jet engine transmitting live video and real-time data about the condition of parts that operators typically check,” GE Aerospace said in a press release

“Currently, our demonstrations have primarily been focused on the inspection of engines,” Deepak Trivedi, principal robotics engineer at GE Aerospace Research, noted in the statement. “But we’re developing new capabilities that would allow these robots to execute repair once they find a defect as well.”

Flexible, squiggling robots have found lots of uses in many industries. Engineers have designed them for medical applications, search and rescues, military operations, and even space ventures

Watch Sensiworm at work below: 

The post This wormy robot can wriggle its way around a jet engine appeared first on Popular Science.

Will we ever be able to trust health advice from an AI? https://www.popsci.com/health/will-we-ever-be-able-to-trust-health-advice-from-an-ai/ Tue, 05 Sep 2023 13:00:00 +0000 https://www.popsci.com/?p=567169
robot doctor talks to elderly person sitting in chair
AI-generated illustration by Dan Saelinger

Medical AI chatbots have the potential to counsel patients, but wrong replies and biased care remain major risks.

The post Will we ever be able to trust health advice from an AI? appeared first on Popular Science.

IF A PATIENT KNEW their doctor was going to give them bad information during an upcoming appointment, they’d cancel immediately. Generative artificial intelligence models such as ChatGPT, however, frequently “hallucinate”—tech industry lingo for making stuff up. So why would anyone want to use an AI for medical purposes?

Here’s the optimistic scenario: AI tools get trained on vetted medical literature, as some models in development already do, but they also scan patient records and smartwatch data. Then, like other generative AI, they produce text, photos, and even video—personalized to each user and accurate enough to be helpful. The dystopian version: Governments, insurance companies, and entrepreneurs push flawed AI to cut costs, leaving patients desperate for medical care from human clinicians. 

Right now, it’s easy to imagine things going wrong, especially because AI has already been accused of spewing harmful advice online. In late spring, the National Eating Disorders Association temporarily disabled its chatbot after a user claimed it encouraged unhealthy diet habits. But people in the US can still download apps that use AI to evaluate symptoms. And some doctors are trying to use the technology, despite its underlying problems, to communicate more sympathetically with patients. 

ChatGPT and other large language models are “very confident, they’re very articulate, and they’re very often wrong,” says Mark Dredze, a professor of computer science at Johns Hopkins University. In short, AI has a long way to go before people can trust its medical tips. 

Still, Dredze is optimistic about the technology’s future. ChatGPT already gives advice that’s comparable to the recommendations physicians offer on Reddit forums, his newly published research has found. And future generative models might complement trips to the doctor, rather than replace consults completely, says Katie Link, a machine-learning engineer who specializes in healthcare for Hugging Face, an open-source AI platform. They could more thoroughly explain treatments and conditions after visits, for example, or help prevent misunderstandings due to language barriers.

In an even rosier outlook, Oishi Banerjee, an artificial intelligence and healthcare researcher at Harvard Medical School, envisions AI systems that would weave together multiple data sources. Using photos, patient records, information from wearable sensors, and more, they could “deliver good care anywhere to anyone,” she says. Weird rash on your arm? She imagines a dermatology app able to analyze a photo and comb through your recent diet, location data, and medical history to find the right treatment for you.

As medical AI develops, the industry must keep growing amounts of patient data secure. But regulators can lay the groundwork now for responsible progress, says Marzyeh Ghassemi, who leads a machine-learning lab at MIT. Many hospitals already sell anonymized patient data to tech companies such as Google; US agencies could require them to add that information to national data sets to improve medical AI models, Ghassemi suggests. Additionally, federal audits could review the accuracy of AI tools used by hospitals and medical groups and cut off valuable Medicare and Medicaid funding for substandard software. Doctors shouldn’t just be handed AI tools, either; they should receive extensive training on how to use them.

It’s easy to see how AI companies might tempt organizations and patients to sign up for services that can’t be trusted to produce accurate results. Lawmakers, healthcare providers, tech giants, and entrepreneurs need to move ahead with caution. Lives depend on it.

The post Will we ever be able to trust health advice from an AI? appeared first on Popular Science.

Australia is eyeing uncrewed vessels to patrol the vast Pacific Ocean https://www.popsci.com/technology/australia-pacific-submarine-strategy-autonomy/ Sat, 02 Sep 2023 11:00:00 +0000 https://www.popsci.com/?p=567346
US submarine in Australia
The USS Mississippi in Australia in 2022. It's a Virginia-class fast-attack submarine. John Hall / US Marine Corps

The Pacific is strategically important, and Australia already has a deal with the US and UK involving nuclear-powered submarines.

The post Australia is eyeing uncrewed vessels to patrol the vast Pacific Ocean appeared first on Popular Science.

The Pacific Ocean is vast, strategically important, and soon to be patrolled by another navy with nuclear-powered submarines. Earlier this year, Australia finalized a deal with the United States and the United Kingdom to acquire its own nuclear-powered attack submarines, and to share in duties patrolling the Pacific. These submarines will be incorporated into the broader operations of the Royal Australian Navy, where they will work alongside other vessels to track, monitor, and, if need be, fight other submarines, especially those of other nations armed with nuclear missiles.

But because the ocean is so massive, the Royal Australian Navy wants to make sure that its new submarines are guided in their search by fleets of autonomous boats and subs, also looking for the atomic needle in an aquatic haystack—enemy submarines armed with missiles carrying nuclear warheads. To that end, on August 21, Thales Australia announced it was developing an existing facility for a bid to incorporate autonomous technology into vessels that can support Australia’s new nuclear-powered fleet. This autonomous technology will first be developed around more conventional roles, like undersea mine clearing, though it is part of a broader picture of establishing nuclear deterrence in the Pacific.

To understand why this is a big deal, it’s important to look at two changed realities of power in the Pacific. The United States and the United Kingdom are allies of Australia, and have been for a long time. A big concern shared by these powers is what happens if tensions over the Pacific with China escalate into a shooting war.

Nuclear submarines

In March of this year, the United States, Australia, and the United Kingdom announced the details of the submarine plan under AUKUS, a partnership between the three countries that will involve the development of new submarines, and shared submarine patrols in the Pacific. 

Australia has never developed nuclear weapons of its own, while the United States and the United Kingdom were the first and third countries, respectively, to test nuclear weapons. By basing American and British nuclear-powered (but not armed) submarines in Australia, the deal works to incorporate Australia into a shared concept of nuclear deterrence. In other words, the logic is that if Russia or China or any other nuclear-armed state were to try to threaten Australia with nuclear weapons, they’d be threatening the United States and the United Kingdom, too.

So while Australia is not a nuclear-armed country, it plans to host the submarine fleets of its nuclear-armed allies. None of these submarines are developed to launch nuclear missiles, but they are built to look for and hunt nuclear-armed submarines, and they carry conventional weapons like cruise missiles that can hit targets on land or at sea.

The role of autonomy

Here’s where the new complex announced by Thales comes in. The announcement from Thales says that the new facility will help the “development and integration of autonomous vessels in support of Australia’s nuclear deterrence capability.” 

Australia is one of many nations developing autonomous vessels for the sea. These types of self-navigating robots have important advantages over human-crewed ones. So long as they have power, they can continuously monitor the sea without a need to return to harbor or host a crew. Underwater, direct communication can be hard, so autonomous submarines are well suited to conducting long-lasting undersea patrols. And because the ocean is so truly massive, autonomous ships allow humans to monitor the sea over great distances, as robots do the hard work of sailing and surveying.

That makes autonomous ships useful for detecting and, depending on the sophistication of the given machine, tracking the ships and submarines of other navies. Notably, Australia’s 2025 plan for a “Warfare Innovation Navy” outlines possible roles for underwater autonomous vehicles, like scouting and assigning communications relays. The document also emphasizes that this is new technology, and Australia will work together with industry partners and allies on the “development of doctrine, concepts and tactics; standards and data sharing; test and evaluation; and common frameworks and capability maturity assessments.”

Mine-hunting ships

In the short term, Australia is looking to augment its adoption of nuclear-powered attack submarines by modernizing the rest of its Navy. This includes the replacement of its existing mine-hunting fleet. Mine-hunting is important but unglamorous work; sea mines are quick to place and persist until they detonate, are defused, or decay naturally. Ensuring safe passage for naval vessels often means using smaller ships that scan beneath the sea with sonar to detect mines. Once a mine is found, the vessels remain in place and send out either tethered robots or human divers to defuse it. Australia has already retired two of its Huon-class minehunters, surface ships that can deploy robots and divers, and is set to replace the remaining four in its inventory. 

In its announcement, Thales emphasized the role it will play in replacing and developing the next generation of minehunters. And tools developed to hunt mines can also help hunt subs with nuclear weapons on them. Both tasks involve locating underwater objects at a safe distance, and minehunting is the lower-stakes place to figure the technology out first.

Developing new minehunters is likely an area where the Royal Australian Navy and industry will figure out significant parts of autonomy. Mine hunting and clearing is a task particularly suited towards naval robots, as mines are fixed targets, and the risk is primarily borne by the machine doing the defusing. Sensors developed to find and track mines, as well as communications tools that allow mine robots to communicate with command ships, could prove adaptable to other areas of naval patrol and warfare.

The post Australia is eyeing uncrewed vessels to patrol the vast Pacific Ocean appeared first on Popular Science.

This small, squishy robot is cuter than its cockroach inspiration https://www.popsci.com/technology/clari-cockroach-robot/ Fri, 01 Sep 2023 15:00:00 +0000 https://www.popsci.com/?p=567534
The CLARI mini-robot created by Kaushik Jayaram, assistant professor, mechanical engineering and Heiko Kabutz, PhD student, mechanical engineering at the University of Colorado Boulder
CLARI could one day traverse collapsed buildings in search of survivors. Casey Cass/CU Boulder

CLARI is lighter than a ping pong ball, but capable of morphing its body to fit in the tiniest of spaces.

The post This small, squishy robot is cuter than its cockroach inspiration appeared first on Popular Science.

A multi-legged robot inspired by everyday bugs could soon come to your aid in a literal and figurative pinch. In a new study published via Advanced Intelligent Systems, University of Colorado Boulder researchers recently unveiled their Compliant Legged Articulated Robotic Insect, aka CLARI. The cute, modular bot is lighter than a ping pong ball and small enough that multiple units can fit in your hand. But don’t let its size and weight fool you—CLARI is optimized to squeeze into tight spaces via an extremely malleable body structure. The bug-like bot shows immense promise as an exploratory tool for small areas such as within jet engines, as well as even during search and rescue missions.

[Related: This bumblebee-inspired bot can bounce back after injuring a wing.]

According to assistant professor of mechanical engineering and study co-author Kaushik Jayaram, CLARI’s inspiration is owed largely to the everyday cockroach. As a graduate student, Jayaram engineered a robot capable of compressing to just half its height, much like roaches fitting through tiny crevices in buildings.

“We were able to squeeze through vertical gaps, but that got me thinking: That’s one way to compress. What are others?” said Jayaram in an August 30 statement.

Fast forward a few years to CLARI, a new iteration that builds upon previous soft robotic advancements. In its standard shape, CLARI resembles a square with four articulating legs, each controlled by its own dual actuators and circuitry. When encountering a difficult environment, however, the team’s robot can narrow from 1.3 inches wide to just 0.8 inches. With more refinement, Jayaram’s team believes future CLARI robots could become even more malleable.

“What we want are general-purpose robots that can change shape and adapt to whatever the environmental conditions are,” Jayaram said. He likens the ultimate version to an amoeba, “which has no well-defined shape but can change depending on whether it needs to move fast or engulf some food.”

Instead of dining opportunities, however, CLARI bots could use their unique structures and various leg configurations to traverse disaster zones in search of missing victims, or inspect the innards of machinery without needing to take apart the entire product. Right now, CLARI still requires wired connections for both power and controls, but Jayaram’s team hopes to eventually create wireless models capable of independent movement and exploration.

“Most robots today basically look like a cube,” Jayaram said. “Why should they all be the same? Animals come in all shapes and sizes.”

The post This small, squishy robot is cuter than its cockroach inspiration appeared first on Popular Science.

This drug-delivery soft robot may help solve medical implants’ scar tissue problem https://www.popsci.com/technology/soft-robot-drug-ai/ Thu, 31 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=567276
Professor Garry Duffy and Dr Rachel Beatty show the soft robotic implant developed by University of Galway and MIT
The implant uses mechanotherapy to adjust its shape and size, thus avoiding scar tissue buildup. Martina Regan

The new design could one day provide continuous, consistent drug dispersal without succumbing to fibrosis complications.

The post This drug-delivery soft robot may help solve medical implants’ scar tissue problem appeared first on Popular Science.

Scar tissue, also known as fibrosis, is the scourge of medical device implants. Even when receiving potentially life saving drug treatments, patients’ bodies often form scarring around the foreign object, thus eventually forcing the implant to malfunction or fail. This reaction can drastically limit a procedure’s efficacy, but a new breakthrough combining soft robotics and artificial intelligence could soon clear the troublesome hurdle.

According to a new study published with Science Robotics, a collaboration between researchers at MIT and the University of Galway resulted in new medical device tech that relies on AI and a malleable body to evade scar tissue buildup. 

“Imagine a therapeutic implant that can also sense its environment and respond as needed using AI,” Rachel Beatty, co-lead author and postdoctoral candidate at the University of Galway, said in a statement. “This approach could generate revolutionary changes in implantable drug delivery for a range of chronic diseases.”

The technology’s secret weapon is its conductive, porous membrane capable of detecting when it is becoming blocked by scar tissue. When this begins to occur, a machine learning algorithm kicks in to oversee an emerging treatment known as mechanotherapy, in which soft robotic implants inflate and deflate at various speeds and sizes to deter scar tissue formation.
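
Conceptually, that sense-and-respond loop reduces to simple feedback. The sketch below is a deliberately crude stand-in, with an assumed occlusion threshold and dosing rule in place of the team’s trained machine-learning model:

```python
# Simplified stand-in for the implant's closed loop: sense membrane
# blockage, respond with mechanotherapy, adjust drug release. The threshold
# and compensation rule are illustrative assumptions, not the real policy.
OCCLUSION_THRESHOLD = 0.6   # fraction of membrane pores blocked (assumed)

def control_step(occlusion: float, base_dose_mg: float) -> list[str]:
    actions = []
    if occlusion > OCCLUSION_THRESHOLD:
        # Inflate/deflate cycles deter further scar-tissue formation.
        actions.append("run inflate-deflate (mechanotherapy) cycle")
        # Push more drug through a partially blocked membrane (assumed rule).
        dose = base_dose_mg * (1.0 + occlusion)
    else:
        dose = base_dose_mg
    actions.append(f"release {dose:.2f} mg")
    return actions

print(control_step(occlusion=0.75, base_dose_mg=1.0))
```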

[Related: A micro-thin smart bandage can quickly heal and monitor wounds.]

Ellen Roche, an MIT professor of mechanical engineering and study co-author, explains that personalized, precision drug delivery systems could greatly benefit from responding to individuals’ immune system responses. Additionally, such devices could reduce “off-target effects” while ensuring the right drug dosages are delivered at the right times.

“The work presented here is a step towards that goal,” she added in a statement.

In training simulations, the team’s device could develop personalized, consistent dosing regimens in situations involving significant fibrosis. According to researchers, the new device’s AI could effectively control drug release even in a “worst-case scenario of very thick and dense scar tissue,” per the August 31 announcement.

According to Garry Duffy, the study’s senior author and a professor of anatomy and regenerative medicine at the University of Galway, the team initially focused on using the new robot for diabetes treatment. “Insulin delivery cannulas fail due to the foreign body response and have to be replaced often (approx. every 3-5 days),” Duffy told PopSci via email. “If we can increase the longevity of the cannula, we can then maintain the cannula for longer with less changes of the set required by the person living with diabetes.”

Beyond diabetes, they envision a future where the device can be easily adapted to a variety of medical situations and drug delivery regimens. According to Duffy, the advances could soon “provide consistent and responsive dosing over long periods, without clinician involvement, enhancing efficacy and reducing the need for device replacement because of fibrosis,” he said in the August 31 statement.

The post This drug-delivery soft robot may help solve medical implants’ scar tissue problem appeared first on Popular Science.

Future leaping robots could take a cue from seed-launching witch hazel plants https://www.popsci.com/environment/witch-hazel-seed-robot/ Tue, 29 Aug 2023 15:00:00 +0000 https://www.popsci.com/?p=566561
Close-up of witch hazel plant
Witch hazel can eject seeds from their shells as fast as 30 feet-per-second. Deposit Photos

Despite their small size, witch hazel seed pods pack a powerful punch.

The post Future leaping robots could take a cue from seed-launching witch hazel plants appeared first on Popular Science.

Despite witch hazel plants’ various medicinal uses, a closer inspection of their propagation techniques more resembles cannonfire than convalescence. As witch hazels’ woody seed capsules dry and warp, they split open and build pressure against the seeds themselves. Eventually, the pressure ejects the seeds from their pods in an impressive display of force for something so small. Getting a detailed look at that process, however, is difficult to do with the naked eye.

“If you blink you’ll miss it,” Justin Jorge, a Duke University biomechanical engineering graduate student, explained in a statement earlier this month.

As detailed in a paper recently published in the Journal of the Royal Society Interface, training high-powered cameras on witch hazel seed launches is providing a better glimpse at how the delicate plants can exert so much comparative force. In time, their findings could influence a new generation of leaping robots.

[Related: Leaping robots take physics lessons from grasshoppers.]

The witch hazel deep dive comes courtesy of senior author and Duke University professor of biology Sheila Patek, whom Jorge worked alongside for his PhD thesis. According to the study, Patek’s team first trained a high-speed, 100,000-frames-per-second video camera on three varieties of seed-bearing witch hazel plants collected from Duke Gardens and Duke Forest. Researchers then waited for the plants to propagate, and measured the launch speeds of the ejected seeds. The playbacks proved both impressive and surprising.

Upon review, witch hazel seeds accelerate to upwards of 30 feet per second within just half a millisecond of leaving their pods. What’s more, the speed is largely uniform across plant varieties, regardless of seed masses ranging from as little as 15 milligrams to 10 times that.

“We found that the launch speeds were all roughly the same,” continued Jorge. “Given the order of magnitude difference in seed masses, I was not expecting that at all.”

Further investigation revealed that witch hazel plant varieties’ seed capsules are proportional to the size of the seeds themselves—heavier seeds mean larger pods, thus a greater reserve of elastic energy. This ensures that, regardless of plant or seed size, the rapidfire launch speed remains consistent.
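
The scaling is easy to see with numbers: if launch speed holds near 30 feet per second, kinetic energy at launch rises in lockstep with seed mass, so a tenfold-heavier seed needs roughly a tenfold-larger elastic reserve. A quick illustrative calculation (mine, not the paper’s):

```python
# Kinetic energy at launch for light vs. heavy witch hazel seeds,
# assuming the roughly constant ~30 ft/s launch speed reported above.
LAUNCH_SPEED_MS = 30 * 0.3048        # ~9.1 m/s

for seed_mass_mg in (15, 150):       # lightest seeds vs. ~10x heavier
    mass_kg = seed_mass_mg * 1e-6
    energy_mj = 0.5 * mass_kg * LAUNCH_SPEED_MS**2 * 1e3
    print(f"{seed_mass_mg:>3} mg seed needs ~{energy_mj:.1f} mJ at launch")
```

The pod has to store at least that much elastic energy, which is why capsule size tracks seed size.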

Jorge explained that while most people may associate springiness with coils, rubber bands, or archery bows, biology allows for “all these weird, complex shapes.” It stands to reason, then, that these unique designs could improve synthetic springs, such as those found within certain small jumping robots.

“People ask me all the time, ‘why are you looking at seed-shooting plants?’” said Jorge. “It’s the weirdness of their springs.”

The post Future leaping robots could take a cue from seed-launching witch hazel plants appeared first on Popular Science.

These AI-powered robot arms are delicate enough to pick up Pringles chips https://www.popsci.com/technology/robot-arms-pringles/ Thu, 24 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=565256
Robot arms lifting a single Pringles chip
The 'Bi-Touch' system relies on deep reinforcement learning to accomplish delicate tasks. Yijiong Lin

Using deep reinforcement learning and 'proprioception,' the two robotic limbs can pick up extremely fragile objects.


A bimanual robot controlled by a new artificial intelligence system responds to real-time tactile feedback so precisely that it can pick up individual Pringles chips without breaking them. Despite the delicacy required for such a feat, the AI program’s methodology allows it to learn specific tasks solely through simulated scenarios in just a couple of hours.

Researchers at the University of Bristol’s Bristol Robotics Laboratory detailed their new “Bi-Touch” system in a paper published on August 23 in IEEE Robotics and Automation Letters. In it, the team highlights how their AI directs its pair of robotic limbs to “solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way,” lead author and engineering professor Yijiong Lin said in a statement on Thursday.

What makes the team’s advancement so promising is its use of two robotic arms, rather than the single limb seen in most tactile robotics projects. Despite doubling the number of limbs, however, training takes just a few hours. To accomplish this, researchers first train their AI in a simulation environment, then apply the finalized Bi-Touch system to their physical robot arms.

[Related: This agile robotic hand can handle objects just by touch.]

“With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch,” Lin continued. “And more importantly, we can directly apply these agents from the virtual world to the real world without further training.”

The Bi-Touch system’s success is owed to its reliance on deep reinforcement learning (deep-RL), in which robots attempt tasks through copious trial-and-error experimentation. When successful, researchers give the AI a “reward” signal, much like when training a pet. Over time, the AI learns the best steps to achieve its given goal—in this case, using the two limbs, each capped with a single soft pad, to pick up and maneuver objects such as a foam brain mold, a plastic apple, and an individual Pringles chip. With no visual inputs, the Bi-Touch system relies only on proprioceptive feedback such as force, physical positioning, and self-movement.
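
For readers unfamiliar with the technique, here is a minimal sketch of that reward-driven trial and error—tabular Q-learning on a toy “gentle grip” problem, rather than the deep networks and tactile images the Bristol team actually used. The force levels, reward values, and hyperparameters are all invented for illustration:

```python
import random

# Toy task: settle on a grip force that lifts a chip without crushing it.
ACTIONS = [-1, 0, 1]   # ease off, hold, squeeze harder
TARGET = 5             # hypothetical "just right" force level
q = {}                 # (force, action) -> learned value

def reward(force):
    # The "reward" signal: +1 for a perfect grip, growing penalty otherwise.
    return 1.0 if force == TARGET else -abs(force - TARGET) / 10.0

epsilon, alpha, gamma = 0.2, 0.5, 0.9
for episode in range(2000):
    force = random.randint(0, 10)
    for _ in range(20):
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((force, a), 0.0))
        nxt = max(0, min(10, force + action))
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        old = q.get((force, action), 0.0)
        q[(force, action)] = old + alpha * (reward(nxt) + gamma * best_next - old)
        force = nxt

# After training, the greedy policy steers every force level toward TARGET.
print({f: max(ACTIONS, key=lambda a: q.get((f, a), 0.0)) for f in range(11)})
```

The printout shows positive adjustments below the target force and negative ones above it—the learned equivalent of “squeeze, but not too hard.”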

The team hopes that their new Bi-Touch system could one day deploy in industries such as fruit-picking, domestic services, and potentially even integrate into artificial limbs to recreate touch sensations. According to researchers, the Bi-Touch system’s utilization of “affordable software and hardware,” coupled with the impending open-source release of its coding, ensures additional teams around the world can experiment and adapt the program to their goals.

The post These AI-powered robot arms are delicate enough to pick up Pringles chips appeared first on Popular Science.

These nifty drones can lock together in mid-air to form a bigger, stronger robot https://www.popsci.com/technology/drones-assemble-air/ Wed, 23 Aug 2023 22:00:00 +0000 https://www.popsci.com/?p=564938
two drones coming together while flying
Drones, assemble. University of Tokyo / Advanced Intelligent Systems

Teamwork makes the dream work.


A drone’s size affects what it can—or can’t—do. If a drone is too small, it may be limited in the types of tasks it can complete, or the amount of heavy lifting it can do. But if a drone is too big, it may be difficult to get it up in the air or to navigate it around tricky structures, even if it makes up for those limits in other ways.

A solution that a group of engineers from the University of Tokyo came up with is to create a set of drone units that can assemble and disassemble in the air. That way, they can break up to fit into tight spaces, but can also combine to become stronger if needed. 

Last month, the detailed design behind this type of system, called Tilted-Rotor-Equipped Aerial Robot With Autonomous In-Flight Assembly and Disassembly Ability (TRADY), was described in the journal Advanced Intelligent Systems.

The drones used in the demonstration look like normal quadcopters but with an extra component (a plug or jack). The drone with the plug and the drone with the jack are designed to lock into one another, like two pieces of a jigsaw puzzle. 

[Related: To build a better crawly robot, add legs—lots of legs]

“The team developed a docking system for TRADY that takes its inspiration from the aerial refueling mechanism found in jet fighters[;] a funnel-shaped unit on one side of the mechanism means any errors lining up the two units are compensated for,” according to Advanced Science News. To stabilize the units once they intertwine, “the team also developed a unique coupling system in the form of magnets that can be switched on and off.”

The assembly mechanism, illustrated. University of Tokyo / Advanced Intelligent Systems

Although in their test runs, they only used two units, the authors wrote in the paper that this methodology “can be easily applied to more units by installing both the plug type and the jack type of docking mechanisms in a single unit.” 

To control these drones, the researchers developed two systems: a distributed control system for operating each unit independently, which can be switched to a unified control system once the units join. An onboard PC conveys the position of each drone, allowing the units to angle themselves appropriately for coming together and apart.
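
In schematic terms, that hand-off is a mode switch keyed on alignment. Here is a deliberately simplified sketch—the unit names, offsets, and tolerances are invented, and the real controller allocates thrust across tilted rotors rather than printing commands:

```python
from dataclasses import dataclass

DOCK_OFFSET = (0.30, 0.0, 0.0)  # hypothetical plug-to-jack offset, meters
TOLERANCE = 0.05                # alignment error the funnel guide can absorb

@dataclass
class Unit:
    name: str
    position: tuple  # (x, y, z), shared between units by the onboard PC

def is_aligned(plug: Unit, jack: Unit) -> bool:
    offset = [j - p for p, j in zip(plug.position, jack.position)]
    return all(abs(o - d) <= TOLERANCE for o, d in zip(offset, DOCK_OFFSET))

def distributed_step(unit: Unit, goal: tuple) -> dict:
    # Before docking: each unit's own controller chases its alignment goal.
    return {"unit": unit.name, "goto": goal}

def unified_step(units: list, wrench: tuple) -> dict:
    # After docking: a single controller commands the combined airframe.
    return {"units": [u.name for u in units], "apply_wrench": wrench}

plug = Unit("plug", (0.00, 0.0, 1.0))
jack = Unit("jack", (0.28, 0.0, 1.0))

if is_aligned(plug, jack):
    print("switchable magnets on; unified control engaged")
    print(unified_step([plug, jack], wrench=(0.0, 0.0, 9.8)))
else:
    print(distributed_step(plug, (0.0, 0.0, 1.0)))
    print(distributed_step(jack, (0.30, 0.0, 1.0)))
```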

Other than testing the smoothness of the assembly and disassembly process, the team put these units to work by giving them tasks to do, such as inserting a peg into a pipe, and opening a valve. The TRADY units were able to complete both challenges. 

“As a future prospect, we intend to design a new docking mechanism equipped with joints that will enable the robot to alter rotor directions after assembly. This will expand the robot’s controllability in a more significant manner,” the researchers wrote. “Furthermore, expanding the system by utilizing three or more units remains a future challenge.” 

Here, the assembled drone units work together to turn a valve. University of Tokyo / Advanced Intelligent Systems

The post These nifty drones can lock together in mid-air to form a bigger, stronger robot appeared first on Popular Science.

Ukraine claims it built a battle drone called SkyKnight that can carry a bomb https://www.popsci.com/technology/ukraine-skyknight-drone/ Tue, 22 Aug 2023 22:09:09 +0000 https://www.popsci.com/?p=564533
ukraine troops training exercise
Ukrainian soldiers during a training exercise in 2017. Anthony Jones / US Army

The announcement came via the Ministry of Defense’s Telegram account.


For 18 months, Russia’s invasion of Ukraine has been fought largely from the ground. Neither Russia nor Ukraine has been able to establish air superiority, or the ability to completely rule the sky at the other’s expense. While Ukraine is working to gradually build up a new air force using NATO-model fighters like the F-16 (which nations including Denmark and the Netherlands have pledged to the country), it is also using a range of drones to drop death from the sky. On August 19, the Ukrainian Ministry of Defense announced a small new armed drone for military use: the SkyKnight.

The announcement of the new UAV was posted to the Ministry of Defense’s Telegram account, and features an image of the SkyKnight drone. The vehicle is compact, and features four limbs like a common quadcopter, but each limb sports two rotors, making the drone an octocopter. A sensor is fitted on the front of the drone, with a camera facing forward, and what appear to be batteries are strapped, in an unusual configuration, to the top of the drone’s hull. Underneath, it carries a 2.5-kilogram (5.5-pound) bomb. That’s between three and five times as heavy as a hand grenade, and would be a large explosive for a drone of this size.

“This can be used against stationary and moving targets – anything from tanks, armored vehicles, artillery and other systems, to infantry units on the move and in trenches, and against any target that is identified as a Russian military one,” says Samuel Bendett, an analyst at the Center for Naval Analyses and adjunct senior fellow at the Center for a New American Security. “This payload can be effective and devastating against infantry units, as evidenced from multiple videos of similar attacks by quadcopters.”

Before the massive invasion of Ukraine in February 2022, the country fought a long, though more geographically confined, war against Russian-backed separatists in the Donbas region of Eastern Ukraine. Using quadcopters as bombers was a regular occurrence in that war—in 2018, for instance, Ukrainian forces used a DJI Mavic quadcopter to drop a bomb on trenches. While the Mavic was not built for war, it is a simple and easy-to-use machine, one that could be modified in the field to carry a small explosive and a release claw. Paired with the drone’s cameras and human operators watching from a control screen, soldiers could get a bird’s-eye view of their human targets, and then attack from above.

This tactic persisted into the larger war that began in February 2022, with small drones joining medium and large drones in the arsenals of both nations. The war in Ukraine is hardly the first to see extensive use of drones, but no conflict so far has matched it in sheer scale.

“Never before have so many drones been used in a military confrontation,” writes Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Many, possibly the majority, of the drones used by Ukrainian forces were originally designed for commercial purposes or for hobbyists.” 

The SkyKnight is described as domestically produced, a product of the present Ukrainian industry built for this specific war. It appears to share parts with the broader hobbyist drone market, and its assembly, complete with strapped-on batteries and exposed wires (at least as depicted on Telegram), speaks to ease of assembly over a finicky obsession with form.

In the announcement of the SkyKnight, the Ministry of Defense says that if a pilot has any familiarity with DJI or Autel drones, which stabilize themselves in flight, then the pilot can learn to fly the SkyKnight in about a week.

“DJI and Autel are a staple [Uncrewed Aerial Vehicle] across the Ukrainian military, with many thousands fielded since the start of the Russian invasion,” says Bendett. “DJI especially as a go-to drone for ISR, target tracking, artillery spotting and light combat missions. Ukrainian forces and drone operators have amassed a lot of experience flying these Chinese-made drones.”

Domestic manufacture is important, not just because of shorter supply lines, but because DJI’s response to the conflict has been to ban the sale of its drones to Ukraine and Russia.

“The Chinese manufacturer DJI produces most of these systems,” writes Franke. “It officially suspended operations in Ukraine and Russia a few weeks into the war, but its drones, most notably the Mavic type, remain among the most used and most sought-after systems.”

By making its own self-detonating drone weapons, Ukraine is able to use drones as a direct weapon, one that can attack from above and is hard to see or stop. In a war where soldiers describe fighting without quadcopters as being “like blind kittens,” a flying camera with a bomb attached makes soldiers deadlier, at greater range, and in new ways.

Beyond the airframe and remote control, the Ministry of Defense boasts that the SkyKnight has an automatic flight mode, and can continue to fly towards a target selected by the operator even if the operator loses communication with the drone.
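
That description implies a simple link-loss failsafe: remember the last operator-confirmed target and keep homing on it. Here is a minimal sketch of such logic—entirely hypothetical, since the actual flight code is not public:

```python
def guidance_step(state, link_up, operator_target=None):
    # While the radio link is up, the operator may retarget at will.
    if link_up and operator_target is not None:
        state["target"] = operator_target
    # On link loss, continue toward the last confirmed target instead
    # of loitering or returning home.
    if state.get("target") is not None:
        return {"fly_toward": state["target"]}
    return {"hold_position": True}

drone = {"target": None}
print(guidance_step(drone, link_up=True, operator_target=(48.5, 35.1)))
print(guidance_step(drone, link_up=False))  # still flying to (48.5, 35.1)
```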

“Ukraine is investing a lot of resources in domestic combat drone production to meet the challenge from the Russian military that is increasingly fielding more quadcopter and FPV-type drones,” says Bendett. “This SkyKnight needs to be manufactured in sufficient quantities to start making a difference on the battlefield.”

The post Ukraine claims it built a battle drone called SkyKnight that can carry a bomb appeared first on Popular Science.

See the coolest and strangest machines from the World Robot Conference https://www.popsci.com/technology/world-robot-conference-2023/ Tue, 22 Aug 2023 19:00:00 +0000 https://www.popsci.com/?p=564388
2023 World Robot Conference In Beijing robots
Humanoid robots are on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 16, 2023 in Beijing, China. VCG / VCG via Getty Images

From cute bionic cats to giant welding arms, check out this year's bots in pictures.


The annual World Robot Conference wrapped up today in Beijing after a full week of humanoid, dog, butterfly, and industrial showcases. First launched in 2015, the WRC serves as a meetup for designers, investors, students, researchers, and curious visitors to check out the latest advancements in AI-powered machinery.

From four-legged assistants to human-like expressive heads to bipedal “cyborgs,” WRC offered some of the latest, greatest, and strangest projects currently in the works. Check out the gallery below to see which robots dazzled onlookers and could soon move from the conference showroom to the everyday world.

A child interacts with a bionic cat during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Du Jianpo / VCG via Getty Images
Welding robots assemble a car during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Zhan Min / VCG via Getty Images
Humanoid robots perform a dance during the 2023 World Robot Conference in Beijing, China, August 18, 2023. In the first half of 2023, the output of China’s industrial robots and service robots increased by 5.4% and 9.6%, respectively. Photo: CFOTO / Future Publishing via Getty Images
Humanoid robots are on display at the booth of EX Robots during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 18, 2023 in Beijing, China. Photo: Song Yu / VCG via Getty Images
People visit the brain-computer interface (BCI) exhibition area during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Zhan Min / VCG via Getty Images
UBTECH Panda Robot performs during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 18, 2023 in Beijing, China. Photo: VCG / VCG via Getty Images
A humanoid robot is on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 19, 2023 in Beijing, China. Photo: Zhan Min / VCG via Getty Images
A Unitree robotic dog is on display during 2023 World Robot Conference at Beijing Etrong International Exhibition & Convention Center on August 16, 2023 in Beijing, China. Photo: VCG / VCG via Getty Images

The post See the coolest and strangest machines from the World Robot Conference appeared first on Popular Science.

These bathroom-cleaning bots won’t replace human janitors any time soon https://www.popsci.com/technology/somatic-bathroom-robot/ Tue, 22 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=564294
Artist rendering of Somatic bathroom cleaning robot entering restroom
Somatic offers a robot service that automates many janitorial tasks. Somatic

Somatic offers a fleet of automated bathroom cleaning robots for businesses. Some experts are wary.


Despite long hours, physical demands, and often unhygienic working conditions, the average salary of a janitor in the United States is less than $34,000. Somatic, a robotic cleaning service for commercial buildings founded in 2020, offers a robot that can do the job for barely a third of that wage.

“We aim for customers to walk in and not be able to tell if a person or a robot cleaned the bathroom,” Somatic CEO Michael Levy tells PopSci.

In time-lapse video footage recently highlighted by New Atlas and elsewhere, a Somatic bathroom bot can be seen roaming the halls of a medical facility, entering both single- and multi-stall restrooms, and attending to tasks like spraying disinfectant and vacuuming the floor. According to Levy, “Less frequent tasks like restocking consumables (paper towels, toilet paper, and soap) and taking the trash out still require local staff input.”

[Related: The germiest places you might not be cleaning.]

But although Levy assures customers—and laborers—that Somatic’s line of semi-autonomous machines, first unveiled in 2020, is meant as an aid working alongside sanitation workers, others aren’t so sure.

For Paris Marx, a longtime tech industry critic and host of the podcast Tech Won’t Save Us, Somatic’s robots are another example of an engineering team believing it can understand and automate tasks when “they really have no idea what a janitor (in this case) really does.”

Marx notes that the more pressing concern in many automation drives is the scare tactics employed by managers, who often use the threat of new technologies to force workers into accepting lower wages and worse conditions, and into consenting to surveillance technologies on the job.

Meanwhile, those within the janitorial and service industries echoed their concern about the rise of automated products like those offered by Somatic.

“As we have seen during the WGA and SAG-AFTRA strikes this summer, every industry is experiencing technological changes that may impact workers’ lives,” a spokesperson for Service Employees International Union (SEIU) said. “Whether it’s actors, writers, or janitors, what is important is for employers to negotiate with workers through their unions how these technologies will be employed to ensure the best outcomes—for consumers, workers, their families, and our communities.”

[Related: Study shows the impact of automation on worker pay.]

According to Marx, the current onslaught of AI-powered and robotic labor products are reminiscent of tech companies’ and media’s job doomsday prophecies from the mid-2010s. “[T]he reality was that while robots were trialed in everything from elder care to food service, very few of them actually stuck around because they simply didn’t do the job as well as a human,” says Marx. “A decade later, we’re repeating a similar cycle with generative AI and robots like Somatic’s bathroom cleaning robot, but I don’t expect the outcome to be any different than last time.”

“When you watch their demonstration video, the robot is only going over clean bathrooms—it never shows us how it handles a real mess,” says Marx, noting the robot appears slow and doesn’t seem to provide the deep clean most people might expect for a public restroom.

According to Levy, each initial setup of their bathroom bots is done virtually from Somatic’s office. “We ‘play the worst video game ever,’” he says. “We clean the bathroom one time using software we built. That cleaning plan is then pushed to the robot via an [over the air] update.”
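
Levy’s description suggests a record-once, replay-forever workflow: one remotely piloted cleaning pass becomes a stored plan the robot repeats. Here is a minimal sketch of what such a plan might look like—the format, poses, and actions are invented for illustration, not Somatic’s:

```python
import json

# Hypothetical plan recorded during the one-time virtual walkthrough:
# an ordered list of floor poses (x, y, heading) and tool actions.
plan = [
    {"pose": [1.2, 0.4, 90], "action": "spray_disinfectant"},
    {"pose": [1.2, 1.1, 90], "action": "vacuum"},
    {"pose": [2.0, 1.1, 0],  "action": "vacuum"},
]

def push_ota(robot_id, plan):
    # Stand-in for the over-the-air update described in the article.
    payload = json.dumps({"robot": robot_id, "plan": plan})
    print(f"uploading {len(payload)} bytes to {robot_id}")
    return payload

def replay(plan):
    for step in plan:
        x, y, heading = step["pose"]
        print(f"drive to ({x}, {y}) facing {heading} deg, then {step['action']}")

replay(json.loads(push_ota("bathroom-bot-01", plan))["plan"])
```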

For Marx, however, there’s room for improvement. “I also didn’t see it touch the sinks,” they note.

The post These bathroom-cleaning bots won’t replace human janitors any time soon appeared first on Popular Science.

Dead whales and dinosaur eggs: 7 fascinating images by researchers https://www.popsci.com/environment/science-images-competition-2023/ Fri, 18 Aug 2023 10:00:00 +0000 https://www.popsci.com/?p=563700
Dead humpback whale on beach from aerial view
Researchers from the University of Glasgow’s Scottish Marine Animal Stranding Scheme conduct a necropsy of a stranded humpback whale. Submitted by Professor Paul Thompson, photo captured by James Bunyan from Tracks Ecology

See the world from a scientist's perspective.


Oh, the wonders scientists see in the field. Documenting the encounters can be an integral part of the discovery process, but it can also pull others into the experience. These seven photos and illustrations are the winners of this year’s BMC Ecology and Evolution image competition, which gets submissions from researchers all around the world each year. It includes four categories: “Research in Action,” “Protecting our planet,” “Plants and Fungi,” and “Paleoecology.”

See the full gallery of winners and their stories on the BMC Ecology and Evolution website. And explore last year’s winners here.

Fruiting bodies of small orange fungi
An invasive orange pore fungus poses unknown ecological consequences for Australian ecosystems. Cornelia Sattler
Beekeepers holding honeycomb in Guinea
A sustainable beekeeping project launched by the Chimpanzee Conservation Center in Guinea in the villages surrounding Faranah showcases an inspiring solution to combat deforestation caused by traditional honey harvesting from wild bees. By cultivating their own honey, the locals avoid tree felling and increase production. Roberto García-Roa
Marine biologist releasing black-tip reef shark in ocean
A researcher releases a newborn blacktip reef shark in Mo’orea, French Polynesia. Victor Huertas
Hadrosaur egg with embryo. Illustration.
This digital illustration is based on a pair of hadrosauroid dinosaur eggs and embryos from China’s Upper Cretaceous red beds, dating back approximately 72 to 66 million years ago. It depicts an example of a “primitive” hadrosaur developing within the safety of its small egg. Submitted by Jordan Mallon. Restoration by Wenyu Ren.
Brown spider on wood parasitized by fungus
While it is not uncommon to encounter insects parasitized by “zombie” fungi in the wild, it is a rarity to witness large spiders succumbing to these fungal conquerors. In the jungle, near a stream, lie the remains of a conquest shaped by thousands of years of evolution. Roberto García-Roa
Marine biologists steering underwater robot in the ocean
Researchers from the Hoey Reef Ecology Lab deploy an underwater ROV at Diamond Reef within the Coral Sea Marine Park. Victor Huertas

The post Dead whales and dinosaur eggs: 7 fascinating images by researchers appeared first on Popular Science.

Origami-inspired robot can gently turn pages and carry objects 16,000 times its weight https://www.popsci.com/technology/robot-gripper-kirigami/ Tue, 15 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=563193
Soft robot gripper turning a book page
The new gripper is delicate enough to turn individual book pages. NC State

The gripper design finds a balance between 'strength, precision and gentleness.'


The Japanese art of paper cutting and folding known as kirigami has provided a wealth of inspiration for ingenious robotic designs, but the latest example might be the most versatile and impressive yet. As first detailed earlier this month in Nature Communications, a team at North Carolina State University recently developed a new soft robotic gripper sensitive enough to handle water droplets and turn book pages, yet strong enough to achieve a payload-to-weight ratio of 16,000. With additional refinements, engineers believe the gripper could find its way into a wide array of industries—as well as into human prosthetics.

“It is difficult to develop a single, soft gripper that is capable of handling ultrasoft, ultrathin, and heavy objects, due to tradeoffs between strength, precision and gentleness,” study author Jie Yin, an NC State associate professor of mechanical and aerospace engineering, said in a statement. “Our design achieves an excellent balance of these characteristics.”

[Related: Foldable robots with intricate transistors can squeeze into extreme situations.]

While previous soft grippers have been developed using elements of kirigami, the researchers’ tendril-like structures distribute their force in such a way as to be delicate and precise enough to help zip certain zippers and pick up coins. As New Scientist also noted, the shape and angling allow the 0.4-gram grippers to hold objects as heavy as 6.4 kilograms—a payload-to-weight ratio 2.5 times higher than the previous industry record.
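
Those two figures line up with the headline claim. A quick unit check:

```latex
\frac{m_{\text{payload}}}{m_{\text{gripper}}}
= \frac{6.4\ \text{kg}}{0.4\ \text{g}}
= \frac{6400\ \text{g}}{0.4\ \text{g}}
= 16{,}000
```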

Because the grippers’ abilities derive from their design and not the materials themselves, the team also showcased additional potential by building iterations from plant leaves. The potential for biodegradable grippers could prove extremely useful in situations where they are only temporarily necessary, such as handling dangerous medical waste like needles.

If all that weren’t enough, the NC State team went yet one step further by experimenting with attaching their grippers to a myoelectric prosthetic hand controlled via muscle activity in a user’s forearm. “The new gripper can’t replace all of the functions of existing prosthetic hands, but it could be used to supplement those other functions,” said Helen Huang, paper co-author and NC State’s Jackson Family Distinguished Professor in the Joint Department of Biomedical Engineering. “And one of the advantages of the kirigami grippers is that you would not need to replace or augment the existing motors used in robotic prosthetics. You could simply make use of the existing motor when utilizing the grippers.”

Yin, Huang, and their colleagues hope to eventually collaborate with robotic prosthetic makers, food processing companies, as well as electronics and pharmaceutical businesses to develop additional usages for their soft grippers.

The post Origami-inspired robot can gently turn pages and carry objects 16,000 times its weight appeared first on Popular Science.

This waddling robot could guide baby turtles to the sea https://www.popsci.com/technology/baby-sea-turtle-robot/ Tue, 08 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=561891
Sea turtle robot crawling across sand
A tiny robot mimics the gait and movement of baby sea turtles. University of Notre Dame

Engineers synthesized the gaits and anatomy of multiple sea turtle species to create a helpful turtle bot.


It’s sad but true: only around one in 1,000 baby sea turtles survives the arduous trek from their beach nests to the open ocean. The journey has only grown more fraught thanks to continued seaside development and all manner of human trash littering the route. To better understand the hatchlings’ movements—and potentially help them along—a team of researchers at the University of Notre Dame recently designed and built their own turtle robot.

Their results? Well, take a look for yourself and try not to say “Awww.”

“The sea turtle’s unique body shape, the morphology of their flippers and their varied gait patterns makes them very adaptable,” explained Yasemin Ozkan-Aydin, an assistant professor of electrical engineering and roboticist at the University of Notre Dame who led the latest biomimicry project.

Along with electrical engineering doctoral student Nnamdi Chikere and undergraduate John Simon McElroy, Ozkan-Aydin broke sea turtles’ evolutionary design down into a few key parts: an oval-shaped frame, four individually operated remote-controlled flippers, a multisensor device, a battery, and an onboard control unit. The trio relied on silicone molds to give the flippers their necessary flexibility, and used 3D-printed rigid polymers for the frame and flipper connectors.

[Related: Safely share the beach with endangered sea turtles this summer.]

To maximize its overall efficacy, the team’s new turtle-bot isn’t inspired by a single species. Instead, Ozkan-Aydin and her colleagues synthesized the gait patterns, morphology, and flipper anatomy of multiple turtle species to take “the most effective aspects from each,” she said on August 7.

Unlike other animal-inspired robots, however, Ozkan-Aydin’s turtle tech is initially intended solely to help its biological counterparts. “Our hope is to use these baby sea turtle robots to safely guide sea turtle hatchlings to the ocean and minimize the risks they face during this critical period,” she explains.

Judging from recent reports, they could use all the help they can get. According to the Wild Animal Health Fund, six out of seven sea turtle species are currently considered threatened or endangered. The aptly named nonprofit See Turtles lists a number of current threats facing the animals, including entanglement in fishing gear, illegal trade and consumption of eggs and meat, marine pollution, and global warming.

The post This waddling robot could guide baby turtles to the sea appeared first on Popular Science.

Why industrial automation can be so costly https://www.popsci.com/technology/robot-profit-study/ Mon, 07 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=561580
Robotic arms welding car frames on automotive assembling line
Research indicates businesses can't necessarily ease their way into automation. Deposit Photos

A new study tracks robotic labor's potential for profit—and the rat race to maintain it.


Companies often invest in automation with the expectation of increased profits and productivity, but that might not always be the case. A recent study indicates businesses are likely to see diminished returns from automation—at least initially. What’s more, becoming too focused on robotic integration could hurt a company’s ability to differentiate itself from its competitors.

According to a new review of European and UK industrial data between 1995 and 2017, researchers at the University of Cambridge determined that many businesses experienced a “U-shaped curve” in profit margins as they moved to adopt robotic tech into their production processes. The findings, published on August 2 in IEEE Transactions on Engineering Management, suggest companies should not necessarily rush towards automation without first considering the wider logistical implications.

[Related: Workplace automation could affect income inequality even more than we thought.]

“Initially, firms are adopting robots to create a competitive advantage by lowering costs,” said Chander Velu, the study’s co-author and a professor of innovation and economics at Cambridge’s Institute for Manufacturing. “But process innovation is cheap to copy, and competitors will also adopt robots if it helps them make their products more cheaply. This then starts to squeeze margins and reduce profit margin.”

As co-author Philip Chen notes, researchers “intuitively” believed more robotic tech upgrades would naturally lead to higher profits, “but the fact that we see this U-shaped curve instead was surprising.” Following interviews with a “major American medical manufacturer,” the team also noted that as robots are integrated further into production, companies appear to eventually reach a point when their entire process requires a complete redesign. Meanwhile, focusing too heavily on robotics for too long could give other businesses time to invest in new products that set themselves apart for consumers, compounding the disadvantage.

[Related: Chipotle is testing an avocado-pitting, -cutting, and -scooping robot.]

“When you start bringing more and more robots into your process, eventually you reach a point where your whole process needs to be redesigned from the bottom up,” said Velu. “It’s important that companies develop new processes at the same time as they’re incorporating robots, otherwise they will reach this same pinch point.”

Profit margins and productivity aside, automation frequently comes at a huge cost to human laborers. Last year, a study from researchers at MIT and Boston University found that the negative effects stemming from robotic integration could be even worse than originally believed. Between 1980 and 2016, the researchers estimated, automation reduced the wages of men without high school degrees by nearly nine percent, and of women without the same degree by around two percent, adjusted for inflation.

The post Why industrial automation can be so costly appeared first on Popular Science.

NASA gears up to send a trio of rovers to the moon in 2024 https://www.popsci.com/technology/nasa-cadre-rovers/ Fri, 04 Aug 2023 13:30:00 +0000 https://www.popsci.com/?p=561096
Two NASA lunar CADRE rovers parked on the ground
Each prototype CADRE rover is roughly the size of a shoe box. NASA/JPL-Caltech

If successful, the CADRE robots could change how future space missions are planned.


A team of small, solar-powered rovers is traveling to the moon next year. There, the rovers will attempt to autonomously organize and carry out a mission with next to no input from NASA’s human controllers. If successful, similar robotic fleets could one day tackle a multitude of mission tasks, freeing their human teammates to focus on a host of other responsibilities.

Three robots, each roughly the size of a carry-on suitcase, comprise the Cooperative Autonomous Distributed Robotic Exploration (CADRE) project. The trio will descend onto the lunar surface via tethers deployed by a 13-foot-tall lander. From there, NASA managers back on Earth, such as CADRE principal investigator Jean-Pierre de la Croix, plan to transmit a basic command such as “Go explore this region.”

[Related: Meet the first 4 astronauts of the ‘Artemis Generation’.]

“[T]he rovers figure out everything else: when they’ll do the driving, what path they’ll take, how they’ll maneuver around local hazards,” de la Croix explained in an August 2 announcement via NASA. “You only tell them the high-level goal, and they have to determine how to accomplish it.”

The trio will even elect a “leader” at the mission’s outset to divvy up responsibilities, which will reportedly include traveling in formation, exploring a roughly 4,300-square-foot region of the moon, and creating 3D topographical maps of the area using stereoscopic cameras. The results of CADRE’s roughly 14-day excursion will better indicate the feasibility of deploying similar autonomous teams on space missions in the years to come.

Engineer observes a development model rover during a test for NASA’s CADRE technology demonstration in JPL’s Mars Yard
Credit: NASA / JPL-Caltech

As NASA notes, the mission’s robot trifecta requires a careful balance of form and function. Near the moon’s equator—where the CADRE bots will land—temperatures can rise as high as 237 degrees Fahrenheit. Each machine must be hardy enough to survive the harsh lunar climate and lightweight enough to get the job done, all while housing the computing power necessary to operate autonomously. To manage this, NASA engineers devised a 30-minute wake-sleep cycle that lets the robots sufficiently cool off, assess their respective health, and then elect a new leader to continue organizing the mission as necessary.
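
NASA’s description boils down to a loop: nap to shed heat, wake, compare health reports, pick a leader, get back to work. Here is a compact sketch of that cycle—the health metric and election rule are invented stand-ins, not flight software:

```python
import random

rovers = {"rover_1": 1.0, "rover_2": 1.0, "rover_3": 1.0}  # health scores (toy)

def wake_sleep_cycle(rovers):
    # Sleep phase: ~30 minutes of cooling, modeled here as a no-op.
    # Wake phase: each rover self-assesses after the nap; health can
    # only degrade in this toy model.
    for name in rovers:
        rovers[name] = max(0.0, rovers[name] - random.uniform(0.0, 0.1))
    # Election: the healthiest rover leads the next work block.
    return max(rovers, key=rovers.get)

for cycle in range(5):
    leader = wake_sleep_cycle(rovers)
    print(f"cycle {cycle}: leader={leader}, health={rovers}")
```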

“It could change how we do exploration in the future,” explains Subha Comandur, CADRE project manager for NASA’s Jet Propulsion Laboratory. “The question for future missions will become: ‘How many rovers do we send, and what will they do together?’”

The post NASA gears up to send a trio of rovers to the moon in 2024 appeared first on Popular Science.

Robots could now understand us better with some help from the web https://www.popsci.com/technology/deepmind-google-robot-model/ Mon, 31 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=559920
a robot starting at toy objects on table
This robot is powered by RT-2. DeepMind

A new type of language model could give robots insights into the human world.


Tech giant Google and its subsidiary AI research lab, DeepMind, have created a basic human-to-robot translator of sorts. They describe it as a “first-of-its-kind vision-language-action model.” The pair said in two separate announcements Friday that the model, called RT-2, is trained with language and visual inputs and is designed to translate knowledge from the web into instructions that robots can understand and respond to.

In a series of trials, the robot demonstrated that it can recognize and distinguish between the flags of different countries, tell a soccer ball from a basketball, and identify pop icons like Taylor Swift and items like a can of Red Bull.

“The pursuit of helpful robots has always been a herculean effort, because a robot capable of doing general tasks in the world needs to be able to handle complex, abstract tasks in highly variable environments — especially ones it’s never seen before,” Vincent Vanhoucke, head of robotics at Google DeepMind, said in a blog post. “Unlike chatbots, robots need ‘grounding’ in the real world and their abilities… A robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up.”

That means that training robots traditionally required generating billions of data points from scratch, along with specific instructions and commands. A task like telling a bot to throw away a piece of trash involved programmers explicitly training the robot to identify the object that is the trash, the trash can, and what actions to take to pick the object up and throw it away. 

For the last few years, Google has been exploring various avenues of teaching robots to do tasks the way you would teach a human (or a dog). Last year, Google demonstrated a robot that can write its own code based on natural language instructions from humans. Another Google subsidiary called Everyday Robots tried to pair user inputs with a predicted response using a model called SayCan that pulled information from Wikipedia and social media. 

[Related: Google is testing a new robot that can program itself]

Some examples of tasks the robot can do. DeepMind

RT-2 builds off a similar precursor model called RT-1 that allows machines to interpret new user commands through a chain of basic reasoning. Additionally, RT-2 possesses skills related to symbol understanding and human recognition—skills that Google thinks will make it adept as a general-purpose robot working in a human-centric environment. More details on what robots can and can’t do with RT-2 are available in a paper DeepMind and Google put online.

[Related: A simple guide to the expansive world of artificial intelligence]

RT-2 also draws from work on vision-language models (VLMs), which have been used to caption images, recognize objects in a frame, or answer questions about a certain picture. So, unlike SayCan, this model can actually see the world around it. But for a VLM to control a robot, it needs a component for outputting actions—and this is done by representing the different actions the robot can perform as tokens in the model. With this, the model can not only predict what the answer to someone’s query might be, but also generate the action most likely associated with it.
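
The token trick is easier to see in code. Here is a toy illustration of binning a continuous command into reserved vocabulary IDs—the bin count, token range, and three-number action are invented for the example, not RT-2’s actual scheme:

```python
# Map each continuous action dimension into one of 256 bins, and each
# bin to a reserved token ID, so the same decoder that emits words can
# emit an "action sentence."
ACTION_VOCAB_START = 32000  # hypothetical reserved token range
BINS = 256

def action_to_tokens(dx, dy, gripper):
    tokens = []
    for value in (dx, dy, gripper):  # each assumed to lie in [-1.0, 1.0]
        b = int((value + 1.0) / 2.0 * (BINS - 1))
        tokens.append(ACTION_VOCAB_START + b)
    return tokens

def tokens_to_action(tokens):
    return tuple(
        (t - ACTION_VOCAB_START) / (BINS - 1) * 2.0 - 1.0 for t in tokens
    )

tokens = action_to_tokens(0.25, -0.5, 1.0)
print(tokens)                    # [32159, 32063, 32255]
print(tokens_to_action(tokens))  # approximately (0.25, -0.5, 1.0)
```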

DeepMind notes that, for example, if a person says they’re tired and wants a drink, the robot could decide to get them an energy drink.

The post Robots could now understand us better with some help from the web appeared first on Popular Science.

What is DARPA? The rich history of the Pentagon’s secretive tech agency https://www.popsci.com/technology/what-is-darpa/ Sat, 29 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=559956
The U.S. Air Force X-37B Orbital Test Vehicle 4
The U.S. Air Force X-37B Orbital Test Vehicle 4 as seen in 2017. For a time, this vehicle was developed under DARPA. U.S. Air Force

The famous DOD research arm has been working on tech breakthroughs since 1958. Here's how it got started—and what it does now.


In 1957, the Soviet Union changed the night sky. Sputnik, the first satellite, broadcast from orbit for just 22 days, but its arrival brought a tremendous set of new implications for nations down on Earth, especially the United States. The USSR was ahead in orbit, and the rocket that launched Sputnik meant that the USSR would likely be able to launch atomic or thermonuclear warheads through space and back down to nations below.

In the defense policy of the United States, Sputnik became an example of “technological surprise,” or when a rival country demonstrates a new and startling tool. To ensure that the United States would always be the nation making the surprises, rather than the one being surprised, in 1958 President Dwight D. Eisenhower created what we now know as DARPA, the Defense Advanced Research Projects Agency.

Originally called the Advanced Research Projects Agency, or ARPA, ARPA/DARPA has had a tremendous impact on technological development, both for the US military and well beyond it. (Its name became DARPA in 1972, then ARPA again from 1993 to 1996, and it’s been DARPA ever since.) The most monumental achievement of DARPA is the precursor to the service that makes reading this article possible. That would be ARPANET, the immediate predecessor to the internet as we know it, which started as a way to guarantee continuous lines of communication over a distributed network. 

Other projects include the more explicitly military ones, like work on what became the MQ-1 Predator drone, and endeavors that exist in the space between the civilian and military world, like research into self-driving cars.

What is the main purpose of DARPA?

The specific military services have offices that can conduct their own research, designed to bring service-specific technological improvements. Some of these are the Office of Naval Research, the Air Force Research Laboratory, and the Army’s Combat Capabilities Development Command (DEVCOM). DARPA’s mission, from its founding, is to tackle research and development of technologies that do not fall cleanly into any of the services, that are considered worth pursuing on their own merits, and that may end up in the hands of the services later.

How did DARPA start?

Sputnik is foundational to the story of DARPA and ARPA. It’s the event that motivated President Eisenhower to create the agency by executive order. Missiles and rockets at the time were not new, but they were largely secret. During World War II, Nazi Germany had launched rockets carrying explosives against the United Kingdom. These V-2 rockets, complete with some of the engineers who designed and built them, were captured by the United States and the USSR, and each country set to work developing weapons programs from this knowledge.

Rockets on their own are a devastatingly effective way to attack another country, because they can travel beyond the front lines and hit military targets, like ammunition depots, or civilian targets, like neighborhoods and churches, causing disruption, terror, and devastation far behind the fighting. What so frightened the United States about Sputnik was that, instead of a rocket that could travel hundreds of miles within Earth’s atmosphere, this was a rocket that could go into space, demonstrating that the USSR had a rocket that could serve as the basis for an Intercontinental Ballistic Missile, or ICBM.

ICBMs carried with them a special fear, because they could deliver thermonuclear warheads, threatening massive destruction across continents. The US’s creation and use of atomic weapons, and then the development of hydrogen bombs (H-bombs), can also be understood as a kind of technological surprise, though both projects preceded DARPA.

[Related: Why DARPA put AI at the controls of a fighter jet]

Popular Science first covered DARPA in July 1959, with “U.S. ‘Space Fence’ on Alert for Russian Spy-Satellites.” It outlined the new threat posed to the United States from space surveillance and thermonuclear bombs, but did not cast a particularly favorable light on ARPA’s work.

“A task force or convoy could no longer cloak itself in radio silence and ocean vastness. Once spotted, it could be wiped out by a single H-bomb,” the story read. “This disquieting new problem was passed to ARPA (Advanced Research Projects Agency), which appointed a committee, naturally.”

That space fence formed an early basis for US surveillance of objects in orbit, a task that now falls to the Space Force and its existing tried-and-true network of sensors.

Did DARPA invent the internet?

Before the internet, electronic communications were routed through telecommunications circuits and switchboards. If a relay between two callers stopped working, the call would end, as there was no other way to sustain the communication link. ARPANET was built as a way to allow computers to share information, but pass it through distributed networks, so that if one node was lost, the chain of communication could continue through another.

“By moving packets of data that dynamically worked their way through a network to the destination where they would reassemble themselves, it became possible to avoid losing data even if one or more nodes went down,” describes DARPA.
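
That resilience is easy to demonstrate. Here is a toy packet-routing sketch over a topology loosely modeled on the first four ARPANET sites—the extra UCSB–Utah link is invented so the reroute is visible, and real IMPs used distributed routing tables rather than a central search:

```python
from collections import deque

links = {
    "UCLA": {"SRI", "UCSB"},
    "SRI": {"UCLA", "UTAH"},
    "UCSB": {"UCLA", "UTAH"},
    "UTAH": {"SRI", "UCSB"},
}

def route(src, dst, down=frozenset()):
    # Breadth-first search over surviving nodes (sorted for deterministic
    # output); packets take whatever path still exists, which is the
    # property ARPANET was built for.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(links[path[-1]] - seen):
            if nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("UCLA", "UTAH"))                # ['UCLA', 'SRI', 'UTAH']
print(route("UCLA", "UTAH", down={"SRI"}))  # ['UCLA', 'UCSB', 'UTAH']
```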

The earliest ARPANET, established in 1969 (it started running in October of that year), was a mostly West Coast affair. It connected nodes at University of California, Santa Barbara; University of California, Los Angeles; University of Utah; and Stanford Research Institute. By September 1971 it had reached the East Coast, and was a continent-spanning network connecting military bases, labs, and universities by the late 1970s, all sending communication over telephone lines.

[Related: How a US intelligence program created a team of ‘Superforecasters’]

Two other key innovations made ARPANET a durable template for the internet. The first was commissioning the first production of traffic routers to serve as relay points for these packets. (Modern wireless routers are a distant descendant of this earlier wired technology.) Another was setting up universal protocols for transmission and function, allowing products and computers made by different companies to share a communication language and form. 

The formal ARPANET was decommissioned in 1990, thanks in part to redundancy with the then-new internet. It had demonstrated that computer communications could work across great distances, through distributed networks. This became a template for other communications technologies pursued by the United States, like mesh networks and satellite constellations, all designed to ensure that sending signals is hard to disrupt.

“At a time when computers were still stuffed with vacuum tubes, the Arpanauts understood that these machines were much more than computational devices. They were destined to become the most powerful communications tools in history,” wrote Phil Patton for Popular Science in 1995.

What are key DARPA projects?

For 65 years, DARPA has spurred the development of technologies by funding projects and managing them at the research and development stage, before handing those projects off to other entities, like the service’s labs or private industry, to see them carried to full fruition. 

DARPA has had a hand in shaping technology across computers, sensors, robotics, autonomy, uncrewed vehicles, stealth, and even the Moderna COVID-19 vaccine. The list is extensive, and DARPA has ongoing research programs that make a comprehensive picture daunting. Not every one of DARPA’s projects yields success, but the ones that do have had an outsized impact, like the following list of game-changers:

Stealth: Improvements in missile and sensor technology made it risky to fly fighters into combat. During the Vietnam War, the Navy and Air Force adapted with “wild weasel” missions, where daring pilots would draw fire from anti-air missiles and then attempt to out-maneuver them, allowing others to destroy the radar and missile launch sites. That’s not an ideal approach. Stealth, in which the shape and materials of an aircraft are used to minimize its appearance on enemy sensors, especially radar, was an adaptation pursued by DARPA to protect aircraft. DARPA’s development of stealth demonstrator HAVE BLUE (tested at Area 51) paved the way for early stealth aircraft like the F-117 fighter and B-2 bomber, which in turn cleared a path for modern stealth planes like the F-22 and F-35 fighters, and the B-21 stealth bomber.

Vaccines: In 2011, DARPA started its Autonomous Diagnostics to Enable Prevention and Therapeutics (ADEPT) program. Through this, in 2013 Moderna received $25 million from DARPA, funding that helped support its work. It was a bet that paid off tremendously in the COVID-19 pandemic, and was one of many such efforts to fund and support everything from diagnostic to treatment to production technologies.

Secret space plane: The X-37B is a maneuverable shuttle-like robotic space plane that started as a NASA program, was developed under DARPA for a time, and then became an Air Force project. Today it is operated by Space Force. This robot can remain in orbit for extraordinarily long lengths of time, with a recent mission lasting over 900 days. The vehicle serves as a testbed for a range of technologies, including autonomous orbital flight as well as sensors and materials testing. There is some speculation as to what the X-37B will lead to in orbit. For now, observations match its stated testing objectives, but the possibility that a reusable, maneuverable robot could prove useful in attacking satellites is one that many militaries are cautiously worried about. 

That may be a list of some of DARPA’s greatest hits, and in recent years it’s announced projects relating to jetpacks, cave cartography, and new orbits for satellites. It even has a project related to scrap wood and paper, cleverly called WUD.

The post What is DARPA? The rich history of the Pentagon’s secretive tech agency appeared first on Popular Science.

A new kind of thermal imaging sees the world in striking colors https://www.popsci.com/technology/hadar-thermal-camera/ Wed, 26 Jul 2023 16:00:00 +0000 https://www.popsci.com/?p=559135
Thermal vision of a home.
Thermal imaging (seen here) has been around for a while, but HADAR could up the game. Deposit Photos

Here's how 'heat-assisted detection and ranging,' aka HADAR, could revolutionize AI visualization systems.


A team of researchers has designed a completely new camera imaging system based on AI interpretations of heat signatures. Once refined, “heat-assisted detection and ranging,” aka HADAR, could one day revolutionize the way autonomous vehicles and robots perceive the world around them.

The image of a robot visualizing its surroundings solely through heat-signature cameras remains in the realm of sci-fi for a reason—basic physics. Although objects constantly emit thermal radiation, that radiation scatters and blends with emissions from the surrounding environment, resulting in heat vision’s trademark murky, textureless imagery, an issue understandably referred to as “ghosting.”

[Related: Stanford researchers want to give digital cameras better depth perception.]

Researchers at Purdue University and Michigan State University have remarkably solved this persistent problem using machine learning algorithms, according to their paper published in Nature on July 26. Employing AI trained specifically for the task, the team was able to derive the physical properties of objects and surroundings from information captured by commercial infrared cameras. HADAR cuts through the optical clutter to detect temperature, material composition, and thermal radiation patterns—regardless of visual obstructions like fog, smoke, and darkness. HADAR’s depth and texture renderings thus create incredibly detailed, clear images no matter the time of day or environment.

HADAR versus ‘ghosted’ thermal imaging. Credit: Nature
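Conceptually, HADAR decomposes each pixel’s infrared spectrum into temperature, emissivity (a fingerprint of the material), and “texture” (radiation bouncing in from the surroundings), a factoring the paper calls TeX vision. The snippet below is a minimal sketch of that idea built on the standard Planck blackbody law; the scene values are invented, and the brute-force fit stands in for the trained neural networks the paper actually uses.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

# Simplified forward model: the observed thermal signal mixes emitted and
# reflected radiation, weighted by the surface emissivity e:
#   S(lambda) = e * B(lambda, T) + (1 - e) * X(lambda)
# where X is "texture" (radiation reflected from the surroundings).
wavelengths = np.linspace(8e-6, 14e-6, 49)   # long-wave infrared band
T_true, e_true = 305.0, 0.85                 # a warm, fairly matte object
X = planck_radiance(wavelengths, 290.0)      # cooler ambient reflections
S = e_true * planck_radiance(wavelengths, T_true) + (1 - e_true) * X

# Toy inversion: grid-search the (T, e) pair whose predicted spectrum
# best matches the observation, recovering temperature and material.
Ts = np.linspace(270, 340, 141)
es = np.linspace(0.05, 1.0, 96)
grid_T, grid_e = np.meshgrid(Ts, es, indexing="ij")
pred = (grid_e[..., None] * planck_radiance(wavelengths, grid_T[..., None])
        + (1 - grid_e[..., None]) * X)
err = ((pred - S) ** 2).sum(axis=-1)
i, j = np.unravel_index(err.argmin(), err.shape)
print(f"recovered T = {Ts[i]:.1f} K, e = {es[j]:.2f}")  # ~305 K, ~0.85
```

With many wavelengths per pixel, a fit like this can separate a warm, matte object from the cooler reflections around it, which is exactly the information that ghosting normally buries.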

“Active modalities like sonar, radar and LiDAR send out signals and detect the reflection to infer the presence/absence of any object and its distance. This gives extra information of the scene in addition to the camera vision, especially when the ambient illumination is poor,” Zubin Jacob, a professor of electrical and computer engineering at Purdue and article co-author, tells PopSci. “HADAR is fundamentally different, it uses invisible infrared radiation to reconstruct a night-time scene with clarity like daytime.”

One look at HADAR’s visual renderings makes it clear (so to speak) that the technology could soon become a vital part of AI systems within self-driving vehicles, autonomous robots, and even touchless security screenings at public events. That said, a few hurdles remain before cars can navigate 24/7 thanks to heat sensors: HADAR is currently expensive, requires real-time calibration, and is still susceptible to environmental barriers that detract from its accuracy. Even so, researchers are confident these barriers can be overcome in the near future, allowing HADAR to find its way into everyday systems. In the meantime, HADAR is already proving beneficial to at least one of its creators.

“To be honest, I am afraid of the dark. Who isn’t?” writes Jacob. “It is great to know that thermal photons carry vibrant information in the night similar to daytime. Someday we will have machine perception using HADAR which is so accurate that it does not distinguish between night and day.”

Deep underground, robotic teamwork saves the day https://www.popsci.com/technology/search-and-rescue-robots/ Wed, 26 Jul 2023 01:00:00 +0000 https://www.popsci.com/?p=558901
CREDIT: DARPA

Deploying a motley crew of robots that can roll, walk and fly is a smart strategy for search-and-rescue operations.


This article originally appeared in Knowable Magazine.

When a Manhattan parking garage collapsed in April this year, rescuers were reluctant to stay in the damaged building, fearing further danger. So they used a combination of flying drones and a doglike walking robot to inspect the damage, look for survivors and make sure the site was safe for human rescuers to return.

Despite the robot dog falling over onto its side while walking over a pile of rubble — a moment that became internet-famous — New York Mayor Eric Adams called the robots a success, saying they had ensured there were no overlooked survivors while helping keep human rescuers safe.

Soon, rescuers may be able to call on a much more sophisticated robotic search-and-rescue response. Researchers are developing teams of flying, walking and rolling robots that can cooperate to explore areas that no one robot could navigate on its own. And they are giving robots the ability to communicate with one another and make many of their own decisions independent of their human controller.

Such teams of robots could be useful in other challenging environments like caves or mines where it can be difficult for rescuers to find and reach survivors. In cities, collapsed buildings and underground sites such as subways or utility tunnels often have hazardous areas where human rescuers can’t be sure of the dangers.

Operating in such places has proved difficult for robots. “You have mud, rock, rubble, constrained passages, large open areas … Just the range and complexity of these environments present a lot of mobility challenges for robots,” says Viktor Orekhov, a roboticist and technical advisor to the Defense Advanced Research Projects Agency (DARPA), which has been funding research into the field.

Underground spaces are also dark and can be full of dust or smoke if they are the site of a recent disaster. Even worse, the rock and rubble can block radio signals, so robots tend to lose contact with their human controller the farther they go.

Despite these difficulties, roboticists have made progress, says Orekhov, who coauthored an overview of their efforts in the 2023 Annual Review of Control, Robotics, and Autonomous Systems.

One promising strategy is to use a mix of robots, with some combination of treads, wheels, rotors and legs, to navigate the different spaces. Each type of robot has its own unique set of strengths and weaknesses. Wheeled or treaded robots can carry heavy payloads, and they have big batteries that allow them to operate for a long time. Walking robots can climb stairs or tiptoe over loose rubble. And flying robots are good at mapping out big spaces quickly.

There are also robots that carry other robots. Flying robots tend to have relatively short battery lives, so rescuers can call on “marsupials” — wheeled, treaded or legged robots that carry the flying robots deep into the area to be explored, releasing them when there is a big space that needs to be mapped.

The US government’s Defense Advanced Research Projects Agency (DARPA) challenged robotics researchers to develop teams of robots that could explore a complex underground space. Here’s a legged robot from the winning team, shown exploring a tunnel. Its legs allow it to cross uneven terrain more easily.
CREDIT: ROBOTIC SYSTEMS LAB: LEGGED ROBOTICS AT ETH ZÜRICH

A team of robots also allows for different instruments to be used. Some robots might carry lights, others radar, sonar or thermal imaging tools. This diversity allows different robots to see under varied conditions of light or dust. All of the robots, working together, provide the humans that deploy them with a constantly growing map of the space they are working in.
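Real systems fuse full 3D maps with careful alignment, but the core bookkeeping behind that growing shared picture can be as simple as taking the union of everyone’s explored cells. A toy illustration, with the maps and robot roles invented:

```python
import numpy as np

# Invented 4x6 occupancy masks: 1 = area a robot has explored, 0 = unknown.
drone_map = np.zeros((4, 6), dtype=int)
walker_map = np.zeros((4, 6), dtype=int)
drone_map[:2, :] = 1      # the drone quickly swept the open upper hall
walker_map[2:, :3] = 1    # the walker picked through a rubble-strewn corner

# The shared map is the union of everyone's coverage.
shared = np.maximum(drone_map, walker_map)
print(shared)
print(f"explored: {shared.mean():.0%}")   # 75% of the grid
```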

Although teams of robots are good for overall mobility, they present a new problem. A human controller can have difficulty coordinating such a team, especially in underground environments, where thick walls block out radio signals.

One solution is to make sure the robots can communicate with one another. That allows a robot that’s gone deeper and lost radio contact with the surface to potentially relay messages through other robots that are still in touch. Robots could also extend the communications range by dropping portable radio relays, sometimes called “bread crumbs,” while on the move, making it easier to stay in contact with the controller and other robots.
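The effect of those relays is easy to picture as a graph problem: a robot stays in touch as long as some chain of radios, each within range of the next, links it back to base. A toy connectivity check, with the positions and radio range made up for illustration:

```python
from collections import deque

def in_contact(base, nodes, radius):
    """Breadth-first search over the 'within radio range' graph,
    returning every node with some relay chain back to base."""
    def linked(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= radius ** 2
    reached, queue = {base}, deque([base])
    while queue:
        here = queue.popleft()
        for node in nodes:
            if node not in reached and linked(here, node):
                reached.add(node)
                queue.append(node)
    return reached

# Hypothetical layout in meters: base at the tunnel mouth, two dropped
# "bread crumb" relays, and a robot 90 m in; direct range is only 40 m.
base, robot = (0, 0), (90, 15)
crumbs = [(35, 0), (65, 10)]
print(robot in in_contact(base, crumbs + [robot], radius=40))   # True
print(robot in in_contact(base, [robot], radius=40))            # False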

Even when communication is maintained, though, the demands of operating several robots at once can overwhelm a single person. To solve that problem, researchers are working on giving the robots autonomy to cooperate with one another.

In 2017, DARPA funded a multiyear challenge to develop technologies for robots deployed underground. Participants, including engineers working at universities and technology companies, had to map and search a complex subterranean space as quickly and efficiently as possible.

Participants in the DARPA challenge used teams of robots to explore a varied underground space that included tunnels, caves and urban spaces such as subway stations.
CREDIT: DARPA

The teams that performed best at this task were those who gave the robots some autonomy, says Orekhov. When robots lost touch with one another and their human operator, they could explore on their own for a certain amount of time, then return to radio range and communicate what they had found.

One team, from Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO), took this further by designing its robots to make decisions cooperatively, says Navinda Kottege, a CSIRO roboticist who led the effort. The robots themselves decided which tasks to undertake — whether to map this room, explore that corridor or drop a communications node in a particular spot.

The robots also decided how to split up the work most effectively. If a rolling robot spotted a corridor that was too narrow to enter, a smaller walking robot could come and take over the job. If one robot needed to upload information to the base station, it might transmit it to a robot that was nearer to the entrance, and ask that robot to walk back to within communications range.

“There were some very interesting emergent behaviors. You could see robots swapping tasks amongst themselves based on some of those factors,” Kottege says.
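One common way to get that kind of task swapping without a central boss is a market-style auction: every capable robot “bids” its estimated cost for a task, the cheapest bid wins, and a winner’s growing workload raises its later bids. The sketch below is a deliberately simplified illustration of the idea, not CSIRO’s actual software; the robot names and cost numbers are invented.

```python
# Invented cost tables (seconds per task); a missing entry means the
# robot physically can't do that task (e.g., too wide for a corridor).
ROBOTS = {
    "tracked-1": {"map_room": 80, "drop_node": 20},
    "legged-1":  {"map_room": 60, "narrow_corridor": 45, "drop_node": 30},
    "drone-1":   {"map_room": 25},
}

def auction(tasks, robots):
    """Each task goes to the lowest bidder; bids grow with workload."""
    load = {name: 0 for name in robots}
    plan = []
    for task in tasks:
        bids = [(cost[task] + load[name], name)
                for name, cost in robots.items() if task in cost]
        if not bids:
            plan.append((task, None))   # nobody on the team can do it
            continue
        _, winner = min(bids)
        load[winner] += robots[winner][task]
        plan.append((task, winner))
    return plan

tasks = ["map_room", "map_room", "narrow_corridor", "drop_node"]
for task, winner in auction(tasks, ROBOTS):
    print(f"{task:>15} -> {winner}")
```

Run on these made-up numbers, the fast drone wins the mapping jobs, the legged robot takes the corridor only it can fit through, and the tracked robot picks up the relay drop: the same division of labor the CSIRO team observed emerging on its own.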

In fact, the human operator can become the weak link. In one effort, a CSIRO robot wouldn’t enter a corridor, even though an unexplored area lay beyond it. The human operator took over and steered the robot through — but it turned out that the corridor had an incline that was too steep for the robot to manage. The robot knew that, but the human didn’t.

“So it did a backflip, and it ended up crushing the drone on its back in the process,” Kottege says.

To correct the problem, the team built a control system that lets the human operator decide on overall strategy, such as which parts of the course to prioritize, and then trusts the robots to make the on-the-ground decisions about how to get it done. “The human support could kind of mark out an area in the map, and say, ‘This is a high priority area, you need to go and look in that area,’” Kottege says. “This was very different than them picking up a joystick and trying to control the robots.”

This autonomous team concept broke new ground in robotics, says Kostas Alexis, a roboticist at the Norwegian University of Science and Technology whose team ultimately won the challenge. “The idea that you can do this completely autonomously, with a single human controlling the team of robots, just providing some high-level commands here and there … it had not been done before.”

Ideally, underground robots should be able to explore autonomously, allowing them to work even when rock or other materials block radio contact with the surface. This video shows how robots can search and map an unknown space on their own, and move through it safely. (Warning: This video has flashing lights.)
CREDIT: KOSTAS ALEXIS

There are still problems to overcome, Orekhov notes. During the competition, for example, many robots broke down or got stuck and needed to be hauled off the course when the competition was over. After just an hour, most teams had only one or two functioning robots left.

But as robots become better, teams of them may one day be able to go into a hazardous disaster site, locate survivors and report back to their human operators with a minimum of supervision.

“There’s definitely lots more work that can and needs to be done,” Orekhov says. “But at the same time, we’ve seen the ability of the teams advance so rapidly that even now, with their current capabilities, they’re able to make a significant difference in real-life environments.”

10.1146/knowable-072023-2

Kurt Kleiner is a freelance writer living in Toronto.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. Sign up for the newsletter.

Knowable Magazine | Annual Reviews

Robot limbs could keep satellite plasma thrusters at an arm’s length https://www.popsci.com/technology/esa-onesat-robot-arms/ Tue, 25 Jul 2023 18:00:00 +0000 https://www.popsci.com/?p=559018
onesat robotic arm
This robot arm enables the OneSat to change and control its own orbit. ESA

The ESA's OneSat is getting a pair of robotic appendages to help improve its fuel efficiency.


Satellites are usually a bespoke endeavor, with a range of specific needs and logistical plans required to meet project requirements. To simplify some of these challenges, a team of researchers and engineers from the European Space Agency (ESA), France’s CNES, the UK Space Agency, and Airbus unveiled the OneSat a few years ago: a standardized telecommunications satellite capable of adjusting capacity, coverage areas, and frequency “on the fly” while in orbit.

On Tuesday, the ESA announced that a new feature had passed its inspection reviews and is ready to ship with future OneSat launches. The addition is a “deployment and pointing system” featuring robotic arms capable of positioning a satellite’s plasma thrusters far away from its body, which will help optimize the use of OneSat’s xenon fuel reserves.

[Related: DARPA wants to push the boundaries of where satellites can fly.]

As tech news site The Next Web noted on Tuesday, the announcement means that OneSat is now “fully propelled by European technology.” In its official statement, the ESA explained that, “The deployment and pointing system promotes European autonomy and constitutes an essential feature of the industrial footprint in Europe of OneSat.”

Construction of the OneSat deployment and pointing system was truly a multinational effort within Europe: France’s Airbus designed the system, while Belgian manufacturer Euro Heat Pipes built the devices. A company in Spain provided the rotary actuator, while the booms, harnesses, and plasma thrusters were all developed and assembled by multiple French suppliers.

[Related: This giant solar power station could beam energy to lunar bases.]

OneSat deployment is meant to have tangible effects across the world, including providing traditional TV broadcasting, boosting in-flight internet connections for air travelers, and bringing communications access to remote communities where it was previously unreliable or wholly lacking.

Because of their modular design, The Next Web explained, each OneSat can be built largely from off-the-shelf components, potentially allowing them to enter the market in half the time of other satellite options, and at lower cost. Multiple companies around the world have already placed orders for OneSat, including Japan’s main satellite operator, SKY Perfect JSAT Corporation. According to the ESA, this marks the first time a European satellite has been sold to a Japanese telecom company.

An electric cow, a robot mailman, and other automatons we overestimated https://www.popsci.com/technology/robot-fails/ Sat, 15 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=557015
Vintage robots
Predicting the future is fraught with peril. Popular Science

A look back at some robotic inventions that didn't quite get there.


In the series I Made a Big Mistake, PopSci explores mishaps and misunderstandings, in all their shame and glory.


In Hollywood, robots have come in many shapes and sizes. There’s the classic, corrugated-tubing-limbed Robot from the television series Lost In Space (1965); the clunky C-3PO and cute R2-D2, the Star Wars (1977) duo; the tough Terminator from The Terminator (1984) played by Arnold Schwarzenegger; the mischievous Johnny 5 from Short Circuit (1986); the kind-hearted, ill-fated Sonny in I, Robot (2004); and WALL-E (2008), the endearing trash-collecting robot. Robot-reality, however, still lags behind robot-fiction by quite a bit. Even Elon Musk’s October 2022 debut of Optimus—a distinctly masculine humanoid-frame robot prototype built by Tesla that, for the first time, wobbled along sans cables—failed to wow critics, who compared it to decades-old Japanese robotics and noted that it lacked any differentiating capabilities. 

And yet, automatons—self-propelled machines—are not new. More than two millennia ago, Archytas, an inventor from ancient Greece, built a pulley-activated wooden dove, capable of flapping its wings and flying a very short distance (a puff of air triggered a counterweight that set the bird in motion). Around the 12th century, Al-Jazari, a prolific Muslim inventor, built a panoply of automatons, including a water-powered mechanical orchestra—a harpist, a flutist, and two drummers—that rowed around a lake by means of mechanical oarsmen. Leonardo da Vinci’s notebooks are peppered with detailed sketches of various automatons, including a mechanical knight capable of sitting up, waving its arms, and moving its head, which purportedly debuted in 1495. But it was Czech playwright Karel Čapek, in his 1920 play R.U.R. (Rossum’s Universal Robots), who first coined the term “robot” for a distinct category of automaton. Robot comes from the Czech robota, which means forced labor. As Popular Science editor Robert E. Martin wrote in December 1928, a robot is a “working automaton,” built to serve humans. Isaac Asimov enshrined Čapek’s forced-labor concept in his three laws of robotics, which first appeared in 1942 in his short story “Runaround.”

Predicting the future is fraught with peril, especially for the science writer enthralled by the promise of a new technology. But that hasn’t stopped Popular Science writers and editors from trying. Past issues are peppered with stories of robots ready to take the world by storm. And yet, our domestic lives are still relatively robot-free. (Factory automation is another story.) That’s because we underestimate just how sophisticated humans can be, taking on menial tasks with ease, like sorting and folding laundry. Even in the 21st century, service and domestic robots disappoint: design-challenged, single-purpose machines, like the pancake-shaped vacuums that knock about our living rooms. Advances in machine learning may finally add some agility and real-world adaptability to the next generation of robots, but until we get there (if we get there), a look back at some of the stranger robotic inventions, shaped by the miscalculations and misguided visions of their human inventors, might inform the future.

Robots for hire

Popular Science August 1940 Issue

Looking for “live” entertainment to punctuate a party, banquet, or convention? Renting out robot entertainers may have roots as far back as 1940, according to a Popular Science story that described the star-studded life of Clarence the robot. Clarence, who resembled a supersized Tinman, could walk, talk, gesture with his arms, and “perform other feats.” More than eight decades later, however, robot entertainers are only slightly more sophisticated than their 1940s ancestor, even if they do have sleeker forms. For instance, Disney deploys talking, arm-waving, wing-flapping robots to animate rides, but they’re still pre-programmed to perform a limited range of activities. Chuck E. Cheese, which made a name for itself decades ago by fusing high-tech entertainment with the dining experience, has been phasing out its once-popular animatronics. Pre-programmed, stiff-gestured animal robots seem to have lost their charm for kiddos. They still can’t dance, twirl, or shake their robot booties. Not until Blade Runner-style androids hit the market will robot entertainment be worth the ticket price.

Animatronics that smoke, drink, and—moo

Popular Science May 1933

In May 1933, Popular Science previewed the dawn of animatronics, covering a prototype bound for the 1934 Chicago World’s Fair. The beast in question was not prehistoric, did not stalk its prey, and had no teeth to bare, but it could moo, wink its eyes, chew its cud, and even squirt a glassful of milk. The robotic cow may have been World’s Fair-worthy in 1933, but by 1935, Brooklyn inventor Milton Tenenbaum upped the stakes when he introduced a life-like mechanical dummy that, according to Popular Science, was known for “singing, smoking, drinking, and holding an animated conversation.” Tenenbaum proposed using such robots for “animated movie cartoons.” Although Hollywood was slow to adopt mooing cows and smoking dummies, Tenenbaum may have been crystal-balling the animatronics industry that eventually propelled blockbuster films like Jaws, Jurassic Park, and Aliens. Alas, with the advent of AI-generated movies, like Waymark’s The Frost, released in March 2023, animatronic props may be doomed to extinction.

The robot mailman

Popular Science October 1976 Issue

In October 1976, Popular Science saw the automated future of office mail delivery, declaring that the “Mailmobile is catching on.” Mailmobiles were (past tense) automated office mail carts that followed “a fluorescent chemical that can be sprayed without harm on most floor surfaces.” Later models used laser-guidance systems to navigate office floors. Mailmobiles were likely doomed by the advent of email, not to mention the limitations of their singular purpose. But in their heyday they were loved by their human office workers, who bestowed them with nicknames like Ivan, Igor, and Blue-eyes. A Mailmobile even played a cinematic role in the FX series The Americans. Despite being shuttered in 2016 by their manufacturer, Dematic (the original manufacturer was Lear Siegler, which also made Lear jets), there’s no denying their impressive four-decade run. Of course, the United States Postal Service employs automation to process mail, including computer vision and sophisticated sorting machines, but you’re not likely to see your mail delivered by a self-driving Mailmobile anytime soon.

Lawn chair mowers


Suburban homeowners would probably part with a hefty sum for a lawn-mowing robot that really works. Today’s generation of wireless automated grass-cutters may be a bit easier to operate than the tethered type that Popular Science described in April 1954, but they’re still sub-par when it comes to navigating the average lawn, including steep grades, rough turf, and irregular geometries. In other words, more than a half century after their debut, the heart-stopping price tags on robot lawn mowers are not likely to appeal to most homeowners. Sorry suburbanites—lawn-chair mowing is still a thing of the future.

Teaching robots

Popular Science May 1983 Issue

It was in the early 1980s that companies began to roll out what Popular Science dubbed personal robots in the May 1983 issue. With names like B.O.B, HERO, RB5X, and ITSABOX for their nascent machines, the fledgling companies had set their sights on the domestic service market. According to one of the inventors, however, there was a big catch: “Robots can do an enormous number of things. But right now they can’t do things that require a great deal of mechanical or cognitive ability.” That ruled out just about everything on the home front, except, according to the inventors and, by extension, Popular Science, “entertaining guests and teaching children.” Ahem. Teaching children doesn’t require a great deal of cognitive ability? Go tell that to a teacher. Gaffes aside, fast forward four decades and, with the capabilities of large language models demonstrated by applications like OpenAI’s ChatGPT, we might be on the cusp of building robots with just enough cognitive ability to somewhat augment the human learning experience (if they ever learn to get the facts right). As for robots that can reliably fold laundry and cook dinner while you’re at work? Don’t hold your breath.

Chipotle is testing an avocado-pitting, -cutting, and -scooping robot https://www.popsci.com/technology/chipotle-avocado-robot/ Thu, 13 Jul 2023 19:00:00 +0000 https://www.popsci.com/?p=556746
Chipotle worker removing peeled and sliced avocados from Autocado robot
Autocado halves, peels, and cores avocados in half the time humans can. Chipotle

The prototype machine reportedly helps workers cut the time it takes to make guac by half.


According to Chipotle, it takes human employees approximately 50 minutes to cut, core, and scoop enough avocados for a fresh batch of guacamole. It’s such a labor-intensive process that some locations reportedly have workers wholly dedicated to composing the condiment. The time it takes to complete the lengthy task could soon be cut in half, however, thanks to a new robotic coworker.

On Wednesday, Chipotle announced its partnership with the food automation company Vebu to roll out the Autocado—an aptly named “avocado processing cobotic prototype” designed specifically to prepare the fruit for human hands to then mash into tasty guac.

[Related: You’re throwing away the healthiest part of the avocado.]

Per the company’s announcement, Chipotle locations throughout the US, Canada, and Europe are estimated to run through 4.5 million cases of avocados in 2023—reportedly over 100 million pounds of fruit. The Autocado is designed to cut down on labor time and to optimize the amount of avocado harvested from each fruit, which would not only save the company money but also cut down on food waste.

To use the Autocado, employees first dump up to 25 pounds of avocados into a loading area. Artificial intelligence and machine learning then vertically orient each individual fruit before moving it along to a processing station to be halved, cored, and peeled. Employees can then retrieve the ready avocados from a basin, combine them with the additional guacamole ingredients, and mash away.

“Our purpose as a robotic company is to leverage automation technology to give workers more flexibility in their day-to-day work,” said Vebu CEO Buck Jordan in Wednesday’s announcement.

[Related: Workplace automation could affect income inequality even more than we thought.]

But as Engadget and other automation critics have warned, such robotic rollouts can often result in sacrificing human jobs for businesses’ bottom lines. In one study last year, researchers found that job automation may extract an even heavier toll on workers’ livelihoods, job security, and quality of life than previously believed. Chipotle’s Autocado machine may not contribute to any layoffs just yet, but it isn’t the only example of the company’s embrace of similar technology: a tortilla chip-making robot rolled out last year as well.

Automation isn’t only limited to burrito bowls, of course. Wendy’s recently announced plans to test an underground pneumatic tube system to deliver food to parking spots, while Panera is experimenting with AI-assisted coffeemakers. Automation isn’t necessarily a problem if human employees are reassigned or retrained in other areas of service, but it remains to be seen which companies will move in that direction.

Although only one machine is currently being tested at the Chipotle Cultivate Center in Irvine, California, the company hopes Autocado could soon become a staple of many franchise locations.

Correction 7/13/23: A previous version of this article referred to Chipotle’s tortilla chip making robot as a tortilla making robot.

Four-legged dog robots could one day explore the moon https://www.popsci.com/technology/robot-dog-team-moon/ Wed, 12 Jul 2023 18:00:00 +0000 https://www.popsci.com/?p=556224
Three quadruped robots standing on rocky terrain
Teams of quadruped robots could one day prove vital to lunar mining. ETH ZURICH / TAKAHIRO MIKI

Built-in redundancies may enable teams of quadruped dog bots to explore the lunar surface.


Humans are going to need a lot of supplies if they hope to establish a permanent base on the moon—an incredibly expensive logistical hurdle to clear. While return missions can hypothetically restock a great deal of the astronauts’ needs, it would be a lot cheaper and easier to harvest at least some of the necessary materials on site for base construction and repair projects. Of course, doing so will require serious teamwork to pull off—a team that could one day include packs of four-legged robots.

According to a study published on Wednesday in Science Robotics, researchers at Switzerland’s ETH Zurich university recently oversaw a series of outdoor excursions for a trio of modified quadruped ANYmal robots. Researchers tested their team on a variety of terrains across Switzerland and at the European Space Resources Innovation Centre (ESRIC) in Luxembourg.

[Related: NASA could build a future lunar base from 3D-printed moon-dust bricks.]

Engineers at ETH Zurich worked alongside the Universities of Basel, Bern, and Zurich to program each ANYmal with specific lunar tasks: One was taught to utilize a microscopy camera alongside a spectrometer to identify varieties of rock, while another focused on using cameras and a laser scanner to map and classify its surrounding landscape. Finally, a third robot could both identify rocks and map its surroundings—albeit less precisely for each task than either of its companions.

“Using multiple robots has two advantages,” doctoral student and researcher Philip Arm explains. “The individual robots can take on specialized tasks and perform them simultaneously. Moreover, thanks to its redundancy, a robot team is able to compensate for a teammate’s failure.” Because of their overlaps, a mission could still be completed even if one of the three robots breaks down during its duties.
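That overlap is what fault-tolerance engineers call graceful degradation, and it is simple to check in code: simulate losing each robot in turn and ask whether the survivors’ capability sets still cover every mission task. A toy version, with the capability table invented to mirror the trio described above:

```python
# Invented capability table mirroring the article's three-robot team.
TEAM = {
    "scout-1": {"classify_rock"},
    "scout-2": {"map_terrain"},
    "scout-3": {"classify_rock", "map_terrain"},   # the generalist backup
}
MISSION = {"classify_rock", "map_terrain"}

def mission_survives(team, failed):
    """True if the robots outside `failed` still cover every task."""
    remaining = set().union(*(caps for name, caps in team.items()
                              if name not in failed))
    return MISSION <= remaining

for down in TEAM:
    verdict = "survives" if mission_survives(TEAM, {down}) else "fails"
    print(f"losing {down}: mission {verdict}")
```

On this toy team, every single-robot failure still leaves the mission covered, which is the property the researchers built their trio around.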

The team’s redundancy-focused explorers even won an ESRIC and ESA Space Resources Challenge, which tasked competitors with locating and identifying minerals placed throughout a test area modeled after the lunar surface. After the team took first place, the jury provided another year of funding to expand both the number and variety of its robots. Researchers say future iterations of the lunar exploration team could include both wheeled and flying units. Although all of the robots’ tasks and maneuvers are currently directly controlled by human inputs, the researchers also hope to eventually upgrade their explorers to be semi-autonomous.

The best robot vacuums of 2023 https://www.popsci.com/story/reviews/best-robot-vacuum/ Mon, 01 Nov 2021 11:00:00 +0000 https://www.popsci.com/uncategorized/best-robot-vaccum/

Here’s what to look for when you’re shopping, plus a few of our favorite models.


We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

Best smart: eufy BoostIQ RoboVac 30C MAX

Connect to the eufyHome app and Alexa or Google Assistant for streamlined cleaning, where you can control schedules, receive notifications, and locate your vacuum.

Best vacuum and mop combo: Roborock S7 MaxV Ultra Robot Vacuum and Mop

This two-in-one pick uses artificial intelligence and 3D scanning to map out your home and provides strong suction and sonic scrubbing power.

Best self-emptying: iRobot Roomba s9+ Robot Vacuum

With powerful suction and a self-emptying function, you can go up to 60 days without emptying the canister. It’s never been easier to maintain a clean home without lifting a finger.

Nothing beats hands-free cleaning—and it truly doesn’t get any better than a robot vacuum. With the push of a button, the best robot vacuums can tackle the largest room in your house without wasting any time. They’re equipped with special features like a quick connection to handheld devices or the ability to remember the overall layout of each room in your home. Stop spending hours panic-vacuuming before guests come over or doing chores on your weekends. Enjoy more free time while these devices take care of the dirty work. All you need to operate a robot vacuum is an open outlet for its charging port and you’re ready to roll. Below are our favorite options and the things you will want to consider in your search for the best robot vacuum cleaner.

How we selected the best robot vacuums

We compared a range of over 50 robot vacuum models on price, brand, added features, mapping technology, reviews, and battery life. No two people’s cleaning needs are the same, which is why we provided a variety of options—from mop-only picks from Samsung that can cut through stubborn grime to self-emptying models for those who don’t want to lift a pinky. Many of the brands we selected have made a name for themselves in tech and vacuums, so you can be sure you’re choosing a robo-vac that will be both reliable and worth the investment.

Best robot vacuums: Reviews & Recommendations

Best smart: eufy BoostIQ RoboVac 30C MAX


Why it made the cut: With a large-capacity dust bin and powerful motor, the eufy is a great pick for just about any home.

Specs:

  • Surfaces: Hard floor, carpet
  • Bin size: 0.6 L
  • Run time: Maximum 100 minutes

Pros:

  • Strong suction power
  • Voice-control equipped
  • Boundary strips for personalized control

Cons:

  • Does not map floor plan

The brand eufy is a branch of Anker Innovations founded by Steven Yang in 2011, following his work with Google. The goal of eufy is to create products that make the “smart home simplified” with a focus on accessibility and convenience. The company’s devices are designed to easily connect with one another, creating cohesion and coherence in the home, from wireless security systems to robot vacuums and light bulbs. And the RoboVac from eufy is as “smart” as it gets when it comes to robot vacuums. It connects to Alexa and Google Assistant, as well as the specifically designed eufyHome app where you can set cleaning schedules, direct the clean with remote control, receive notifications, and locate your robot. You can easily program boundary strips that the RoboVac will identify using 10 built-in sensors as it uses the bounce method to clean.

Best vacuum and mop combo: Roborock S7 MaxV Ultra Robot Vacuum and Mop


Why it made the cut: This pick auto-detects the difference between carpet and hard floors to give you a complete and hassle-free clean.

Specs:

  • Surfaces: Hard floor, carpet, tile
  • Bin size: 300 mL water tank
  • Run time: Three hours

Pros:

  • AI detects obstacles
  • Long battery life
  • Self-cleaning

Cons:

  • Expensive

Our favorite robot vacuum-and-mop hybrid is the Roborock S7 MaxV Ultra Robot Vacuum and Mop. This state-of-the-art device is designed with LiDAR navigation, reactive artificial intelligence, and structured-light 3D scanning to help it map out your home and steer clear of obstacles like shoes and toys. The Roborock also provides powerful suction of 5,100 Pa. The mopping function, meanwhile, incorporates sonic vibration technology, which allows it to scrub up to 3,000 times a minute. That’s a lot of scrubbing time, since it stays charged for up to three hours. Just tell the Roborock what you want with Alexa or Google Assistant. Oh, and it’s self-cleaning as well.

Best self-emptying: iRobot Roomba s9+ Robot Vacuum


Why it made the cut: The power of a robot vacuum, with a self-emptying feature to eliminate every step of this household task.

Specs:

  • Surfaces: Carpet
  • Bin size: Reported 60 days of dirt
  • Run time: Maximum 120 minutes

Pros:

  • Self-emptying design
  • Three stage cleaning for more thorough vacuum
  • Smart mapping

Cons:

  • Some software issues in-app

The Roomba s9+ is iRobot’s most powerful vacuum to date and, boy, does it pack a punch. The iRobot company was founded in 1990 by three MIT roboticists—Colin Angle, Helen Greiner, and Rodney Brooks—with the vision of making practical robots a reality. Their first robot vacuum was released in 2002, and they have been consistently adding to and improving this design ever since. This vacuum empties itself at its docking station after each clean; the station’s dirt disposal container can hold up to 60 days of dust and debris. That means you can vacuum every day for almost two months without being bothered by multiple trips to the trash can.

Best with mapping technology: Neato Robotics Botvac D8 Connected


Why it made the cut: Easily designate which areas you want to be cleaned with the virtual No-Go lines and high-tech features on this Neato Robotics pick.

Specs:

  • Surfaces: Hard floor
  • Bin size: 0.7 L
  • Run time: 100 minutes

Pros:

  • Gets into hard-to-reach areas
  • HEPA filter
  • Automatic learning

Cons:

  • Louder than other options

The Botvac D8 from Neato Robotics is a great go-to vacuum that can map and store the memory of up to three floors in your home for a methodical, planned clean, as well as zone-clean specific messes or spills when you tell it to. You can easily draw no-go lines on your phone’s touchscreen using the Neato app, which the vacuum will automatically learn and follow. It comes equipped with a HEPA filter to capture dust mites and allergens, battery life of up to 100 minutes, a large 0.7-liter dustbin, and a flat-edge design for quick and easy corner cleaning. Additionally, the brush on the D8 is 70 percent larger than those on other leading brands, so this vacuum is especially great for picking up pet hair.

Best for marathon cleaning sessions: Ecovacs Deebot Ozmo T5


Why it made the cut: This mop-meets-vacuum has a long battery life and high-tech features to make your clean as seamless as possible.

Specs:

  • Surfaces: Hard floor, carpet
  • Bin size: 430 mL
  • Run time: Over three hours

Pros:

  • Long battery life
  • Mopping included
  • Laser-mapping technology for a complete clean

Cons:

  • Mop could use more water

Ecovacs was established as a company in 1998 with the official Ecovacs Robotics brand created in 2006. They specialize in spatially aware, mobile robots that clean your home, and the Deebot Ozmo is no exception. The Deebot Ozmo T5 from Ecovacs can run for over three hours, cleaning up to 3,200 square feet in a single session. Along with the impressive battery life, this vacuum is equipped with Smart Navi 3.0 laser-mapping technology to keep track of your home and prevent any missed areas, a high-efficiency filter, and three levels of suction power. It connects to your smartphone for a customized clean, and, did we mention? It’s also a mop. Yep, this vacuum can also simultaneously mop your floors, recognizing and avoiding carpeted areas as it cleans.

Best mop-only robot: Samsung Jetbot Mop

Why it made the cut: When it comes to automated mopping, this Samsung pick is designed with squeaky-clean floors in mind.

Specs:

  • Surfaces: Tile, vinyl, laminate, and hardwood
  • Run time: 100 minutes

Pros:

  • Multiple cleaning pads
  • Eight cleaning modes
  • Dual pads remove grime

Cons:

  • No mapping

Whether you’re cleaning your bathroom floors, hardwood in the living room, or laminate in the kitchen, the dual spinning pads on the Samsung Jetbot (you can choose machine-washable microfiber or Mother Yarn pads) scrub away grime and dirt without the effort of mopping. The eight cleaning modes (selectable via remote) include hand mode, focus mode, and random mode, among others, allowing you to personalize your clean depending on the room and mess level. A 100-minute battery gives the double water tanks enough time to offer edge-to-edge coverage.

What to consider when shopping for the best robot vacuums

There are five major things you should take into consideration when purchasing a robot vacuum. The best robot vacuums have a long-lasting battery and a large bin capacity so they can work away in your home without needing to be dumped out or recharged before the job is over. You might want to find one that can easily connect to your smartphone for customized or remote control. And if you’re really looking to elevate your floors, consider a robot vacuum with a mopping function to make your surfaces shine. Finally, look for other advanced features like mapping capabilities or smart-timers. We know that’s a lot of information to keep in mind while you shop, so we’ve created a thorough guide to help you better understand these features, as well as some product suggestions to get you started.

How much cleaning time do you want?

A robot vacuum is only as good as its battery life. Fortunately, many robot vacuums have batteries that last at least one hour. If you have a larger living space you might want to look for something that can last between 90 to 120 minutes to make sure the robot can get to every nook and cranny before needing to recharge. Keep in mind, some vacuums have different power settings, like high intensity or turbo that might drain its battery more quickly. Think about how you want to use your vacuum, what your regular time frames for cleaning will look like, and whether or not you need more surface coverage or suction power.

Most robot vacuums will either alert you when the battery is low or they will dock themselves at their charger. They may also do this automatically after every clean, which means you’ll never have to bother with locating a charging cable or deal with the consequences of forgetting to plug it in. A truly smart robot vacuum will take care of itself after taking care of your floors.

Do you want to control the robot vacuum with your phone?

The best robot vacuums pair with your smartphone so you can create customized settings and control your clean remotely. When we say these things can get fancy, we mean fancy. A device-compatible robot vacuum might be able to pair with Alexa or Google Assistant, follow invisible boundary lines you create to keep it away from loose rugs or lots of cables, generate statistics based on a recent clean, tell you how much battery life is left, and virtually map your living space. Being able to control a robot vacuum from your phone means going to brunch with friends, running to the grocery store, picking up your kids from school, and coming home to a clean house. Some models even allow you to set a predetermined schedule for cleaning so you won’t even have to pull out your phone to get it going. Keep in mind, it might be a good idea to be home for your robot’s first clean so you can identify any tough spots or issues your little machine might face.

Before purchasing make sure you check each vacuum’s compatibility, especially if you are using an Android or you are looking to connect to a specific virtual assistant. Many of the vacuums are going to work great with any smart device, but we would hate for you to get ready to connect only to end up disappointed.

Do you want it to take out the trash for you?

Not all robot vacuums can collect the same amount of debris and detritus before needing to be emptied out. Think about how frequently you’re hoping to vacuum your home and how much dust, dirt, and pet dander might accumulate in the meantime. If you have a smaller living area, keep things relatively tidy, dust and sweep often, or vacuum regularly, you might be able to survive on a smaller bin. However, if you know you need something more heavy-duty, don’t skimp on bin storage. The average dustbin size is 600 milliliters; some can go up to 700 or 750. These dustbins are easy to remove and don’t require extra work, such as bag or filter replacement. If you have a cat or dog (or a very hairy human) running around the house, consider a vacuum that specifically boasts its ability to pick up hair and dander.

One of the best features a robot vacuum can have is a self-evacuating bin. Instead of emptying a bin after every one or two cleaning sessions, your vacuum will automatically deposit all of its collected dust bunnies, forgotten LEGO pieces, food crumbs, and other artifacts to a larger bin at its docking station. Many of these stations come with allergen filters and other sensors to keep its contents completely sealed. It will let you know when it needs to be emptied so you don’t have to worry about spillage or clogging. Now that’s some seriously futuristic cleaning.

Do you want a mop, too?

We are pleased to inform you that the best robot vacuums can also mop, so not only will you have all the dirt and debris sucked away but you’ll also have sparkling clean floors free of stains and spills. These vacuum-mop hybrids have two compartments: one for collecting the bits and pieces that are suctioned up and another to hold water that will go over hardwood or tile flooring. These hybrids typically come with a sensor that informs the robot where carpeted areas can be found, which the vacuum will avoid when it’s time to mop. That’s one more chore your smart vacuum can take care of and one more episode of TV you get to watch instead!

If the vacuum you are looking at doesn’t have its own mopping function, or maybe a hybrid isn’t in your price range, look for models that can pair with a separate robot mop. Many brands create individual vacuums and mops that communicate with one another via smartphone or internal programming to schedule cleanings one right after the other. They can often be stored next to one another and have similar special features and battery life—so you can count on this dynamic duo to get the job done.

Does it know your home?

We touched on special features a little bit when we outlined smartphone compatibility, but we want to dive in further and really explain the kinds of advanced features you might want to prioritize when considering which robot vacuum is right for you. The first thing to look for is a vacuum with obstacle identification features, so your vacuum can identify small barriers like power strips, cables, pet toys, or shoes. There’s nothing worse than coming home and finding your vacuum trapped in an endless battle between your internet router and your kid’s favorite stuffed animal, right?

You can also look for specific mapping capabilities that determine whether or not your robot cleans randomly or methodically. A random robot using a “bounce” cleaning method might have object identification sensors—but it won’t necessarily keep track of where in your house it has been, and will go over areas more than once for a thorough clean. A methodical vacuum has sensors that track where it’s been and what areas of the house it’s covered. This is often faster, but not always the most thorough. However, these methodical cleaners collect data over time to retain a virtual map of your home for a more efficient clean. Just make sure you keep the lights on during a vacuuming session, because these sensors need to quite literally “see” in order to collect information and avoid bumping into things. Once this data has been collected, you might also be able to set up boundaries or no-clean zones from your phone. This tells the robot where to avoid, like a play area or delicate carpet.
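To see why the methodical approach is usually faster, it helps to compare the two strategies on a toy grid: a random “bounce” cleaner keeps re-crossing cells it has already covered, while a mapped cleaner plows back-and-forth rows and visits each cell once. A rough simulation, with the room size and step counts made up for illustration:

```python
import random

def bounce_coverage(w, h, steps):
    """Random-bounce cleaner: drive straight, pick a random new heading
    whenever the next move would hit a wall."""
    x, y, (dx, dy) = 0, 0, (1, 0)
    seen = {(0, 0)}
    for _ in range(steps):
        if not (0 <= x + dx < w and 0 <= y + dy < h):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            continue
        x, y = x + dx, y + dy
        seen.add((x, y))
    return len(seen) / (w * h)

def methodical_coverage(w, h, steps):
    """Mapped cleaner plowing row-by-row: every step covers a new cell."""
    return min(steps, w * h) / (w * h)

random.seed(1)
w = h = 20   # a 400-cell "room"
for steps in (400, 800, 1600):
    print(f"{steps:>4} steps: bounce {bounce_coverage(w, h, steps):.0%}, "
          f"methodical {methodical_coverage(w, h, steps):.0%}")
```

The mapped cleaner hits 100 percent in exactly as many steps as there are cells, while the bounce cleaner needs several times that many, which is the trade the text describes: methodical is faster per pass, random is thorough through sheer repetition.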

You can also look for a vacuum with a camera so you can see where it is or simply check in on your home. There are almost endless advanced features you can choose to prioritize depending on your needs.

FAQs

Q: Do cheap robot vacuums work?

Affordable robot vacuums can still achieve the clean of your dreams but might sacrifice some added features, like self-emptying, smart-home connectivity, or mopping capabilities. That said, even a cheap robot vacuum will still drastically cut down the time you spend on chores—in our book, that’s a win.

Q: Is it worth getting a robot vacuum?

You can spend less time cleaning when you have a robot vacuum in your arsenal. While models are still evolving with better technology, those with families, pets, or simply limited spare time can benefit from investing in a robot vacuum. Regular vacuums—like this one from Dyson—can be quite pricey as well, so why not spend a bit more and relegate the chore to hands-free software?

Q: Can robot vacuums go from hardwood to carpet?

In short, it depends. While some models can auto-detect the transition from carpet to hardwood floors, others will need you to map out different zones. These maps can help your robot vacuum determine what modes it needs to be on for each area to ensure an overall deep clean.

The final word on shopping for the best robot vacuums

An amazing, hands-free clean should now be well within reach with a robot vacuum. There are so many options out there and we hope you now know what to look for when venturing out to get your new robotized housekeeper. Keep in mind that the best robot vacuums are worth investing in for an efficient, smart, and clean home with the push of a button.

Why trust us

Popular Science started writing about technology more than 150 years ago. There was no such thing as “gadget writing” when we published our first issue in 1872, but if there was, our mission to demystify the world of innovation for everyday readers means we would have been all over it. Here in the present, PopSci is fully committed to helping readers navigate the increasingly intimidating array of devices on the market right now.

Our writers and editors have combined decades of experience covering and reviewing consumer electronics. We each have our own obsessive specialties—from high-end audio to video games to cameras and beyond—but when we’re reviewing devices outside of our immediate wheelhouses, we do our best to seek out trustworthy voices and opinions to help guide people to the very best recommendations. We know we don’t know everything, but we’re excited to live through the analysis paralysis that internet shopping can spur so readers don’t have to.

A look at the weird intersection of taxidermy and car design https://www.popsci.com/technology/hyundai-risd-car-design-nature/ Mon, 10 Jul 2023 20:39:58 +0000 https://www.popsci.com/?p=554932
a model of a Kia EV9
A 3D-printed model of a Kia EV9. Kia America

An automaker and a design school have been collaborating on nature-based auto ideas. Here's what's been growing out of the partnership.


In general, we see cars as artificial and inanimate machines made from welded steel and plastic. But what if vehicles could be designed with the evolution of microorganisms in mind, representing a collaboration with nature? Kia, Hyundai, and Genesis are investigating that worldview with a group of young artists and scientists at the renowned Rhode Island School of Design. 

Hyundai Motor Group (HMG), the parent company of all three brands, kicked off the RISD x Hyundai Motor Group Research Collaborative in 2019. Now in its fourth year, the unique partnership is focused on actively exploring the relationship between nature, art, and design for the good of humankind. Using phrases like “biologized skin” for robots and “chemotaxis processes” to describe movement, the team of students, professors, and HMG engineers and designers is challenging traditional ideas about how machines can work.

Here’s what to know about projects that RISD students have created with the future of Hyundai Motor Group in mind. 

Using slime mold to mimic autonomous vehicles

While test-driving a brand-new 2024 Kia Seltos in and around Providence, Rhode Island, with a group of journalists, we made a stop at RISD to hear from students in the program. In an initiative called Future Spaces and Autonomous Vehicles, students examined the future of autonomous vehicles using scientific methodologies combined with design-focused thinking.

The first presenter, 2023 graduate Manini Banerjee, studied at Brown and Harvard before making her way to RISD, and she challenged us to think about how a car might work if it were driven by organisms instead of algorithms. 

In their research, Banerjee and her lab partner, Mehek Vohra, discovered that each autonomous vehicle processes 40 terabytes of data per hour; that’s the equivalent of typical use of an iPhone for 3,000 years, Banerjee says. The problem, she asserts, is that data processing and data storage rely heavily on carbon-emitting data centers, which only accelerates global warming. Vohra and Banerjee set out to find out whether there is an opportunity for organic, sustainable, data-free navigation.

[Related: Inside the lab that’s growing mushroom computers]

Using a slime mold organism as a vehicle, the team observed how the mold grows, learns, and adapts. In a cardboard maze, the slime mold organism mimicked the movements of autonomous vehicles. During the study, they noticed the slime mold learned how to find the maze’s center by sensing chemicals and light in its environment. Is it possible to replace carbon-heavy data processes with a nature-based solution? Yes, Banerjee says. (According to Texas A&M, slime molds exist in nature as a “blob,” similar to an amoeba, engulfing their food, which is mostly bacteria. And in related work, research out of the University of Chicago involved using slime mold in a smartwatch in 2022.)

“Civilization has been measured by this distance between the natural and the built environment,” she told the group. “I feel that we’ve begun to build that space with technological advancements.”

“Turn away from blindly pursuing innovation”

Today, designers and engineers look to the outside world to better understand physiology, patterns in nature, and beauty. The future of nature and cars as collaborators is front and center for the RISD and HMG partnership. 

There are about 100,000 taxidermied specimens in RISD’s Nature Lab collection; it’s on par with a world-class natural history museum and has been around since 1939. Students can check out a specimen from the lab like one might check out a library book for study. For instance, studying the wings of the kingfisher may spur an idea for not just colors but patterns, textures, and utility. Observing the bone structure of a pelican for strength points or the ways an insect’s wing repels water can advance the way vehicles are made, too. 

The RISD team is also exploring how to embrace entropy, or the degree of disorder or uncertainty in a system, versus strict mechanical processes. Sustainability is also an important element in this research, meaning that researchers should understand how materials break down instead of contributing to waste and climate change. Together, those two concepts inform the idea that engineering and technology can be programmed with built-in degradation (an expiration date, if you will) at the rate of human innovation.

“The intent is to turn away from blindly pursuing innovation and toward creating living machines that may restore our relationship with nature,” Banerjee said during a TEDx presentation earlier this year. “If we understand the organisms we’re working with, we won’t have to hurt, edit, or decapitate them. We can move from ‘nature inspired’ to ‘nature collaborated.’”

The post A look at the weird intersection of taxidermy and car design appeared first on Popular Science.

Bee brains could teach robots to make split-second decisions https://www.popsci.com/science/bee-brain-decision-making-robot/ Mon, 10 Jul 2023 16:45:00 +0000 https://www.popsci.com/?p=554670
A honey bee pollinates a yellow flower against a bright blue sky.
Bee brains have evolved over millions of years to become incredibly efficient. Deposit Photos

The power pollinators can make multiple quick decisions with a brain smaller than a sesame seed.

The phrase “busy as a bee” certainly applies to the brains of honey bees. The insects have to balance effort, risk, and reward, avoid predators, and accurately assess which flowers are most likely to offer food for their hive, all while flying. Speed and efficiency are thus critical to their survival, and scientists are looking at bee brains to understand how they manage it. A study published June 27 in the journal eLife explores how millions of years of evolution engineered honey bee brains to make these lightning-fast decisions and reduce their risks. 

[Related: What busy bees’ brains can teach us about human evolution.]

“Decision-making is at the core of cognition. It’s the result of an evaluation of possible outcomes, and animal lives are full of decisions,” Andrew Barron, a study co-author and comparative neurobiologist at Australia’s Macquarie University, said in a statement. “A honey bee has a brain smaller than a sesame seed. And yet she can make decisions faster and more accurately than we can. A robot programmed to do a bee’s job would need the backup of a supercomputer.”

Barron notes that today’s autonomous robots primarily work with the support of remote computing, and that drones have to be in wireless communication with some sort of data center. Studying how bees’ brains work could help engineers design better robots that explore more autonomously. 

In the study, the team trained 20 bees to recognize five different colored “flower disks.” The blue flowers always had a sugar syrup, while the green flowers always had tonic water that tasted bitter to the bees. The other colors sometimes had glucose. Then, the team introduced each bee to a makeshift garden where the flowers only had distilled water. Each bee was filmed and the team watched over 40 hours of footage, tracking the path the insects took and timing how long it took for them to make a decision. 

“If the bees were confident that a flower would have food, then they quickly decided to land on it, taking an average of 0.6 seconds,” HaDi MaBouDi, co-author and computational neuroethologist from the University of Sheffield in England, said in a statement. “If they were confident that a flower would not have food, they made a decision just as quickly.”

If the bees were unsure, they took significantly more time (1.4 seconds on average), and the time reflected the probability that a flower contained some food.

Next, the team built a computer model that aimed to replicate the bees’ decision-making process. They noticed that the model’s structure looked similar to the physical layout of a bee’s brain, and they found that bees’ brains can carry out complex, autonomous decision-making with minimal neural circuitry. 
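
That fast-when-sure, slow-when-unsure pattern is the hallmark of sequential-sampling (drift-diffusion) models of decision-making. Here is a minimal sketch of one; the model choice and every parameter are illustrative assumptions, since the article doesn't reproduce the team's actual equations:

```python
import random

def decide(evidence_rate, threshold=1.0, dt=0.01, noise=1.0):
    """Accumulate noisy evidence until a confidence bound is crossed.

    Strong evidence (large |rate|) crosses a bound quickly; weak evidence
    wanders and takes longer, mirroring the bees' 0.6 s vs. 1.4 s split.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += evidence_rate * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("land" if x > 0 else "reject"), t

random.seed(1)
for rate, label in [(2.0, "confident: food"),
                    (-2.0, "confident: no food"),
                    (0.2, "unsure")]:
    choice, t = decide(rate)
    print(f"{label:>20}: {choice} after {t:.2f} s")
```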

[Related: A robot inspired by centipedes has no trouble finding its footing.]

“Now we know how bees make such smart decisions, we are studying how they are so fast at gathering and sampling information. We think bees are using their flight movements to enhance their visual system to make them better at detecting the best flowers,” James Marshall, a study co-author and theoretical and computational biologist at the University of Sheffield, said in a statement.

Marshall also co-founded Opteran, a company that reverse-engineers insect brain algorithms to enable machines to move autonomously. He believes that nature will inspire the future of the AI industry, as millions of years of evolution have produced incredibly efficient insect brains that require minimal power. 

The post Bee brains could teach robots to make split-second decisions appeared first on Popular Science.

This magnetic robot could worm its way into human blood vessels https://www.popsci.com/technology/magnet-soft-worm-robot/ Mon, 10 Jul 2023 14:30:00 +0000 https://www.popsci.com/?p=554646
Screenshot of inching magnetic soft robot
Magnetizing strategic portions of this soft robot allows it to move in three dimensions. MIT / Anikeeva et al

Just one magnetic field can create 'a movement-driving profile of magnetic forces.'

Researchers at MIT have created a tiny, cucumber-inspired soft robot capable of scooting around otherwise hard-to-reach, three-dimensional environments using a single, weak magnetic field. As first detailed last month in an open access paper published with Advanced Materials, an inchworm-like mechanism made from strategically magnetized rubber polymer spirals shows immense promise in maneuvering through spaces as tiny as human blood vessels.

[Related: Seals provided inspiration for a new waddling robot.]

Before this newest wormbot, locomotive soft bots required moving magnetic fields to control their direction and angle. “[I]f you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet,” Polina Anikeeva, the paper’s lead author and a professor of materials science and engineering and brain and cognitive sciences, said in a statement. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution,” Anikeeva added. “You want to be able to have a stationary instrument that just applies [a] magnetic field to the whole sample.”

As such, the MIT research team’s new design isn’t uniformly magnetized like many other soft robots. By only magnetizing select areas and directions, just one magnetic field can create “a movement-driving profile of magnetic forces,” according to MIT’s announcement.

Interestingly, engineers turned to cucumber vines’ coiled tendrils for inspiration: Two types of rubber are first layered atop one another before being heated and stretched into a thin fiber. As the new thread cools, one rubber contracts while the other retains its form to create a tightly wound spiral, much like a cucumber plant’s thin vines wrapping around nearby structures. Finally, a magnetizable material is threaded through the polymer spiral, then strategically magnetized to allow for a host of movement and directional options.
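
The physics behind a “movement-driving profile of magnetic forces” is compact: a magnetic moment m in a field B feels a torque m × B, so segments magnetized in different directions bend by different amounts under one stationary field. A minimal sketch, with arbitrary assumed magnitudes:

```python
import numpy as np

# One uniform field along z (tesla); strength is an arbitrary assumption.
B = np.array([0.0, 0.0, 0.01])

# Magnetic moment direction varies along the body (a simple rotating
# profile standing in for the robot's strategic magnetization pattern).
angles = np.linspace(0, np.pi, 5)
moments = np.array([[np.cos(a), np.sin(a), 0.0] for a in angles]) * 1e-3

for i, m in enumerate(moments):
    torque = np.cross(m, B)  # torque = m x B
    print(f"segment {i}: torque = {torque} N*m")
# Each segment twists by a different amount, producing a coordinated bend;
# varying B over time turns that bend into locomotion.
```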

Because of each robot’s customizable magnetic patterns, multiple soft bots can be individually mapped to move in different directions when exposed to the same single, uniform, weak magnetic field. Additionally, subtle field manipulation causes the robots to vibrate, letting the tiny worms carry cargo to a designated location and then shake it loose to deliver a payload. Because of their soft materials and relatively simple manipulation, researchers believe such mechanisms could be used in biomedical situations, such as inching through human blood vessels to deliver a drug at a precise location.

The post This magnetic robot could worm its way into human blood vessels appeared first on Popular Science.

NASA’s quirky new lunar rover will be the first to cruise the moon’s south pole https://www.popsci.com/science/nasa-viper-moon-rover-test/ Sun, 09 Jul 2023 17:00:00 +0000 https://www.popsci.com/?p=554322
VIPER moon rover coming down a ramp during a test at the NASA Ames Research Center
Antoine Tardy, VIPER rover egress driver, adjusts the cables that power and send commands to the VIPER test unit as engineers practice its exit/descent from the model Griffin lunar lander at NASA's Ames Research Center in California's Silicon Valley. NASA/Dominic Hart

Four wheels are better than six for off-roading in craters.

It’s no simple feat to send a rover to space, land it on a celestial body, and get the wheels rolling. NASA has used all kinds of techniques: The Pathfinder rover landed on Mars in 1997 inside a cluster of airbags, then rolled down its landing vehicle’s “petals,” which bloomed open like a flower, to the dusty surface. Cables attached to a rocket-powered “sky crane” spacecraft dropped the Perseverance Mars rover to the Red Planet’s surface in 2021. On the moon, Apollo 15, 16, and 17 astronauts pulled mylar cables to unfold and lower their buggies from the vehicles’ compact stowage compartments on lunar landers. 

But NASA’s first-ever rover mission to the lunar south pole will use a more familiar method of getting moving on Earth’s satellite: a pair of ramps. VIPER, which stands for Volatiles Investigating Polar Exploration Rover, will roll down an offramp to touch the lunar soil, or regolith, when it lands on the moon in late 2024. 

This is familiar technology in an unforgiving location. “We all know how to work with ramps, and we just need to optimize it for the environment we’re going to be in,” says NASA’s VIPER program manager Daniel Andrews.

A VIPER test vehicle recently descended a pair of metal ramps at NASA’s Ames Research Center in California, as seen in the agency’s recently published photos, with one beam for each set of the rover’s wheels. Because the terrain where VIPER will land—the edge of the massive Nobile Crater—is expected to be rough, the engineering team has been testing VIPER’s ability to descend the ramps at extreme angles. They have altered the steepness, as measured from the lander VIPER will descend from, and the difference in elevation between the ramp for each wheel. 
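
The geometry being tested is simple to sketch: ramp steepness sets the pitch the rover experiences, while a left-right elevation mismatch sets its roll. A quick calculation under made-up test dimensions (NASA's actual lander and ramp measurements aren't given in this article):

```python
import math

ramp_length_m = 4.0     # assumed ramp length (the hypotenuse)
deck_height_m = 1.0     # assumed lander deck height
track_width_m = 1.2     # assumed distance between left and right wheels
height_offset_m = 0.15  # assumed left/right elevation difference at ground

pitch = math.degrees(math.asin(deck_height_m / ramp_length_m))
roll = math.degrees(math.atan2(height_offset_m, track_width_m))

print(f"pitch along ramp: {pitch:.1f} degrees")
print(f"roll from offset: {roll:.1f} degrees")
```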

“We have two ramps, not just for the left and right wheels, but a ramp set that goes out the back too,” Andrews says. “So we actually get our pick of the litter, which one looks most safe and best to navigate as we’re at that moment where we have to roll off the lander.” 

[Related: The next generation of lunar rovers might move like flying saucers]

VIPER is a scientific successor to NASA’s Lunar Crater Observation and Sensing Satellite, or LCROSS mission, which in 2009 confirmed the presence of water ice on the lunar south pole. 

“It completely rewrote the books on the moon with respect to water,” says Andrews, who also worked on the LCROSS mission. “That really started the moon rush, commercially, and by state actors like NASA and other space agencies.”

The ice, if abundant, could be mined to create rocket propellant. It could also provide water for other purposes at long-term lunar habitats, which NASA plans to construct in the late 2020s as part of the Artemis moon program. 

But LCROSS only confirmed that ice was present in a single crater at the moon’s south pole. VIPER, a mobile rover, will probe the distribution of water ice in greater detail. Drilling beneath the lunar surface is one task. Another is to move into steep, permanently shadowed regions—entering craters that, due to their sharp geometry and the low angle of the sun at the lunar poles, have not seen sunlight in billions of years. 

The tests demonstrate the rover can navigate a 15-degree slope with ease—enough to explore these hidden dark spots, avoiding the need to make a machine designed for trickier descents. “We think there’s plenty of scientifically relevant opportunities, without having to make a superheroic rover that can do crazy things,” Andrews says.

Developed by NASA Ames and Pittsburgh-based company Astrobotic, VIPER is a square golf-cart-sized vehicle about 5 feet long and wide, and about 8 feet high. Unlike all of NASA’s Mars rovers, VIPER has four wheels, not six. 

“A problem with six wheels is it creates kind of the equivalent of a track, and so you’re forced to drive in a certain way,” Andrews says. VIPER’s four wheels are entirely independent from each other. Not only can they roll in any direction, they can be turned out, using the rover’s shoulder-like joints to crawl out of the soft regolith of the kind scientists believe exists in permanently shadowed moon craters. The wheels themselves are very similar to those on the Mars rovers, but with more paddle-like treads, known as grousers, to carry the robot through fluffy regolith.

“The metaphor I like to use is we have the ability to dip a toe into the [permanently shadowed region],” Andrews says. “If we find we’re surprised or don’t like what we’re finding, we have the ability to lift that toe out, roll away on three wheels, and then put it back down.”

But VIPER won’t travel very far at all if it can’t get down the ramp from its lander, which is why Andrews and his team have been spending a lot of time testing that procedure. At first, the wheels would skid, just momentarily, as the VIPER test vehicle moved down the ramps. 

“We also found we could drive up and over the walls of the rampway,” Andrews says. “That’s probably not desirable.”

[Related on PopSci+: How Russia’s war in Ukraine almost derailed Europe’s Mars rover]

Together with Astrobotic, Andrews and his team have altered the ramps, and they now include specialized etchings down their lengths. The rover can detect this pattern along the rampway, using cameras in its wheel wells. “By just looking down there,” the robot knows where it is, he says. “That’s a new touch.”

Andrews is sure VIPER will be ready for deployment in 2024, however many tweaks are necessary. After all, this method is less complicated than a sky crane, he notes: “Ramps are pretty tried and true.”

The post NASA’s quirky new lunar rover will be the first to cruise the moon’s south pole appeared first on Popular Science.

This robotic leg could give machines krill-like swimming abilities https://www.popsci.com/technology/krill-inspired-robot-leg/ Fri, 30 Jun 2023 20:00:00 +0000 https://www.popsci.com/?p=552598
Robot leg inspired by krill
A new robotic leg was inspired by krill's metachronal swimming. Wilhelmus Lab

It's called the Pleobot, and it was inspired by krill—tiny ocean creatures that are adept swimmers.

Robotics engineers are one step closer to building a swarm of krill bots capable of underwater exploration, as well as perhaps one day aiding in future search and rescue missions. According to a study published earlier this month in Scientific Reports, a team at Brown University working alongside researchers at the Universidad Nacional Autónoma de México recently designed and built a robotic platform dubbed the Pleobot. It’s a “unique krill-inspired robotic swimming appendage” that researchers say is the first mechanism allowing for a comprehensive study of what’s known as metachronal propulsion.

While the whole assembly of the robot is about nine inches long, it’s modeled on krill: tiny, paperclip-sized crustaceans. Despite their small size, krill regularly travel comparatively massive distances—vertically migrating over 3,200 feet twice a day. One key to these daily journeys is their metachronal swimming—a form of movement often found in multi-legged aquatic creatures, including shrimp and copepods, in which limbs undulate in wavelike patterns to propel the animals through their watery abodes.

[Related: When krill host social gatherings, other ocean animals thrive.]

For years, studying the intricacies of metachronal propulsion has been limited to the observation of live organisms. In a statement published on Monday, however, paper lead author Sara Oliveira Santos explained the Pleobot allows for “unparalleled resolution and control” to examine krill-like swimming, including studying how metachronal propulsion allows the creatures to “excel at maneuvering underwater.”

The Pleobot is constructed mainly from 3D printed parts assembled into three articulated sections. Researchers can actively control the first two portions of the krill-like leg, while the biramous (two-branched) end fins move passively against the water.
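
Metachronal motion itself is straightforward to generate: every leg runs the same stroke cycle, offset by a constant phase lag, so the strokes sweep along the body as a wave. Below is a minimal sketch of such a gait generator; the frequency, amplitude, and lag are illustrative assumptions rather than the Pleobot's actual control parameters:

```python
import numpy as np

n_legs = 5
freq_hz = 2.0
phase_lag = 2 * np.pi / n_legs  # constant lag between neighboring legs
amp_deg = 30.0

def joint_angles(t):
    """Commanded stroke angle (degrees) for each leg at time t (seconds)."""
    phases = 2 * np.pi * freq_hz * t - phase_lag * np.arange(n_legs)
    return amp_deg * np.sin(phases)

# The peak stroke travels leg to leg over time: a metachronal wave.
for t in np.linspace(0, 0.5, 6):
    angles = " ".join(f"{a:6.1f}" for a in joint_angles(t))
    print(f"t={t:.1f}s  angles={angles}")
```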

“We have snapshots of the mechanisms they use to swim efficiently, but we do not have comprehensive data,” said postdoctoral associate Nils Tack. “We built and programmed a robot that precisely emulates the essential movements of the legs to produce specific motions and change the shape of the appendages. This allows us to study different configurations to take measurements and make comparisons that are otherwise unobtainable with live animals.”

[Related: In constant darkness, Arctic krill migrate by twilight and the Northern Lights.]

According to the study, observing the Pleobot even allowed researchers to determine a previously unknown factor of krill movement—how they generate lift while swimming forward. As the team explained, krill must constantly swim to avoid sinking, which means at least some lift must be produced while moving horizontally within water. As Yunxing Su, another postdoctoral associate involved in the project, explained, “We identified an important effect of a low-pressure region at the back side of the swimming legs that contributes to the lift force enhancement during the power stroke of the moving legs.”

Moving forward, the team hopes to further expand its understanding of agile krill-like swimming and apply it to future Pleobot iterations. Once the design—which the team has made open-source online—is honed, the krill-leg mechanism could eventually find its way onto underwater robots for all sorts of uses, including exploration and even rescue missions.

Take a look at how it moves, below:

Correction: An earlier version of this article stated that the Pleobot is about a foot in size. This has been updated for accuracy.

The post This robotic leg could give machines krill-like swimming abilities appeared first on Popular Science.

This AI-powered glove could help stroke patients play the piano again https://www.popsci.com/technology/stroke-piano-smart-glove/ Fri, 30 Jun 2023 12:00:00 +0000 https://www.popsci.com/?p=552404
A hand wearing a smart glove playing keyboard next to computer readings of movements
Wearables like this smart glove could help stroke patients recover their ability to play the piano. Credit: Dr Maohua Lin et al

A prototype of the 3D printed glove uses lights and haptics to guide movement.

A customizable smart glove powered by artificial intelligence shows promise as an easy-to-use, wearable tutoring aide for musicians recovering from strokes. According to a study published in Frontiers in Robotics and AI, a team at Florida Atlantic University has developed a lightweight “smart hand exoskeleton” prototype using 3D-printed materials and machine learning. This new smart glove could soon help patients relearn how to play the piano “by ‘feeling’ the difference between correct and incorrect versions of the same song.”

[Related: A tiny patch can take images of muscles and cells underneath your skin.]

In the aftermath of a debilitating stroke, many patients require extensive therapy regimens to relearn certain motor movements and functionalities affected by neurotraumas. Sometimes, this loss of control unfortunately can extend to the patient’s ability to play instruments. And while therapeutic technology exists for other movement recovery, very few options are available to someone such as a pianist hoping to return to music.

Researchers’ new smart glove aims to remedy this issue by imbuing a 3D-printed wearable with soft pneumatic actuators housed in the fingertips. The researchers have equipped each fingertip with 16 tactile sensors, aka “taxels,” to monitor the wearer’s keystrokes and hand movements. The team also used machine learning to train the glove to differentiate the “feel” of correct versus incorrect renditions of “Mary Had a Little Lamb.” Putting it all together, a user could play the song themselves while receiving real-time feedback in the form of visual indicators, sound, or even touch-sensitive haptic responses. 
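
Conceptually, the pipeline is: flatten a short window of taxel readings into a feature vector, then score it against a trained model of the correct performance. The sketch below uses a tiny logistic-regression stand-in trained on synthetic data; the window size and model architecture are assumptions, since the article doesn't describe the study's actual classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n_taxels = 5 * 16  # 16 tactile sensors on each of five fingertips
window = 10        # assumed timesteps per classification window

# Fake training data: class 1 = "correct" keystrokes, class 0 = "incorrect".
X = rng.normal(size=(200, window * n_taxels))
y = (X[:, :40].mean(axis=1) > 0).astype(float)  # arbitrary synthetic rule

# One-layer logistic regression trained by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Score a fresh window of taxel readings.
sample = rng.normal(size=window * n_taxels)
score = 1 / (1 + np.exp(-sample @ w))
print(f"probability the keystroke window is 'correct': {score:.2f}")
```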

[Related: These wearable cyborg arms were modeled after Japanese horror fiction and puppets.]

“The glove is designed to assist and enhance their natural hand movements, allowing them to control the flexion and extension of their fingers,” Erik Engeberg, the paper’s senior author and a professor in FAU’s department of ocean and mechanical engineering, said in a statement on Thursday. “The glove supplies hand guidance, providing support and amplifying dexterity.”

Although only one smart glove currently exists, the research team hopes to eventually design a second one to create a full pair. Such devices could even one day be programmed to help with other forms of object manipulation and movement therapy. First, however, the wearable’s tactile sensing, accuracy, and reliability still need improvements, alongside advancing machine learning to better understand human inputs in real time.

The post This AI-powered glove could help stroke patients play the piano again appeared first on Popular Science.

This weird robot uses living bugs as gripping tools https://www.popsci.com/technology/pill-bug-robot/ Wed, 28 Jun 2023 15:00:00 +0000 https://www.popsci.com/?p=551795
Robotic gripper holding pillbug that is gripping piece of cotton
Pill bugs and mollusks were recently shown to be effective grippers for robots. Tadakuma Mechanisms Group, Tohoku University

A recent intersection between biology and robotics is causing some to wonder about the limits and ethics of domestication.

The term “necrobotics” is relatively self-explanatory—using dead source material within robotic designs. Case in point: researchers at Rice University made headlines last year after repurposing a spider’s corpse as part of a “pneumatically actuating gripper tool” capable of grasping asymmetrical objects up to 130 percent of its own mass.

But what if researchers harnessed living creatures as part of robotic devices? That’s the question recently posed by a team spanning multiple Japanese universities. In their paper, “Biological Organisms as End Effectors,” published earlier this month on the arXiv preprint server, researchers from Tohoku, Yamagata, and Keio Universities detailed how they developed a way to literally harness living pill bugs and underwater mollusks known as chitons as a robot’s gripping mechanisms without harming the animals.

[Related: Watch this bird-like robot make a graceful landing on its perch.]

In demonstration videos, a 3D-printed harness is essentially lassoed around the pill bug using either one or two flexible threads. In the single-thread configuration, the pill bug is allowed to roll into its closed, defensive shape; with two threads, the insect is prevented from doing so, thus maintaining its open, walking stance. Attaching a harness to the mollusk required a bit more trial and error, with researchers settling on a removable epoxy glue applied to its external shell. In both experiments, the pill bug and chiton were shown to effectively grasp and maneuver objects, either via the insect’s closing into its defensive stance while grasping an object, or via the mollusk’s suctioning ability.

“This approach departs from traditional methodologies by leveraging the structures and movements of specific body parts without disconnecting them from the organism, all the while preserving the life and integrity of the creature,” reads a portion of the team’s paper. The team also notes that for future research, it will be “crucially important to enforce bioethics rules and regulations, especially when dealing with animals that have higher cognition.”  

But researchers such as Kent State University geographer James Tyner aren’t completely sold. “To a degree, this is simply the domestication of species not yet domesticated,” Tyner explains to PopSci. Tyner co-authored an essay last year lambasting Rice University’s recycled arachnid necrobot as an “omen” of potentially even further “subsumption of life and death to circuits of capital.” When it comes to employing living organisms within robotic systems, Tyner also questions their efficacy and purpose.

“I’m hard pressed to think of a situation where I’d feel comfortable deploying biotechnologies solely or even partially dependent on the gripping power of a pillbug,” Tyner adds.

For Josephine Galipon, a molecular biologist at Yamagata University and one of the project’s team members, such situations are easier to envision. “Let’s imagine a robot stuck at the bottom of the ocean that needs to improvise a gripper function to complete a task,” she offers via email to PopSci. “Instead of building a gripper from the ground up, it could borrow help from a chiton, and as a reward, the chiton would be transported to a new place with possibly more food.”

According to Galipon, establishing such mutually beneficial, cooperative, and dynamic interactions between living organisms and machines could offer advancements in both biology and robotic engineering.

[Related: These wearable cyborg arms were modeled after Japanese horror fiction and puppets.]

“‘Locomotion’ can be used for more than just getting around from one spot to another,” Galipon continues. “Surprisingly, it can also be used for tasks like picking up and moving objects, as illustrated [by the pillbug]. We can also learn more about how these organisms perceive the world around them.” Galipon points to previous instances of domestication, such as horses and messenger pigeons, and views their pillbug and chiton trials in a similar vein. 

Tyner, meanwhile, points to the longstanding history of biomimicry within robotics as a promising alternative to domesticating new animal species. They also raise the question of experts’ expanding concepts of sentience, and what that might entail for even creepy-crawler companions. Recent studies, in fact, offer evidence of a wider array of “feelings” in insects, notably the capacity for injury or discomfort, with fruit flies potentially experiencing a form of chronic pain. For critics like Tyner, however, the question stands with or without such evidence: “Do we extend moral standing, for example, only to sentient beings?”

In this sense, it’s a thought shared by Galipon and their fellow researchers. “[We] recommend caution when handling any type of animal, and to exercise mindfulness in avoiding their suffering as much as possible and to the best of our knowledge,” they write in their paper.

The post This weird robot uses living bugs as gripping tools appeared first on Popular Science.

These wearable cyborg arms were modeled after Japanese horror fiction and puppets https://www.popsci.com/technology/jizai-arms-cyborg/ Tue, 27 Jun 2023 20:00:00 +0000 https://www.popsci.com/?p=551506
Two dancers wearing Jizai Arms wearable robotic appendages
Jizai Arms are wearable, swappable, cybernetic arms designed for human expression. Kazuaki Koyama/Jizai Arms

Robot-assisted ballet never looked so good.

Speculative horror fiction, traditional Japanese puppetry, and cultural concepts of autonomy are inspiring a new project aimed at providing humans with sets of detachable cyborg arms. Jizai Arms are sleek, controllable appendages designed to complement users’ movements, expression, and artistry. The University of Tokyo team led by co-creator Masahiko Inami presented their creation for the first time last month at the 2023 CHI Conference on Human Factors in Computing Systems.

Unlike the headline-grabbing worlds of AI and autonomous robot technologies, however, Inami explained to Reuters on Tuesday that Jizai Arms are “absolutely not a rival to human beings.” Instead, the interchangeable limbs are meant to help users “do as we please… it supports us and can unlock creativity,” in accordance with the Japanese concept of “jizai,” a term that roughly translates to autonomy or freedom. According to the presentation’s abstract, the project is also intended to explore the myriad possibilities of “digital cyborgs in a cyborg society.”

[Related: The EU just took a huge step towards regulating AI.]

To use Jizai Arms, subjects first strap a harness to their torso. From there, arms can be attached to back-mounted sockets; they are currently controlled by the user or a third party via a miniature model of the same technology.

The project is partially inspired by centuries’ old “Jizai Okimono” animal puppetry, as well as Nobel Prize-winning author Yasunari Kawabata’s magical realism short story, “One Arm.” In this 1964 tale, a woman lets a man borrow her detached arm for an evening. “Half a century since its writing,” reads the paper’s introduction, “emerging human-machine integration technologies have begun to allow us to physically experience Kawabata’s world.”

Videos provided by the project showcase dancers performing choreography alongside classical music while wearing the accessory arms. The team’s paper describes other experiences such as varying the number and designs of the cybernetic arms, swapping appendages between multiple users, and interacting with each other’s extra limbs. In the proof-of-concept video, for example, the two ballet dancers ultimately embrace one another using both their human and artificial arms.

[Related: Cyborg cockroaches could one day scurry to your rescue.]

According to Inami, users are already forming bonds with their wearables after experiencing the Jizai Arms. “Taking them off after using them for a while feels a little sad,” they relayed to Reuters. “That’s where they’re a little different [from] other tools.” In a similar vein, researchers plan to look into long-term usage of such devices, and how that could fundamentally change humans’ daily perceptions of themselves and others. 

The post These wearable cyborg arms were modeled after Japanese horror fiction and puppets appeared first on Popular Science.

This robot used a fake raspberry to practice picking fruit https://www.popsci.com/technology/raspberry-picking-robot/ Sat, 24 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=550940
fake raspberry for testing robot pickers
EPFL CREATE

Soon, it will leave the lab for a real world test.

It’s summer, and raspberries are in season. These soft, tartly sweet fruits are delicious but delicate. Most of the time, they have to be harvested by human hands. To help alleviate labor costs and worker shortages, a team at École polytechnique fédérale de Lausanne’s Computational Robot Design & Fabrication Lab (EPFL CREATE) in Switzerland made a robot that knows how to gently support, grasp, and pluck these berries without bruising or squishing them in the process. Their approach is detailed this week in the journal Communications Engineering

Agriculture, like many other fields that have scaled up dramatically over the last few decades, has become increasingly reliant on complex technology, from sensors to robots and more. A growing number of farmers are interested in using robots for time-intensive tasks such as harvesting strawberries, sweet peppers, apples, lettuce, and tomatoes. But many of these machines are still at an early stage, the bottleneck being the inefficient and costly field trials companies typically have to undergo to fine-tune the robots. 

The EPFL team’s solution was to create a fake berry and stem for the robot to learn on. To familiarize robots with picking raspberries, the engineers made a silicone raspberry with an artificial stem that “can ‘tell’ the robot how much pressure is being applied, both while the fruit is still attached to the receptacle and after it’s been released,” according to a press release. The faux raspberry contains sensors that measure compression force and pressure. Two magnets hold the fruit and the stem together. 
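
A sensorized berry makes closed-loop grasping possible: ramp up the commanded grip force until the measured compression sits in a firm-but-safe band. Here is a minimal sketch of that control loop; the force thresholds and the toy sensor model are illustrative assumptions, not the study's values:

```python
MIN_HOLD_N = 0.5   # assumed force needed to hold a berry securely
MAX_SAFE_N = 2.0   # assumed force beyond which a berry bruises

def grasp(read_force, command_force, step_n=0.1):
    """Ramp commanded grip force until the berry reports a secure, safe grip."""
    commanded = 0.0
    while True:
        measured = read_force()
        if measured >= MAX_SAFE_N:
            raise RuntimeError("over-squeezed: back off and retry")
        if measured >= MIN_HOLD_N:
            return commanded  # secure grip achieved
        commanded += step_n
        command_force(commanded)

# Toy stand-in for the silicone berry: measured force lags the command
# because the fruit (and gripper) are compliant.
state = {"cmd": 0.0}
final = grasp(read_force=lambda: 0.8 * state["cmd"],
              command_force=lambda f: state.update(cmd=f))
print(f"settled at a commanded force of {final:.1f} N")
```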

[Related: This lanternfly-egg-hunting robot could mean fewer bugs to squish]

In a small test with real raspberries, the robot was able to harvest 60 percent of the fruits without damaging them. That’s fairly low compared to the 90 percent from human harvesters on average, signaling to the team that there are still kinks to work out. For example, the robot’s range of reach is not great, and it gets confused when the berries are clustered together. 

Making a better fake raspberry could help the robot improve. Moreover, building an extended set that can simulate “environmental conditions such as lighting, temperature, and humidity could further close the Lab2Field reality gap,” the team wrote in the paper.

For now, the next step for the engineers is to modify the controllers and develop a camera system that “will allow robots to not only ‘feel’ raspberries, but also ‘see’ where they’re located and whether they’re ready to be harvested,” Josie Hughes, a professor at EPFL CREATE noted in the press release. 

They plan to put their pre-trained robot in a real field this summer to see how well it performs during the height of the local raspberry season in Switzerland. If the tech works as planned, the team wants to look into expanding its fake fruit repertoire to potentially cover berries, tomatoes, apricots or even grapes. 

Watch the robot and fake raspberry system at work from a trial run last year: 

The post This robot used a fake raspberry to practice picking fruit appeared first on Popular Science.

This pangolin-inspired robot can curl up into a healing ball https://www.popsci.com/technology/pangolin-robot-medicine/ Fri, 23 Jun 2023 15:00:00 +0000 https://www.popsci.com/?p=550767
Hard keratin scales inspired this tiny robot.
Hard keratin scales inspired this tiny robot. Max Planck Institute for Intelligent Systems

Pangolins are the only mammals to sport overlapping scales—a trait that could prove surprisingly useful for internal medicine.

If you don’t know what a pangolin is, then today is your lucky day. Primarily found in tropical regions of Africa and Asia, the tiny, adorable, sadly endangered creature is the only mammal known to be covered completely in overlapping scales composed of durable keratin—the same material that makes up your nails and hair. When needed, the flexible scales’ structure allows a pangolin to curl up into a defensive ball—a novel evolutionary design that recently inspired a team of engineers’ newest invention.

[Related: The Pangolin Finally Made It Onto The List Of The World’s Most Protected Animals.]

As described in a paper published on June 20 in Nature Communications, researchers at the Max Planck Institute for Intelligent Systems in Germany created a robot that mimics a pangolin’s roly-poly resiliency. Instead of doing so for protection, however, the miniature robot uses its scaly design to quickly traverse environments while simultaneously carrying small payloads. With an added ability to heat to over 70 degrees Celsius (roughly 158 degrees Fahrenheit), the team’s barely two-centimeter-long robot shows immense promise for delivering medication within patients, as well as helping in procedures such as mitigating unwanted internal bleeding.

The pangolin-inspired robot features a comparatively simple, two-layer design—a soft polymer layer studded in magnetic particles, and a harder exterior layer of overlapping metal scales. Exposing the robot to a low-frequency magnetic field causes it to roll into a cylindrical shape, and subsequently directing the magnetic field can influence the robot’s movement. While in this rolled shape, the team showed that their pangolin-bot can house deliverables such as medicine, and safely transport them through animal tissues and artificial organs to a desired location for release.

[Related: These 2D machines can shapeshift into moving 3D robots.]

Exposing their robot to a high-frequency magnetic field, however, offers even more avenues for potential medical treatment. In such instances, the pangolin robot’s metals heat up dramatically, providing thermal energy for situations such as treating thrombosis, cauterizing tumor tissues, or even stopping internal bleeding. “Untethered robots that can move freely, even though they are made of hard elements such as metal and can also emit heat, are rare,” reads a statement from the Planck Institute, adding that researchers’ new robot “could one day reach even the narrowest and most sensitive regions in the body in a minimally invasive and gentle way and emit heat as needed.”
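
In control terms, the applied field's frequency acts as a mode switch for the same hardware. A trivial sketch of that dispatch, with an invented cutoff frequency (the paper's actual operating frequencies aren't given in this article):

```python
HIGH_FREQ_CUTOFF_HZ = 5_000  # invented boundary between the two regimes

def actuation_mode(field_freq_hz: float) -> str:
    """Map applied-field frequency to the robot's behavior."""
    if field_freq_hz >= HIGH_FREQ_CUTOFF_HZ:
        return "heating: metal scales warm for cautery or clot treatment"
    return "locomotion: body rolls up and follows the field direction"

for freq in (10, 100_000):
    print(f"{freq:>7} Hz -> {actuation_mode(freq)}")
```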

The post This pangolin-inspired robot can curl up into a healing ball appeared first on Popular Science.

These 2D machines can shapeshift into moving 3D robots https://www.popsci.com/technology/mori3-robot-space/ Tue, 13 Jun 2023 14:00:00 +0000 https://www.popsci.com/?p=548156
Mori3 robots combined to form 3D walking shape
By joining forces, a team of Mori3 robots can form almost any 3D shape. EPFL

Mori3's triangular, modular design allows it to fuse with its companions, and could one day make it into space.

In order to keep costs low while maximizing utility, engineers are getting creative with their potential robotic cargo. As governments and private companies set their sights on returning to the moon and, eventually, establishing a human presence on Mars, space-friendly robots are all the more crucial. Taking inspiration from biological swarm behaviors and geometrical patterns, researchers at Switzerland’s Ecole Polytechnique Fédérale de Lausanne (EPFL) recently showcased Mori3, a new line of shapeshifting, 2D triangular robots capable of combining to form virtually any 3D shape.

[Related: Foldable robots with intricate transistors can squeeze into extreme situations.]

As detailed in a paper published on Monday with Nature Machine Intelligence, the team’s modular, origami-like Mori3 machines “can be assembled and disassembled at will depending on the environment and task at hand,” Jamie Paik, the paper’s co-author and director of EPFL’s aptly-named Reconfigurable Robotics Lab, said in a statement.

Although prices are steadily falling, space is still at a premium when traveling to, well, space. Reaching low-Earth orbit via one of SpaceX’s Falcon 9 rockets, for example, can set you back approximately $1,200 per pound of payload—therefore, the more uses you can pack into a small design, the better. And Mori3s (or, more accurately, a team of Mori3s) appear up to the challenge.
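
That per-pound price is easy to reproduce from commonly cited public figures, used here as assumptions rather than numbers from the article's sources:

```python
falcon9_list_price_usd = 62e6  # assumed published Falcon 9 list price
leo_payload_kg = 22_800        # assumed maximum payload to low-Earth orbit
KG_PER_LB = 0.4536

price_per_lb = falcon9_list_price_usd / (leo_payload_kg / KG_PER_LB)
print(f"~${price_per_lb:,.0f} per pound to low-Earth orbit")  # ~$1,200/lb
```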

In the team’s proof of concept, Mori3 robots were able to shuffle around, handle and move objects, and interact with their users in a variety of configurations. Instead of specializing in a single task or function, Mori3 is meant as more of an all-purpose system, forming and reforming for astronauts’ various needs, from external station repairs to simply transporting materials throughout a lunar base or spacecraft.

[Related: The ISS’s latest delivery includes space plants and atmospheric lightning monitors.]

In footage first highlighted by The Daily Beast, multiple Mori3 bots are shown fusing together and altering their overall shape to form a single, walking quadrupedal machine. In another video, an array of flat, triangular Mori3s morphs and positions itself into the same upright, three-dimensional walking robot.

“We had to rethink the way we understand robotics,” added Christoph Belke, a robotics researcher at EPFL and one of study’s other co-authors. “These robots can change their own shape, attach to each other, communicate and reconfigure to form functional and articulated structures.”

Check out videos from EPFL showcasing a single Mori3’s 2D movement, as well as a team’s combined 3D capabilities below:

The post These 2D machines can shapeshift into moving 3D robots appeared first on Popular Science.

This lanternfly-egg-hunting robot could mean fewer bugs to squish https://www.popsci.com/technology/robot-kill-lanternfly/ Sat, 10 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=547640
Spotted lanternfly adult on leaf
That pop of color on the adult spotted lanternfly is a warning to predators—and property owners. Stephen Ausmus/USDA

It’s good to get them before they can fly away.

It’s that time of the year again. The invasive, crop-damaging spotted lanternflies are emerging, as they typically do in springtime. You may already start to see some of the polka-dotted nymphs out and about. As with the adult lanternflies, the advice from experts is to kill them on sight. 

But another way to prevent these pests from spreading is to scrape off and kill the egg masses that these bugs leave on wood, vehicles, and furniture. Inspecting every tree and every surface for lanternfly eggs is no fun task. That’s why a team of undergraduate engineering students at Carnegie Mellon University programmed a robot, called TartanPest, to do it. 

TartanPest was designed as a part of the Farm Robotics Challenge, where teams of students had to design a creative add-on to the preexisting tractor-like farm-ng robot in order to tackle a problem in the food and agriculture industry. 

TartanPest scrubbing an egg mass off a tree. Carnegie Mellon University

[Related: Taiwan sent mosquito-fighting robots into its sewers]

Since lanternflies voraciously munch on a variety of economically important crops like hardwoods, ornamentals, and grapevines, getting rid of them before they become a problem can save farms from potential damage. The solution from the team at Carnegie Mellon is a robot arm with a machine learning-powered vision system for spotting the egg masses, and an attachment that can brush them off. 

TartanPest in the wild. Carnegie Mellon University

The machine learning model was trained with 700 images of lanternfly egg masses from the platform iNaturalist, where citizen scientists can upload photos of plant or wildlife observations they have made. 
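
A few hundred labeled photos is classic transfer-learning territory: start from a backbone pretrained on generic images and fine-tune a new classification head. The sketch below shows that pattern with torchvision; the folder layout, class count, and hyperparameters are assumptions, since the article doesn't describe the team's actual model:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical folder layout: egg_masses/{egg_mass,background}/*.jpg
data = datasets.ImageFolder("egg_masses", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

# Pretrained backbone; swap the final layer for a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # egg mass vs. background

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train head only
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```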

Of course, TartanPest is not the first robot designed to keep humans from getting their hands dirty (murdering bugs, that is). Making robots that can find and kill harmful pests on farms has long been a topic of discussion among engineers, as these machines could be key to decreasing the amount of pesticides used. Beyond crops, bug-terminating robots could have a place in households, too. 

But ask yourself this if you’re squeamish about robots designed to kill bugs: Would you rather have laser-wielding robots snuff out cockroaches and mosquitoes, or would you prefer to suck it up and squish them yourself? 

Watch the robot at work: 

The post This lanternfly-egg-hunting robot could mean fewer bugs to squish appeared first on Popular Science.

Robot dog sniffs out fire ants without the painful sting https://www.popsci.com/technology/fire-ant-robot-dog/ Fri, 09 Jun 2023 19:00:00 +0000 https://www.popsci.com/?p=547487
Fire ants moving across ground
The robot dog identified ant nests with a 95 percent accuracy rate. Deposit Photos

Fire ants are a major nuisance, but scientists created a quadrupedal bot that can identify them better than humans.

From an ecological standpoint, most ants are great—they aerate soil, clean up organic matter, and help to spread plant seeds. Fire ants, on the other hand… well, you probably already know. The incredibly painful, invasive pests can cause serious harm to their surroundings by disrupting food chains and causing general chaos. It only takes one accidental encounter with the little monsters to know that you never want a repeat experience, but a new AI-powered robotic system could help reduce the number of painful run-ins by locating nests for eradication—no awful ant stings needed.

[Related: The terrifying way fire ants take advantage of hurricane floods.]

According to a new preprint paper highlighted on Friday by New Scientist, researchers at China’s Lanzhou University recently trained an open-source AI system on images of fire ant nests from varying angles and environmental conditions. From there, the engineers installed their program onto a quadrupedal Xiaomi CyberDog, then tasked it with surveying 300-square-meter nursery gardens for ant mounds. Once a mound was located, the robot dog “pawed” at it to disturb its residents, after which researchers stepped in to analyze the insects’ numbers and aggression levels to distinguish regular species from the invasive fire ants.

Impressively, the team’s ant-finding robot dog far outperformed three human control surveyors, even after each received an hour of pest identification and management training. Both the robot and its human competitors searched the same nursery fields for 10 minutes, but the AI system detected three times more nests while also identifying them more accurately at a 95 percent precision rate. The search robot reportedly only fell short when it came to identifying smaller nests recently founded by a colony’s queen.
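
For clarity, “precision” here means the share of the robot's flagged mounds that were genuinely nests. A one-liner with invented counts (the preprint's raw tallies aren't given in this article):

```python
flagged = 60          # hypothetical number of mounds the robot flagged
true_positives = 57   # hypothetical flags that were real nests
print(f"precision = {true_positives / flagged:.0%}")  # -> 95%
```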

[Related: Save caterpillars by turning off your outdoor lights.]

Although the system is still in its early stages, researchers say that a version built on a more advanced robot with greater battery life, maneuverability, and speed could further optimize these fire ant search-and-destroy missions. 

But then again, with an estimated 20 quadrillion ants across the world, even the most advanced future ant-identifying robots will likely have their work cut out for them.

The post Robot dog sniffs out fire ants without the painful sting appeared first on Popular Science.

Taiwan sent mosquito-fighting robots into its sewers https://www.popsci.com/technology/mosquito-killing-robot/ Fri, 09 Jun 2023 14:00:00 +0000 https://www.popsci.com/?p=547292
a tunnel in a sewer system
Mosquito larvae could be hiding out in sewers. Denny Müller / Unsplash

They were equipped with insecticide and hot water blasters.

Mosquitoes are a problem, especially when they’re carrying viruses like dengue and Zika that they can pass to unsuspecting humans with a bite. Since there’s no widely available vaccine against dengue, public health experts have to focus on controlling the blood-sucking critters themselves. As cities grew, mosquitoes, in search of standing water, took to the sewers to breed, making them harder to monitor. In response to this problem, a team of scientists at Taiwan’s National Mosquito-Borne Diseases Control Research Center had an idea: send in robots.

It’s a method that’s been tested by other countries, but often with flying robots that keep an eye on the ground below, and not with remote-controlled crawlers that snoop in sewers. In a new study published this week in PLOS Neglected Tropical Diseases, the team dispatched unmanned vehicles underground to scope out and eliminate mosquito larvae that have congregated in ditches under and around Kaohsiung city in southern Taiwan. After all, getting the pests before they develop wings is much easier than trying to catch them in the air. 

[Related: Spider robots could soon be swarming Japan’s aging sewer systems]

Robots at work. Chen et al., PLOS Neglected Tropical Diseases

These multipurpose robots come with a suite of tools, digital cameras, and LED lights that help them visualize the sewer environment, detect mosquito larvae in areas with standing water, and either spray the area with insecticide or blast it with hot water. The wheeled robots crawl at a rate of 5 meters per minute (about 16 feet per minute). They’re also designed in a way that prevents them from being overturned in areas where it would be difficult for humans to set them right again. The targets for these robots are mosquitoes in the genus Aedes, which contains several species that commonly carry infectious tropical diseases. 

[Related: Look inside London’s new Super Sewer, an engineering marvel for rubbish and poo]

To surveil mosquito activity around the ditches, scientists also set up a series of Gravitraps to lure and capture female mosquitoes. The team later analyzed these specimens to see where the dengue-positive mosquitoes tended to go. In many ditches with high concentrations of dengue-positive mosquitoes, the surrounding traps showed that dengue positivity rates dropped after the robots were deployed (probably because the mosquito population as a whole took a dip as well), indicating that the bots could be a useful tool for disease control and prevention. Of course, they could always be further improved: with better sensors, AI, mobility, and autonomy, these robots could become more usable and practical.

The post Taiwan sent mosquito-fighting robots into its sewers appeared first on Popular Science.

Watch a ‘flying fish’ drone hover in the air and then swim underwater https://www.popsci.com/technology/amphibious-quadcopter-drone/ Thu, 08 Jun 2023 21:30:00 +0000 https://www.popsci.com/?p=547182
a drone flies over the water
Aaron Burden / Unsplash

It's at home buzzing over dry ground, but then plunges into a pool. Here's how the flying machine works.

At a conference last week, a group of engineers presented a fascinating new drone capable of flying through the air—and operating underwater. While it’s only a prototype, the researchers had to solve interesting problems in order to create a working aerial-aquatic quadcopter. 

A large group of researchers from seven universities and laboratories throughout China and Hong Kong contributed to what they’ve dubbed the TJ-FlyingFish. In the paper presented at the Institute of Electrical and Electronics Engineers (IEEE) 2023 International Conference on Robotics and Automation (ICRA) in London, they described how they developed the 3.6-pound quadcopter. 

According to the research paper, there have been previous multi-rotor aerial-aquatic hybrid prototypes, however, they have mostly relied on “standard aerial hardware constructions with water resistance.” In other words, instead of creating a drone truly capable of flying through the skies and also operating underwater, most researchers have designed aerial drones with some waterproofing so the machines don’t stop working when they splash down. 

The issue is that while both water and air are technically fluid mediums, they have vastly different properties. The different viscosities and densities affect how the propulsion system should operate, as well as the overall design of the drone. To operate in the air, the drone needs to be able to overcome gravity. To operate effectively underwater, the drone needs to be neutrally buoyant and able to generate enough thrust to overcome water resistance. 

As a result, achieving effective thrust works differently in the two mediums. The propulsion force is determined by the amount of mass the propellers move, and how fast they move it. To fly, the drone needs high-speed propellers that throw a lot of lightweight air around as quickly as possible. To move through the much denser water, though, the drone needs slower, higher-torque propellers. Instead of using two sets of propellers, the engineers designed an innovative system that uses one motor and two separate gearboxes: one rigged for aerial flight and the other for underwater movement. This has the advantage of keeping weight low, though it makes for an inherently complex system.
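
To get a sense of the scale of that difference, here is a back-of-the-envelope sketch in Python using classical actuator-disk momentum theory (not the team's actual design equations). The craft's 3.6-pound weight comes from the article; the rotor radius is an assumed placeholder.

```python
# A minimal sketch, assuming actuator-disk momentum theory and a made-up
# rotor radius. Hover thrust: T = 2 * rho * A * v_i**2, so the induced
# flow speed is v_i = sqrt(T / (2 * rho * A)).
import math

RHO_AIR = 1.225      # kg/m^3, sea-level air
RHO_WATER = 1000.0   # kg/m^3, fresh water

def induced_velocity(thrust_n: float, rho: float, disk_area_m2: float) -> float:
    """Flow speed a rotor disk must induce to produce a given thrust."""
    return math.sqrt(thrust_n / (2.0 * rho * disk_area_m2))

thrust_per_rotor = 1.63 * 9.81 / 4    # N; the 3.6 lb craft split over 4 rotors
area = math.pi * 0.1 ** 2             # m^2; assumed 0.1 m rotor radius

v_air = induced_velocity(thrust_per_rotor, RHO_AIR, area)
v_water = induced_velocity(thrust_per_rotor, RHO_WATER, area)
print(f"air:   {v_air:.2f} m/s")      # ~7.2 m/s
print(f"water: {v_water:.2f} m/s")    # ~0.25 m/s
print(f"ratio: {v_air / v_water:.0f}x slower in water for the same thrust")
```

The roughly 29-fold drop in required flow speed for the same thrust is why a single fixed gear ratio cannot serve both mediums efficiently, and why the dual-gearbox compromise earns its complexity.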

[Related: The Army’s Black Hawk helicopter replacement is a speedy tiltrotor aircraft]

When the drone takes off, it works like a regular quadcopter. The propellers force air down and give it enough lift to fly, and they are able to tilt and rotate independently so it can maneuver and hover. It has enough battery power to hover for six minutes. But it’s when it lands in water that things get interesting. 

The drone is slightly negatively buoyant, so when it hits the water it slowly starts to sink. Then, one way for it to travel underwater is for the whole drone to rotate, so the propellers pull it sideways through the water. Alternatively, the drone’s body can stay upright, and it can maneuver by tilting the props in different ways. These two modes, enabled by the tilting propellers and dual gearbox, allow it to operate effectively underwater and maneuver in three dimensions (just as it can in air) in a way that a waterproof drone equipped solely with traditional aerial propellers can’t. 

A huge amount of the researchers’ effort went into making the drone efficient underwater, rather than just designing a waterproof quadcopter. As a result, the prototype’s underwater performance is surprisingly good. It can spend around 40 minutes submerged and has a maximum dive depth of just under 10 feet. It was designed to be lightweight, so there was a tradeoff with waterproofing. Future prototypes will likely be able to go deeper. 

Most impressively, the drone can also take off from the water. By rising to the surface and rotating its propellers, it generates enough lift to get back into the sky. 

Although it’s only a prototype, it’s hard not to imagine uses for a drone like this. The researchers suggest remote sensing operations and disaster rescue, but it could also conduct civil and military surveys, capture incredible video footage, and inspire sci-fi authors. It could also pave the way for better waterproof drones, which could save some people a lot of money.

Take a look at this cool aerial-aquatic drone in action, below.

The post Watch a ‘flying fish’ drone hover in the air and then swim underwater appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This very sweaty robot measures how heat affects humans https://www.popsci.com/technology/sweat-robot-heat-climate-change/ Thu, 08 Jun 2023 19:00:00 +0000 https://www.popsci.com/?p=547019
Rear side of sweating robot
Researchers are using a perspiring robot to test bodily responses to extreme heat. Christopher Goulet/ASU

ANDI is helping researchers learn how extreme temperatures affect the human body—every part of it.

The post This very sweaty robot measures how heat affects humans appeared first on Popular Science.

]]>
Rear side of sweating robot
Researchers are using a perspiring robot to test bodily responses to extreme heat. Christopher Goulet/ASU

Researchers at Arizona State University are employing a breathing, perspiring, humanoid robot to study extreme temperatures’ effects on the body—including, yes, butt sweat. But as uncanny as ANDI (and its rear end) may look, the device could help experts better devise products, methods, and treatments to keep populations safe as the planet continues its dangerous, climate change-induced warming patterns.

Aside from such visible, sometimes socially awkward physical signs of heat stress, there’s actually a lot that experts still don’t know about humans’ biological reactions to high temperatures. But researchers like Jenni Vanos, an associate professor in ASU’s School of Sustainability, can’t simply plop test subjects into dangerously extreme heat scenarios and observe the dire effects. “There are situations we know of… where people are dying of heat and we still don’t fully understand what happened,” Vanos said in a recent statement. “ANDI can help us figure that out.”

[Related: 1 in 5 people are likely to live in dangerously hot climates by 2100.]

Funded by a National Science Foundation Major Research Instrumentation Grant and custom-built by Thermetrics, ASU’s ANDI is one of only two currently deployed at a research institution. It’s also the first thermal manikin capable of being used outdoors, thanks to novel internal cooling channels. Within this unique system, cool water circulates throughout ANDI’s “body” to keep its overall temperature low enough to endure extreme heat, while sensors measure numerous variables influencing human perceptions of heat, such as sun brightness and air convection.
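
The need for that cooling loop comes down to a simple energy balance: heat absorbed from sun and hot air either gets carried away by the circulating water or accumulates in the manikin. A toy lumped-capacitance sketch makes the point; every number below is an illustrative placeholder, not a figure from ASU or Thermetrics.

```python
# A minimal sketch, assuming a single lumped thermal mass:
# dT/dt = (Q_env - Q_cool) / C. All values are illustrative placeholders.

def simulate(q_env_w: float, q_cool_w: float, capacity_j_per_k: float,
             t0_c: float = 35.0, dt_s: float = 60.0, steps: int = 60) -> float:
    """Euler-integrate the manikin's temperature over `steps` minutes."""
    temp = t0_c
    for _ in range(steps):
        temp += (q_env_w - q_cool_w) * dt_s / capacity_j_per_k
    return temp

capacity = 70 * 4186.0   # a 70 kg "body" treated as mostly water, J/K
print(f"no cooling:      {simulate(400.0, 0.0, capacity):.1f} C after an hour")
print(f"matched cooling: {simulate(400.0, 400.0, capacity):.1f} C after an hour")
```

With a few hundred watts pouring in and nothing carrying it away, the manikin's internals would climb several degrees every hour; water circulation that absorbs the same load holds the temperature steady.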

These perceptions are as varied as humans’ health and body types—something ANDI can easily accommodate. “We can [enter] different BMI models, different age characteristics and different medical conditions,” said Ankit Joshi, an ASU research scientist and lead operator of ANDI. Joshi offers a diabetes patient, who has different thermal regulation abilities than a healthy person, as an example. “We can account for all this modification with our customized models.”

ASU’s ANDI generally resides in the aptly named “Warm Room,” a chamber built to simulate heat-exposure scenarios seen in regions around the world, complete with factors such as wind, solar radiation, and temperatures as high as 140 degrees Fahrenheit. Within the Warm Room, ANDI can accurately measure human sweating mechanics, such as changing core and skin temperatures.

Outside the Warm Room, however, ANDI is reportedly getting a walking buddy. Over the summer, the research team will pair the manikin with the non-humanoid MaRTy, ASU’s biometeorological heat robot. Both machines will stroll through ASU’s (very hot) campus, with MaRTy measuring the heat that hits a body, while ANDI can record how a body reacts to said temperatures.

[Related: Heat is the silent killer we should all be worried about.]

There is no single solution for adapting to rising temperatures, and researchers are well aware of this. “We’re trying to approach this from a very holistic point, but there’s not going to be a silver bullet for anything,” said Konrad Rykaczewski, an associate professor in ASU’s School for Engineering of Matter, Transport and Energy and the study’s principal investigator. Options under consideration include designing better cooling clothing, or even exoskeleton backpacks made specifically to help cool down their wearers.

The post This very sweaty robot measures how heat affects humans appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Inside Blue Abyss’ plan to build super-deep pools for astronauts and military bots https://www.popsci.com/technology/blue-abyss-astronaut-training-pools/ Thu, 08 Jun 2023 13:00:00 +0000 https://www.popsci.com/?p=546613
Person underwater looking up at outer space to represent Blue Abyss training tank. Illustrated.
Ard Su for Popular Science

The proposed 160-foot-deep pools would be training grounds for astronauts, or provide a watery place for those in the defense sector to test their equipment.

The post Inside Blue Abyss’ plan to build super-deep pools for astronauts and military bots appeared first on Popular Science.

]]>
Person underwater looking up at outer space to represent Blue Abyss training tank. Illustrated.
Ard Su for Popular Science

In Overmatched, we take a close look at the science and technology at the heart of the defense industry—the world of soldiers and spies.

VLADIMIR PLETSER stands in front of an eclectic audience—a group of people attending the Analog Astronaut Conference in Arizona. Analog astronauts are folks who simulate the lives of spacefarers, for science, while remaining on Earth. For days or weeks or months, they inhabit and experiment in facilities that mimic cosmic conditions, living as quasi-astronauts. Sometimes those facilities are settlements in the Utah desert that look like the Red Planet, such as the Mars Desert Research Station, run by the nonprofit Mars Society; others are mocked-up astro-habitats inside NASA centers, like the Human Exploration Research Analog at Johnson Space Center. 

But Pletser, on this Saturday in May, is here to discuss a new analog facility courtesy of Blue Abyss, a company where he serves as space operations training director. That’s an appropriate position, as he’s managed microgravity research for the European Space Agency, he’s worked in support of China’s space station, and he is an astronaut candidate for Belgium.

Blue Abyss, a company focused on enabling research, training, and testing in extreme environments, is planning to build the second-deepest pools in the world. (The deepest pool is in Dubai, built for recreation and filming.) The proposed bodies of water will be 160 feet deep and about 130 to 160 feet wide. They’ll be the largest pools in the world by volume. Giant bodies of water like these will be useful to astronauts who want to practice in an environment analogous to space—an oxygen-deprived place with neutral buoyancy. They’re also of interest to deep-sea divers and people in the offshore energy sector. Then there are operators in the defense industry who find themselves in the ocean for tasks like reconnaissance, search and rescue, and mine hunting. Blue Abyss aims to serve them all.
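
Those dimensions imply a staggering amount of water. A rough sanity check, treating the pool as a simple cylinder (an assumption; the company's designs include stepped levels, so this is closer to an upper bound):

```python
# A back-of-the-envelope volume estimate from the article's dimensions.
import math

depth_m = 160 * 0.3048               # 160 ft
width_m = 145 * 0.3048               # midpoint of the quoted 130-160 ft
volume_m3 = math.pi * (width_m / 2) ** 2 * depth_m
print(f"~{volume_m3:,.0f} cubic meters")   # on the order of 75,000 m^3
```

For comparison, NASA's Neutral Buoyancy Lab holds roughly 23,000 cubic meters, so a pool in this class would be several times larger.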

Diving in 

The pools will be built in Cornwall, England, and Brook Park, Ohio, near Cleveland, if all goes according to plan. And they won’t just be super-size swimming holes. They will have multiple underwater levels for research and provide enough room for big instruments and vehicles to enter the buildings and the water. 

“We envisage that the size and flexibility of our pools will enable some of the more complex planetary [extravehicular activity] that will be undertaken in the future on the moon and Mars to be practiced here on Earth, something that is still quite difficult to conduct in the neutral buoyancy pools that exist today, which weren’t developed with this in mind,” says John Vickers, Blue Abyss’ CEO. The facility will also be able to mimic the tides and currents of the real world and the varied lighting conditions people might find in the ocean or outer space. Specific chambers will simulate the pressure found at depths of up to thousands of meters. 

While Blue Abyss’ plans for facilities are not limited to big pools, they will be the centerpieces. Pools like these are not a totally unique idea in the astronaut world; NASA has a similar aqueous facility, called the Neutral Buoyancy Lab, in Houston—but it goes down only 40 feet. Roscosmos, Russia’s space agency, hosts its own Hydro Lab, of similar depth. China’s Neutral Buoyancy Facility in Beijing and the European Space Agency’s in Germany both dip down 33 feet. Blue Abyss’ pools will be bigger, and perhaps better able to accommodate the needs of future astronauts, who will likely be doing complex missions outside their spacecraft. 

Analog oceans aren’t exactly a new idea in the defense sector either; the US Navy, for instance, has an “indoor ocean” in Maryland, called the Maneuvering and Seakeeping Basin. It is 35 feet deep at its lowest point and is used to test scale models of subs. But existing facilities weren’t necessarily made for the seagoing vehicles of today, which are often autonomous, drone-like, or both.

Water worlds 

If they succeed, Blue Abyss’ projects will provide access via the private sector to the same types of facilities that are today, in some cases, run by governments. The pools will be for humans (be they space explorers or divers or small-craft conductors) and robots (be they remotely operated vehicles or autonomous underwater vehicles). “Centers will provide training, certification, and technology demonstration, ensuring that divers, operators, and other underwater professionals have the skills and knowledge to operate safely and effectively in challenging circumstances,” says Vickers.

Or at least, that’s the idea. “We’re still in the phase of trying to find funding,” Pletser tells those at the conference. “So the project that we have in England, in Cornwall, is going much slower than the one that we have here in the States.”

The Cleveland area—an aerospace hub—has been supportive of the venture, says Vickers, but the company has had a harder time in its home territory of England, the original proposed site. “Brexit, the pandemic, and a lack of sufficient vision within parts of government have meant that what should have been the world’s first site may now come second,” he says.

The interest of the analog astronauts gathered to hear Pletser speak likely isn’t what makes the general idea feasible, regardless of which country the pools are constructed in. After all, the world doesn’t have that many astronauts to train. 

But Blue Abyss is hoping to attract a much larger potential pool of people, and of money, from other contexts. Those in the offshore energy sector could practice working with cables and pipes, inspecting the foundations of wind turbines, and checking out vessels—without the serious dangers that come with conducting operations in the open ocean, where unpredictable currents, sea creatures, and other X factors can provide potentially deadly complications. Divers could train regardless of the weather. Scientists could test undersea research tools before sending them into an actual oceanic abyss. And makers of submersibles could test their craft and practice tricky maneuvers in a controlled environment. “So we not only address the space sector, but also the marine sector,” says Pletser. 

Importantly, that marine sector includes the defense field, where contractors help navies and coast guards make sense of the ocean’s mysteries.

Wet work 

One contractor that does such military work is General Dynamics. “We have a number of programs of record with the US Navy,” says Michael Guay, director for autonomous undersea systems. (A subsidiary, General Dynamics Electric Boat, makes nuclear subs for the Navy.) One of General Dynamics’ programs, Knifefish, has created a vehicle that can detect, classify, and identify mines placed underwater. Similar autonomous vehicles are also useful to the military for surveillance, reconnaissance, and even anti-submarine warfare.

Autonomous vehicles can also do hydrographic surveys. Such vehicles, which use sensors to measure aspects of the water like turbidity, salinity, and fluorescence, are useful for exploring for new oil and gas drilling sites and doing scientific assessments of the oceanic environment. 

General Dynamics has its own “full-ocean-depth-simulating pressure test tank,” says Guay, and its tanks can test full vehicles or just their parts. One of its facilities is in Quincy, Massachusetts. “So we have rapid access to Boston Harbor and Massachusetts Bay,” he says. 

Another company, called SEAmagine, sells small submarines and submersible boats—specifically those that require human drivers, a design that has been going out of fashion. “We didn’t believe that we were going to know our oceans by simply putting cameras and robots in the water,” says Charles Kohnen, SEAmagine’s co-founder. “Somehow the human element has to remain for us to understand.”

Today, SEAmagine, based in California, offers its craft to tourists, scientific researchers, yacht operators, and the defense sector. Its manned marine craft are specifically of interest to coast guards, which use them for search and rescue. Argentina’s, for instance, uses a SEAmagine vehicle to recover bodies from the ultra-deep water in the mountainous country. “They have these lakes that are 500 meters deep in the Andes,” says Kohnen. “And they’re very full of tourists because it’s beautiful. There’s a lot of tourists, and then lots of accidents.” These diminutive subs can ride on trailers on highways and be backed into the water like regular boats—not the case for your typical submersible.

But before either company does any of that fieldwork, its vehicles have to undergo rigorous testing. “The first, most important part of testing before you go in the ocean is going to be the pressure testing of the hull,” says Kohnen. 

That happens in pressure chambers, like the ones Blue Abyss’ facilities will include. “There aren’t that many in the world that are large enough and deep enough,” says Kohnen. Today, SEAmagine uses a variety of different chambers in the US to test its hulls and other components, but Kohnen says there’s room for more. “I’d like to see more testing facilities that can do the under-pressure testing,” he says. “As you build more of a blue economy for all these marine industries, the world could use some more labs.”

Blue Abyss hopes its facilities will be useful in certifying early-stage technology—the kind of tech that companies may not want to experiment with in the actual sea—by validating and demonstrating sensors, components, and autonomous capabilities in their relevant environments. That way, companies can learn whether the technology works or needs a tweak, and then demonstrate to agencies or customers that the parts and systems are ready. 

And analog astronauts may be eager to take the plunge, too.

Read more PopSci+ stories. 

The post Inside Blue Abyss’ plan to build super-deep pools for astronauts and military bots appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This robot ‘chef’ can follow video instructions to make a very simple salad https://www.popsci.com/technology/robot-salad-chef-maker/ Mon, 05 Jun 2023 14:30:00 +0000 https://www.popsci.com/?p=545805
Robot arm assembling salad from ingredients
The robot even created its own salad recipe after learning from examples. University of Cambridge

It may not make it on 'Top Chef,' but the robot's learning abilities are still impressive.

The post This robot ‘chef’ can follow video instructions to make a very simple salad appeared first on Popular Science.

]]>
Robot arm assembling salad from ingredients
The robot even created its own salad recipe after learning from examples. University of Cambridge

It may not win a restaurant any Michelin stars, but a research team’s new robotic ‘chef’ is still demonstrating some impressive leaps forward for culinary tech. As detailed in the journal IEEE Access, a group of engineers at the University of Cambridge’s Bio-Inspired Robotics Laboratory recently cooked up a robot capable of assembling a slate of salads after watching human demonstration videos. From there, the robot chef was even able to create its own, original salad based on its previous learning.

“We wanted to see whether we could train a robot chef to learn in the same incremental way that humans can—by identifying the ingredients and how they go together in the dish,” the paper’s first author Greg Sochacki, a Cambridge PhD candidate in information engineering, said in a statement.

[Related: 5 recipe apps to help organize your meals.]

What makes the team’s AI salad maker even more impressive is that the robot utilized a publicly available, off-the-shelf neural network already programmed to visually identify fruits and vegetables such as oranges, bananas, apples, broccoli, and carrots. The neural network also examined each video frame to identify the various objects, features, and movements depicted—for instance, the ingredients used, knives, and the human trainer’s face, hands, and arms. Afterwards, the videos and recipes were converted into vectors that the robot could then mathematically analyze.

Of the 16 videos observed, the robot correctly identified the recipe depicted 93 percent of the time, while recognizing only 83 percent of the human chef’s movements. Its observational abilities were so detailed, in fact, that the robot could tell when a recipe demonstration featured a double portion of an ingredient or when a human made a mistake, and recognize these as variations on a learned recipe rather than an entirely new salad. According to the paper’s abstract, “A new recipe is added only if the current observation is substantially different than all recipes in the cookbook, which is decided by computing the similarity between the vectorizations of these two.”

Sochacki went on to explain that, while the recipes aren’t complex (think an un-tossed vegetable medley minus any dressings or flourishes), the robot was still “really effective at recognising, for example, that two chopped apples and two chopped carrots is the same recipe as three chopped apples and three chopped carrots.”
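
That cookbook check can be sketched in a few lines. The ingredient-count vectors and cosine-similarity threshold below are stand-ins for the paper's actual vectorization, which isn't detailed here, but they reproduce Sochacki's apples-and-carrots example.

```python
# A minimal sketch, assuming ingredient-count vectors and a cosine cutoff.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Vector components: counts of [apples, carrots] seen in a demo video.
cookbook = {"apple-carrot salad": [2.0, 2.0]}
observation = [3.0, 3.0]   # bigger portions, same proportions

THRESHOLD = 0.95           # assumed cutoff for "substantially different"
best = max(cosine(observation, v) for v in cookbook.values())
print("new recipe" if best < THRESHOLD else "variation of a known recipe")
# cosine([3, 3], [2, 2]) == 1.0, so this prints "variation of a known recipe"
```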

[Related: What robots can and can’t do for a restaurant.]

That said, there are still some clear limitations to the robotic chef’s chops—mainly, it needs clear, steady video footage of a dish being made with unimpeded views of human movements and their ingredients. Still, researchers are confident video platforms like YouTube could be utilized to train such robots on countless new recipes, even if they are unlikely to learn any creations from the site’s most popular influencers, whose clips traditionally feature fast editing and visual effects. Time to throw on some old reruns of Julia Child’s The French Chef and get to chopping.

The post This robot ‘chef’ can follow video instructions to make a very simple salad appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A robot inspired by centipedes has no trouble finding its footing https://www.popsci.com/technology/centipede-robot-japan/ Thu, 01 Jun 2023 16:00:00 +0000 https://www.popsci.com/?p=545090
The team was inspired by certain “extremely agile” insects able to utilize their own dynamic instability to quickly change movement and direction.
The team was inspired by certain “extremely agile” insects able to utilize their own dynamic instability to quickly change movement and direction. Youtube

Researchers at Osaka University designed a 'myriapod' bot that uses less energy and computational power than other walking machines.

The post A robot inspired by centipedes has no trouble finding its footing appeared first on Popular Science.

]]>
The team was inspired by certain “extremely agile” insects able to utilize their own dynamic instability to quickly change movement and direction.
The team was inspired by certain “extremely agile” insects able to utilize their own dynamic instability to quickly change movement and direction. Youtube

Last month, engineers at Georgia Institute of Technology unveiled a creepy, crawly centipede-inspired robot sporting a plethora of tiny legs. The multitude of extra limbs wasn’t simply meant to pay homage to the arthropods, but rather to improve the robot’s maneuverability across difficult terrains while simultaneously reducing the number of complicated sensor systems. Not to be outdone, a separate team of researchers in Japan just showed off their own biomimetic “myriapod” robot, which leverages its own dynamic instability to move in curved motions, thus reducing its computational and energy requirements.

[Related: To build a better crawly robot, add legs—lots of legs.]

As detailed in an article published in Soft Robotics, a team at Osaka University’s Mechanical Science and Bioengineering department recently created a 53-inch-long robot composed of six segments, each sporting two legs and flexible joints. In a statement released earlier this week, study co-author Shinya Aoi explained that the team was inspired by certain “extremely agile” insects able to utilize their own dynamic instability to quickly change movement and direction. To mimic its natural counterparts, the robot included tiny motors that controlled an adjustable screw to increase or decrease each segment’s flexibility while in motion. Dialing up that flexibility leads to what’s known as a “pitchfork bifurcation”: basically, the forward-moving centipede robot becomes unstable.

But instead of tipping over or stopping, the robot can employ that bifurcation to begin moving in curved patterns to the left or right, depending on the circumstances. Taking advantage of this momentum allowed the team to control their robot extremely efficiently, and with much less computational complexity than other walking bots.
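
For the curious, the "pitchfork" is the textbook supercritical pitchfork of dynamical systems theory. A minimal numerical sketch of its normal form, x' = mu*x - x^3 (a stand-in, not the paper's full robot model), shows the behavior the team exploits, with x loosely playing the role of turning rate and mu of body-joint flexibility.

```python
# A minimal sketch, assuming the pitchfork normal form x' = mu*x - x**3.

def settle(mu: float, x0: float, dt: float = 0.01, steps: int = 20000) -> float:
    """Euler-integrate until the turning rate x settles near a steady state."""
    x = x0
    for _ in range(steps):
        x += (mu * x - x ** 3) * dt
    return x

for mu in (-0.5, 0.5):
    left, right = settle(mu, -0.01), settle(mu, +0.01)
    print(f"mu={mu:+.1f}: steady states ~ {left:+.3f} and {right:+.3f}")
# mu < 0: straight-ahead motion (x = 0) is stable.
# mu > 0: x = 0 destabilizes and two symmetric states (+/- sqrt(mu)) appear;
# the robot "falls" into a left or right curve instead of tipping over.
```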

As impressive as many bipedal robots now are, their two legs can often prove fragile and susceptible to failure, and losing control of one of those limbs can easily render the machine inoperable. Increasing the number of limbs, à la a centipede, creates system redundancies and also expands the terrains a robot can handle. “We can foresee applications in a wide variety of scenarios, such as search and rescue, working in hazardous environments or exploration on other planets,” explained Mau Adachi, one of the paper’s other co-authors.

[Related: NASA hopes its snake robot can search for alien life on Saturn’s moon Enceladus.]

Such serpentine robots are attracting the attention of numerous researchers across the world. Last month, NASA announced the latest advancements on its Exobiology Extant Life Surveyor (EELS), a snake-bot intended to potentially one day search Saturn’s icy moon Enceladus for signs of extraterrestrial life. Although EELS utilizes a slithering movement via “rotating propulsion units,” it’s not hard to envision it doing so alongside a “myriapod” partner—an image that’s as cute as it is exciting.

The post A robot inspired by centipedes has no trouble finding its footing appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The Dallas airport is testing out EV charging bots that roll around like suitcases https://www.popsci.com/technology/ziggy-ev-charging-robot-dallas-airport/ Wed, 31 May 2023 22:00:00 +0000 https://www.popsci.com/?p=544933
ZiGGY mobile EV charger connected to vehicle in parking lot.
ZiGGY will show off its skills this summer at Dallas-Fort Worth International Airport. EV Safe Charge/YouTube

Mobile EV charging stations may soon juice up travelers' parked cars while they're flying high.

The post The Dallas airport is testing out EV charging bots that roll around like suitcases appeared first on Popular Science.

]]>
ZiGGY mobile EV charger connected to vehicle in parking lot.
ZiGGY will show off its skills this summer at Dallas-Fort Worth International Airport. EV Safe Charge/YouTube

One of the world’s busiest airports will soon showcase an innovative, undeniably cute way to speed up travelers’ entrances and exits. As first announced earlier this month, Dallas Fort Worth International Airport (DFW) is partnering with EV Safe Charge to demonstrate how the company’s mobile electric vehicle charging station, ZiGGY, could be deployed in public spaces to economically and conveniently power up consumers’ parked cars.

[Related: Electric cars are better for the environment, no matter the power source.]

Electric vehicles are an integral component of the societal shift toward clean, renewable energy. Unfortunately, battery shortages stemming from supply chain issues, alongside the need for ever more charging stations, are hampering wider adoption of green transportation. ZiGGY obviously isn’t a catch-all fix, but it’s still a novel tool that both its makers and DFW hope to highlight over the summer as part of the airport’s series of EV charging solution demos.

“We know that electric vehicles will be a big part of the future of transportation,” Paul Puopolo, DFW’s Executive VP of Innovation, said in a statement, adding their air hub is “leaning into emerging technology now so that we are prepared to meet the needs of the airport community well into the future.”

ZiGGY itself resembles a large vending machine on wheels, which makes a certain amount of sense given that it dispenses electric fuel on demand. Using geofencing technology, app-based controls, and onboard cameras, ZiGGY can be deployed directly to the location of your parked EV, where a user can then connect the charging bot to their ride. To court additional revenue streams, each ZiGGY also features large video screens capable of displaying advertisements. Don’t worry about getting stuck behind a ZiGGY in use, either—its dimensions and mobility ensure each station can park itself behind an EV without the need for additional space.

Speaking with Ars Technica on Tuesday, EV Safe Charge’s founder and CEO Caradoc Ehrenhalt explained that the idea is to deploy ZiGGY fleets to commercial hubs around the world, such as additional airports, hotels, and shopping centers. “What we’re hearing from people… is the common thread of the infrastructure being very challenging or not possible to put in or not cost effective or takes too much time. And so there really is the need for a mobile charging solution,” said Ehrenhalt.

[Related: Why you barely see electric vehicles at car dealerships.]

Of course, such an autonomous vehicle could find itself prone to defacement and vandalism, but Ehrenhalt apparently opts to look on the sunnier side of things. “Ziggy is fairly heavy because of the battery,” they cautioned to Ars Technica. “It has cameras all around and sensors, including GPS, and so there potentially could be [vandalism], but I’m always hoping for the best of humanity.”

The post The Dallas airport is testing out EV charging bots that roll around like suitcases appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Google engineers used real dogs to develop an agility course for robots https://www.popsci.com/technology/google-barkour-robot-dog-agility/ Tue, 30 May 2023 23:00:00 +0000 https://www.popsci.com/?p=544460
Beagle flying over an obstacle hurdle
A robot dog 'Barkour' course may provide a new industry standard for four-legged machines. Deposit Photos

Researchers hope the 'Barkour' challenge can become an industry benchmark.

The post Google engineers used real dogs to develop an agility course for robots appeared first on Popular Science.

]]>
Beagle flying over an obstacle hurdle
A robot dog 'Barkour' course may provide a new industry standard for four-legged machines. Deposit Photos

It feels like nearly every week or so, someone’s quadrupedal robot gains yet another impressive (occasionally terrifying) ability or trick. But as cool as a Boston Dynamics Spot bot’s new capability may be, it’s hard to reliably compare newly developed talents to others when there still aren’t any industry standard metrics. 

Knowing this, a team of research scientists at Google are aiming to streamline evaluations through their new system that’s as ingenious as it is obvious: robot obstacle courses akin to dog agility competitions. It’s time to stretch those robotic limbs and ready the next generation of four-legged machines for Barkour.

[Related: This robot dog learned a new trick—balancing like a cat.]

“[W]hile researchers have enabled robots to hike or jump over some obstacles, there is still no generally accepted benchmark that comprehensively measures robot agility or mobility,” the team explained in a blog post published last week. “In contrast, benchmarks are driving forces behind the development of machine learning, such as ImageNet for computer vision, and OpenAI Gym for reinforcement learning (RL).” As such, “Barkour: Benchmarking Animal-level Agility with Quadruped Robots” aims to rectify that missing piece of research.

Illustrated side-by-side of concept and real robot agility course.
Actual dogs can complete the Barkour course in about 10 seconds, but robots need about double that. CREDIT: Google Research

In simple terms, the Barkour agility course is nearly identical to many dog courses, albeit much more compact at 5-by-5 meters to allow for easy setup in labs. The current standard version includes four unique obstacles—a line of poles to weave between, an A-frame structure to climb up and down, a 0.5m broad jump, and finally, a step up onto an end table.

To make sure the Barkour setup was fair to robots mimicking dogs, the team first offered up the space to actual canines—in this case, a small group of “dooglers,” aka Google employees’ own four-legged friends. According to the team, small dogs managed to complete the course in around 10 seconds, while robots usually take about double that time.

[Related: Dogs can understand more complex words than we thought.]

Each obstacle is scored between 0 and 1, based on target times set for small dogs in novice agility competitions (a pace of roughly 1.7 m/s). Each quadrupedal robot must complete every challenge on the course, and is penalized for failing, skipping stations, or maneuvering too slowly.
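
The exact penalty weights aren't reproduced in this article, so the following is only a hedged sketch of how a 0-to-1, time-referenced obstacle score might be tallied; the linear slowness penalty and the course numbers are invented for illustration.

```python
# A minimal sketch, assuming an illustrative scoring rule (not Google's).
TARGET_SPEED = 1.7   # m/s, the novice-dog pace cited by the team

def obstacle_score(completed: bool, skipped: bool,
                   time_s: float, path_len_m: float) -> float:
    """Score one obstacle on a 0-1 scale."""
    if skipped or not completed:
        return 0.0
    target_time = path_len_m / TARGET_SPEED
    # Linear slowness penalty: full marks at target pace, zero at 2x slower.
    return max(0.0, min(1.0, 2.0 - time_s / target_time))

course = [  # (completed, skipped, seconds taken, path length in meters)
    (True, False, 2.5, 3.4),   # weave poles
    (True, False, 4.0, 4.0),   # A-frame
    (True, False, 0.4, 0.5),   # broad jump
    (False, False, 0.0, 1.0),  # missed the step onto the end table
]
total = sum(obstacle_score(*o) for o in course) / len(course)
print(f"course score: {total:.2f}")   # 0.42 for this hypothetical run
```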

“We believe that developing a benchmark for legged robotics is an important first step in quantifying progress toward animal-level agility,” explained the team, adding that, moving forward, the Barkour system potentially offers industry researchers an “easily customizable” benchmark.

The post Google engineers used real dogs to develop an agility course for robots appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Watch the US Navy launch an ocean glider from a helicopter https://www.popsci.com/technology/navy-deploys-slocum-glider-from-helicopter/ Tue, 30 May 2023 19:02:21 +0000 https://www.popsci.com/?p=544473
glider drops from navy helicopter
The test took place in March. Bobby Dixon / US Navy

The Slocum glider is a type of robot designed to gather information about the sea's conditions.

The post Watch the US Navy launch an ocean glider from a helicopter appeared first on Popular Science.

]]>
glider drops from navy helicopter
The test took place in March. Bobby Dixon / US Navy

On March 15, the US Navy launched a torpedo-shaped robot into the Persian Gulf from the back of a helicopter. The robot was a Slocum glider, an uncrewed sensing tool that can collect data on ocean conditions below the surface. Dropping it from a helicopter was a proof of concept, a test towards expanding the array of vehicles that can put the robots into the water. As the US Navy seeks to know more about the waterways it patrols, distributing data collection tools can provide a more complete image of the ocean without straining the existing pool of sailors.

The US Navy helicopter, part of Helicopter Mine Countermeasures Squadron (HM) 15, delivered the glider by flying low and slow over the sea surface. The glider, held between railings facing seaward, slid forward, diving but not tumbling into the water. The setup enabled smooth entry into the water, keeping the robot from falling aft over teakettle.

“We are excited to be a part of another series of firsts! In this instance, the first launch from a helicopter and the first-ever successful glider deployment from an aircraft,” Thomas Altshuler, a senior VP at Teledyne, said in a release. While the test took place in March, it was only recently announced by both the Navy and Teledyne, makers of the Slocum glider. “Teledyne Marine​ takes pride in our continued innovation and support of the U.S. Navy as it expands the operational envelope of underwater gliders.”

This is what that entry looked like:

A second video, which appears to have been recorded on the phone of one of the sailors standing next to the rail, offers a different angle on the descent. The mechanics of the rail mount are clearer, from the horseshoe-shaped brace holding the glider in place to the release mechanism. When the glider hits the water, it makes a splash, big in the moment, then imperceptible in the wake of the rotor wash on the ocean surface.

For this operation, Teledyne says the glider was outfitted with “Littoral Battlespace Sensing – Glider (LBS-G) mine countermeasures (MCM) sensors.” In plain language, that means sensors designed to work near the shore and to collect information about the conditions of the sea where the Navy is operating. This data is used both by the Navy, to inform day-to-day operations, and by the Naval Oceanographic Office, to understand ocean conditions and inform present and future operations.

[Related: What it’s like to rescue someone at sea from a Coast Guard helicopter]

In addition to HM 15, the test was coordinated with the aforementioned Naval Oceanographic Office, which regularly uses glider robots to collect and share oceanographic data. The Slocum glider is electrically powered, with range and endurance dependent upon battery type. At a minimum, that means the glider can travel 217 miles over 15 days, powerlessly gliding at an average speed of roughly 0.6 mph. (Optional thruster power roughly doubles that speed.) With the most extensive power, Teledyne boasts that the gliders can range over 8,000 miles under water, stay in operation for 18 months, and work from shallows of 13 feet to depths of 3,280 feet.
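
That average speed falls straight out of the quoted range and endurance:

```python
# Unit check on the quoted minimum-endurance figures (no new data here).
miles, days = 217, 15
mph = miles / (days * 24)
print(f"{mph:.2f} mph average")   # ~0.60 mph; the optional thruster doubles it
```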

“Naval Meteorology and Oceanography Command directs and oversees more than 2,500 globally-distributed military and civilian personnel who collect, process, and exploit environmental information to assist Fleet and Joint Commanders in all warfare areas to make better decisions faster than the adversary,” notes the Navy description of the test.

Communicating that data from an underwater robot to the rest of the Navy is done through radio signals, satellite uplink, and acoustic communication, among other methods. These methods allow the glider to transmit data and receive commands from remote human operators. 

“The invention of gliders addressed a long-standing problem in physical oceanography: how do you measure changes in the ocean over long periods of time?” reads an Office of Naval Research history of the program. The Slocum gliders themselves date back to a concept floated in 1989, when speculative fiction imagined hundreds of autonomous floats surveying the ocean by 2021. The prototype glider was first developed in 1991 and had sea trials in 1998; today, according to that report, the Naval Oceanographic Office alone operates more than 150 gliders.

This information is useful generally, as it builds a comprehensive picture of the vast seas on which fleets operate. It is also specifically useful, as listening for acoustics underwater can help detect other ships and submarines. Undersea mines, hidden from the surface, can be found through sensing the sea, and revealing their location protects Navy ships, sailors, and commercial ocean traffic, too.

Releasing the gliders from helicopters expands how and where these exploratory machines can start operations, hastening deployment for the undersea watchers. When oceans are battlefields, knowing the condition of the waters first can make all the difference.

The post Watch the US Navy launch an ocean glider from a helicopter appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
A robot gardener outperformed human horticulturalists in one vital area https://www.popsci.com/technology/alphagarden-ai-robot-farming/ Tue, 30 May 2023 16:00:00 +0000 https://www.popsci.com/?p=544349
Gardener harvesting beets from ground.
AlphaGarden used as much as 44 percent less water than its human counterparts. Deposit Photos

UC Berkeley researchers claim their robotic farmer passes the green thumb Turing Test.

The post A robot gardener outperformed human horticulturalists in one vital area appeared first on Popular Science.

]]>
Gardener harvesting beets from ground.
AlphaGarden used as much as 44 percent less water than its human counterparts. Deposit Photos

Even after all that quarantine hobby honing, gardening can still be an uphill battle for those lacking a green thumb—but a little help from robotic friends apparently goes a long way. Recently, UC Berkeley unveiled AlphaGarden, a high-tech, AI-assisted plant ecosystem reportedly capable of cultivating a polycultural garden at least as well as its human counterparts. And in one particular, consequential metric, AlphaGarden actually excelled.

As detailed by IEEE Spectrum over the weekend, UC Berkeley’s gardening plot combined a commercial robotic gantry farming setup with AlphaGardenSim, an AI program developed in-house that draws on a high-resolution camera and soil moisture sensors. The developers also included automated drip irrigation, pruning, and even seed planting. AlphaGarden (unfortunately) doesn’t feature a fleet of cute, tiny farm bots scuttling around its produce; instead, the system resembles a small crane installation capable of moving above and tending to the garden bed.
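
AlphaGardenSim's internals aren't spelled out here, but the core closed-loop idea, watering only the measured deficit, can be sketched in a few lines; the threshold, conversion factor, and readings below are invented for illustration.

```python
# A minimal sketch, assuming invented sensor readings and thresholds
# (not taken from the Berkeley codebase).

def liters_to_apply(moisture: float, target: float = 0.35,
                    liters_per_unit: float = 2.0) -> float:
    """Water only the deficit, which is where most of the savings come from."""
    deficit = max(0.0, target - moisture)
    return deficit * liters_per_unit

# One reading per plant from the soil-moisture sensors (0 = dry, 1 = saturated).
readings = {"kale": 0.12, "turnip": 0.33, "cilantro": 0.41}
for plant, moisture in readings.items():
    print(f"{plant:8s}: apply {liters_to_apply(moisture):.2f} L")
# cilantro is above target, so it gets nothing, unlike fixed-schedule watering.
```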

[Related: How to keep your houseplants from dying this summer.]

As an added challenge, AlphaGarden was a polyculture creation, meaning it contained a variety of crops like turnips, arugula, lettuce, cilantro, kale, and other plants. Polyculture gardens reflect nature much more accurately, and benefit from better soil health, pest resilience, and lower fertilization requirements. At the same time, they are often much more labor-intensive than monoculture plots, given the plants’ varied needs and growth rates.

To test out AlphaGarden’s capabilities against humans, researchers built two identical plots and planted the same seeds in both. Over the next 60 days, AlphaGarden was largely left to its own literal and figurative devices, while professional horticulturalists tended the control plot. Afterwards, UC Berkeley repeated the same growth cycle, but this time allowed AlphaGarden to give its slower-growing plants an earlier start.

According to researchers, the results from the two cycles “suggest that the automated AlphaGarden performs comparably to professional horticulturalists in terms of coverage and diversity.” That might not be too surprising given all the recent, impressive AI advancements, but there was one metric in which AlphaGarden unequivocally outperformed its human controls: over the two test periods, the robotic system reduced water consumption by as much as a whopping 44 percent. As IEEE Spectrum explained, that translates to several hundred liters less over the two-month period.

[Related: Quick and dirty tips to make sure your plants love the soil they’re in.]

Although researchers claim “AlphaGarden has thus passed the Turing Test for gardening,” referencing the much-debated marker for robotic intelligence and sentience, there are a few caveats here. For one, these commercial gantry systems remain cost prohibitive for most people (the cheapest one looks to be about $3,000), and more research is needed to further optimize its artificial light sources and water usage. There’s also the question of scalability and customization, as different gardens have different shapes, sizes, and needs.

Still, in an era of increasingly dire water worries, it’s nice to see developers creating novel ways to reduce water consumption for one of the planet’s thirstiest industries.

The post A robot gardener outperformed human horticulturalists in one vital area appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Cozy knit sweaters could help robots ‘feel’ contact https://www.popsci.com/technology/robot-sweaters-yarn/ Thu, 25 May 2023 20:00:00 +0000 https://www.popsci.com/?p=543752
Robot arm encased in smart sweater material
The sensitive 'yarn' encases robots to direct them based on human touch and guidance. Carnegie Mellon

The snuggly garb is used to teach robots how to sense possible collisions in advance.

The post Cozy knit sweaters could help robots ‘feel’ contact appeared first on Popular Science.

]]>
Robot arm encased in smart sweater material
The sensitive 'yarn' encases robots to direct them based on human touch and guidance. Carnegie Mellon

Certain robots can certainly sense cold temperatures, but feeling cold is a whole other ordeal. And yet the world is now blessed with robot sweaters.

To be fair, the new, adorable garb recently designed by an engineering team at Carnegie Mellon University’s Robotics Institute isn’t intended to keep machines warm. As detailed in a research paper scheduled to be presented at 2023 IEEE International Conference on Robotics and Automation, the group utilized the properties of a knitted sweater to create a fabric capable of sensing pressure and contact. The cutting-edge textile can now help indicate direction, orientation, and even grip strength via physical touch. 

[Related: A new material creates clean electricity from the air around it.]

Like its yarn inspiration, the new “RobotSweater” fabric can be knitted into whatever three-dimensional shape is needed, and thus fitted over robots’ uneven shapes and surfaces. The material features two layers of conductive metallic fibers; between them sits a lace-like insulating pattern. When pressure pushes the conductive layers together through the lace’s gaps, a closed circuit is generated and subsequently detected by sensors.

In order to ensure the metallic yarn didn’t degrade or break with usage, the team wrapped the wires around snap fasteners at the end of each stripe in the fabric. “You need a way of attaching these things together that is strong, so it can deal with stretching, but isn’t going to destroy the yarn,” James McCann, an assistant professor in Carnegie Mellon’s School of Computer Science (SCS), explained in a statement.
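
The stripe-based construction suggests how such a fabric can be read out: drive the stripes in one conductive layer and check the stripes in the other for continuity. The sketch below uses that standard matrix-scanning scheme with assumed stripe counts; CMU's actual electronics and software are not public here.

```python
# A minimal sketch, assuming an 8x8 stripe grid and a stubbed hardware read.

def scan(closed_circuit) -> list[tuple[int, int]]:
    """Drive each row stripe and check every column stripe for continuity."""
    ROWS, COLS = 8, 8    # assumed stripe counts
    touches = []
    for r in range(ROWS):
        for c in range(COLS):
            if closed_circuit(r, c):   # hardware read, stubbed below
                touches.append((r, c))
    return touches

# Stub standing in for the real continuity measurement: a palm pressing
# a 2x2 patch near the shoulder of the sleeve.
pressed = {(2, 3), (2, 4), (3, 3), (3, 4)}
print(scan(lambda r, c: (r, c) in pressed))
```

Tracking how those touch points move from one scan to the next is what lets the fabric report direction and orientation, not just contact.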

To demonstrate their creation, researchers dressed up a companion robot in their RobotSweater, then pushed it to direct its head and body movement. On a robotic arm, the fabric could respond to guided human pushes, while grabbing the arm itself opened and closed a gripping mechanism.

[Related: Dirty diapers could be recycled into cheap, sturdy concrete.]

Swaddling robots in smart sweaters isn’t just fashionable—it could prove extremely valuable in industrial settings by improving human worker safety. According to the team, most current safety barriers are rigid and shield-like; encasing machines in flexible, touch-aware fabric, however, could make them able to “detect any possible collision,” said Changliu Liu, an assistant professor of robotics in the SCS. Moving forward, the team hopes to integrate touchscreen-style inputs like swiping and pinching motions to direct robots. Even if that takes a while to realize, at least the machines will look stylish and cozy.

The post Cozy knit sweaters could help robots ‘feel’ contact appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Wendy’s wants underground robots to deliver food to your car https://www.popsci.com/technology/wendys-underground-delivery-robot/ Thu, 18 May 2023 16:30:00 +0000 https://www.popsci.com/?p=541984
Wendy's chain restaurant at night.
Wendy's wants to automate its drive-thru. Batu Gezer / Unsplash

The concept is similar to a pneumatic tube system.

The post Wendy’s wants underground robots to deliver food to your car appeared first on Popular Science.

]]>
Wendy's chain restaurant at night.
Wendy's wants to automate its drive-thru. Batu Gezer / Unsplash

Wendy’s announced this week that it is going to try using underground autonomous robots to speed up how customers collect online orders. The burger joint plans to pilot the system designed by “hyperlogistics” company Pipedream, and aims to be able to send food from the kitchen to designated parking spots.

Wendy’s seems to be on a quest to become the most technologically advanced fast food restaurant in the country. Last week, it announced that it had partnered with Google to develop its own AI system (called Wendy’s FreshAI) that could take orders at a drive-thru. This week, it’s going full futuristic. (Pipedream’s current marketing line is “Someday we’ll use teleportation, until then we’ll use Pipedream.”)

According to a PR email sent to PopSci, digital orders now make up 11 percent of Wendy’s total sales and are growing, on top of the 75 to 80 percent of orders that are placed at a drive-thru.

The proposed autonomous system aims “to make digital order pick-up fast, reliable and invisible.” When customers or delivery drivers are collecting an online order, they pull into a dedicated parking spot with an “Instant Pickup portal,” where there will be a drive-thru style speaker and kiosk to confirm the order with the kitchen. In a matter of seconds, the food is then sent out by robots moving through an underground series of pipes using “Pipedream’s temperature-controlled delivery technology.” The customer can then grab their order from the kiosk without ever leaving their car. Apparently, the “first-of-its-kind delivery system” is designed so that drinks “are delivered without a spill and fries are always Hot & Crispy.”

[Related: What robots can and can’t do for a restaurant]

Wendy’s is far from the first company to try and use robots to streamline customer orders, though most go further than the parking lot. Starship operates a delivery service on 28 university campuses while Uber Eats is still trialing sidewalk delivery robots in Miami, Florida; Fairfax, Virginia; and Los Angeles, California. Whether these knee-height six-wheeled electric autonomous vehicles can graduate from school and make it into the real world remains to be seen.

The other big semi-autonomous delivery bets are aerial drones. Wing, a subsidiary of Google-parent Alphabet, unveiled a device called the Auto-Loader earlier this year. It also calls for a dedicated parking spot and aims to make it quicker and easier for staff at partner stores to attach deliveries to one of the company’s drones. 

What sets Wendy’s and Pipedream’s solution apart is that it all happens in a space the restaurant controls. Starship, Uber Eats, and Wing are all trying to bring robots out into the wider world, where they can get attacked by students, take out power lines, and otherwise have to deal with humans, street furniture, and the chaos of existence. Provided Wendy’s abides by building ordinances and any necessary health and safety laws, cost is the only thing stopping it from adding tube-dwelling robots to every restaurant the company controls. Really, the option Wendy’s is trialing has more in common with a pneumatic tube system—hopefully it will be a bit more practical.

The post Wendy’s wants underground robots to deliver food to your car appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This helpful robot uses a camera to find items for people with dementia https://www.popsci.com/technology/memory-robot-dementia/ Mon, 15 May 2023 17:00:00 +0000 https://www.popsci.com/?p=541200
Fetch robot picking up dry erase marker off of table
A new 'artificial memory' can log and locate missing items for users. University of Waterloo

Researchers designed a new object-detection algorithm allowing robots to remember the locations of items they 'see.'

The post This helpful robot uses a camera to find items for people with dementia appeared first on Popular Science.

]]>
Fetch robot picking up dry erase marker off of table
A new 'artificial memory' can log and locate missing items for users. University of Waterloo

Researchers at Canada’s University of Waterloo have unveiled a new program for personal assistance robots that uses episodic memory and object-detection algorithms to help locate lost items. Although the program was designed specifically to aid patients suffering from cognitive issues, the team believes its advancements could eventually find their way onto people’s smartphones or tablets.

Dementia affects approximately 1 in 10 Americans over the age of 65, while another 22 percent of the same population contends with mild cognitive impairment. Symptoms vary between individuals, but forgetfulness is a common issue that can disrupt the day and increase stress levels both for those living with these conditions and for their caregivers.

Knowing this, a four-person team at the University of Waterloo created an algorithm and uploaded it to a commercial Fetch mobile manipulator robot, endowing the machine with a memory log of the individual objects detected by its onboard video camera. Once enabled, the Fetch robot noted the time and date whenever it spotted an object in its field of view. The researchers also designed a graphical user interface (GUI) that lets individuals pick and label which detected objects they want to track. Searching for a label via keyboard entry then brings up Fetch’s “highly accurate” location log, according to a statement released on Monday.
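
A hedged sketch of what such an episodic log can look like in code; the class and field names are illustrative, not Waterloo's implementation.

```python
# A minimal sketch, assuming detections arrive as (label, map position) pairs.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ObjectMemory:
    log: dict[str, list[tuple[datetime, tuple[float, float]]]] = field(default_factory=dict)

    def observe(self, label: str, xy: tuple[float, float]) -> None:
        """Called once per detection from the onboard camera's detector."""
        self.log.setdefault(label, []).append((datetime.now(), xy))

    def last_seen(self, label: str) -> str:
        if label not in self.log:
            return f"never seen a '{label}'"
        when, (x, y) = self.log[label][-1]
        return f"'{label}' last seen {when:%Y-%m-%d %H:%M} near ({x:.1f}, {y:.1f})"

memory = ObjectMemory()
memory.observe("keys", (3.2, 1.4))   # map coordinates in meters
memory.observe("mug", (0.8, 2.0))
print(memory.last_seen("keys"))
```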

[Related: The latest recommendations for preventing dementia are good advice for everyone.]

“The long-term impact of this is really exciting,” said Ali Ayub, a postdoctoral fellow in electrical and computer engineering and study co-author. “A user can be involved not just with a companion robot but a personalized companion robot that can give them more independence.”

Caregiving robotics is a rapidly expanding field that is showing promise in a number of areas. Recently, researchers at the Munich Institute of Robotics and Machine Intelligence announced Garmi, a personal assistant designed to help elderly users for telemedicine appointments, and potentially even physical tasks like opening bottles and serving meals.

Although Ayub and their colleagues have only tested their visual-based algorithm amongst themselves, the team hopes to soon conduct further trials—first with people without disabilities, then with people dealing with dementia and other cognitive issues. While Ayub’s team conceded that disabled individuals could potentially find the GUI and robot intimidating, they believe the system could still prove extremely beneficial for their caregivers and family members.

The post This helpful robot uses a camera to find items for people with dementia appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
This lawn-mowing robot can save part of your yard for pollinators https://www.popsci.com/technology/husqvarna-rewilding-mower-mode/ Mon, 15 May 2023 14:30:00 +0000 https://www.popsci.com/?p=541155
Pink clover meadow and blue sky.
Husqvarna's Rewilding Mode saves one tenth of yard for natural growth. Deposit Photos

Husqvarna has introduced a new autopilot mode for its mowers that omits a portion of owners' yards to promote sustainability.

The post This lawn-mowing robot can save part of your yard for pollinators appeared first on Popular Science.

]]>
Pink clover meadow and blue sky.
Husqvarna's Rewilding Mode saves one tenth of yard for natural growth. Deposit Photos

This month marks the fifth anniversary of “No Mow May,” an annual environmental project dedicated to promoting sustainable, eco-friendly lawns via a 31-day landscaping moratorium. In doing so, the brief respite gives bees and other pollinators a chance to do what they do best: contribute to a vibrant, healthy, and biodiverse ecosystem. To keep the No Mow May momentum going, Swedish tech company Husqvarna has announced a new, simple feature for its line of robotic lawnmowers: a “rewilding” mode that ensures 10 percent of owners’ lawns remain untouched for pollinators and other local wildlife.

While meticulously manicured lawns are part of the traditional suburban American mindset, they come at steep ecological costs such as biodiversity loss and massive amounts of water waste. The Natural Resource Defense Council, for instance, estimates that grass lawns consume almost 3 trillion gallons of water each year alongside 200 million gallons of gas for traditional mowers, as well as another 70 million pounds of harmful pesticides. In contrast, rewilding is a straightforward, self-explanatory concept long pushed by environmentalists and sustainability experts that encourages a return to regionally native flora for all-around healthier ecosystems.

[Related: Build a garden that’ll have pollinators buzzin’.]

While convincing everyone to adopt rewilding practices may seem like a near-term impossibility, companies like Husqvarna are hoping to set the literal and figurative lawnmower rolling with its new autopilot feature. According to Husqvarna’s announcement, if Europeans set aside just a tenth of their lawns, the cumulative area would amount to four times the size of the continent’s largest nature preserve.

Enabling Rewilding Mode takes only a few taps within the product line’s Automower Connect app, and the feature can be customized to change the overall shape, size, and placement of the rewilding zones. Once established, the robotic mower’s onboard GPS system tracks which areas of an owner’s lawn are off-limits and reserved for bees, butterflies, and whatever else wants to set up shop.
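
Under the hood, a rewilding zone is essentially a geofence that the mower's planner treats as off-limits. As a rough Python sketch (not Husqvarna's software; the zone coordinates are hypothetical), a standard point-in-polygon test is enough to decide whether a GPS fix falls inside a protected area:

def point_in_polygon(point, polygon):
    """Ray-casting test: does a GPS fix fall inside a rewilding zone?
    point: (x, y); polygon: list of (x, y) vertices."""
    x, y = point
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y):  # this edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

rewilding_zone = [(0, 0), (10, 0), (10, 5), (0, 5)]  # a made-up corner of a lawn
print(point_in_polygon((4, 2), rewilding_zone))      # True -> skip mowing here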

Of course, turning on Rewilding Mode means owning a Husqvarna robotic mower that supports the setting—and at a minimum of around $700, such a tool may be out of many lawn care enthusiasts’ budgets. Even so, that doesn’t mean you should abandon giving rewilding a try on your own lawn. It’s easy to get started, and as the name suggests, a rewilded patch doesn’t take much maintenance once it’s thriving. If nothing else, there are still two weeks left in No Mow May, so consider postponing your weekend outdoor chore for a few more days.

A fleet of humanoid, open-source robots could change robotics research https://www.popsci.com/technology/nsf-quori-robot-research/ Tue, 09 May 2023 16:00:00 +0000 https://www.popsci.com/?p=539990
Two researchers standing next to Quori humanoid robot
Over two dozen Quori robots are heading to research teams across the country. Shivani Jinger/OSU

Not all robots are created equal—and the National Science Foundation wants to help level the playing field to speed up research.


Immense strides in human-robot interaction have been made over the past few years. But the robots involved tend to differ considerably from one another. The lack of an affordable, generalized, modular robotic platform hampers many researchers’ progress, along with their ability to share and compare findings.

The National Science Foundation, an independent US government-funded agency supporting research and education, wants to accelerate advancements in robotics, and is offering a $5 million fleet of standardized humanoid robots to speed things along. On Monday, the NSF announced plans to distribute another 50 of its Quori bots to various research projects, with assistance from Oregon State University, the University of Pennsylvania’s GRASP Laboratory, and the robotics software company Semio.

[Related: Meet Garmi, a robot nurse and companion for Germany’s elderly population.]

First designed with support from the NSF’s Computer and Information Science and Engineering (CISE) Community Research Infrastructure, Quori robots feature an omnidirectional wheeled base, an expressive video-screen face, two gesturing arms, and a bowing spine. Quori is made to function both in labs and “in the wild,” according to its official description.

A previous pilot program built and tested 10 Quori robots that were subsequently awarded to research teams, including one from Carnegie Mellon University, which used its model to study social behavior and communication methods between humans and robots.

The new multimillion-dollar expansion will see many more of these standardized humanoid bots made available to applicants. All of Quori’s hardware designs are available as open-source, meaning anyone can access them to potentially build their own versions.

“A big hurdle in robotics research has been the lack of a common robot to work with,” Bill Smart, a professor of mechanical, industrial, and manufacturing engineering in OSU’s College of Engineering and project co-lead, explained in a statement.  “It’s tough to compare results and replicate and build on each other’s work when everyone is using a different type of robot. Robots come in many shapes and sizes, with different types of sensors and varying capabilities.”

[Related: Robot trash cans have survived a New York City field test.]

Alongside OSU project co-lead Naomi Fitter, Smart’s team will primarily set up and maintain a resource network for the Quori fleet, as well as beta test the robots. The project aims to soon connect both researchers and students through online collaborations, events, and various other opportunities in hopes of “building a community of roboticists that can learn from one another and advance the pace of research.”

According to Smart, pairing newcomers with experienced researchers can quickly bring them up to speed, while also increasing diversity and access in a field disproportionately composed of white male researchers.

NASA hopes its snake robot can search for alien life on Saturn’s moon Enceladus https://www.popsci.com/technology/eels-robot-saturn-enceladus-moon/ Mon, 08 May 2023 19:00:00 +0000 https://www.popsci.com/?p=539793
Concept art of NASA EELS snake robot on icy surface of Saturn's moon, Enceladus
The 200-pound robot is designed to maneuver both across ice and underwater. NASA/JPL-CalTech

EELS could one day wriggle its way into Enceladus' hidden oceans in search of extraterrestrial life.


At least 83 moons orbit Saturn, and experts believe its most reflective one could harbor life underneath its icy surface. To find out, NASA scientists hope to send a massive serpentine robot to scour Enceladus, both atop its frozen ground and, perhaps, within the hidden ocean underneath.

As CBS News highlighted on Monday, researchers and engineers are nearing completion of their Exobiology Extant Life Surveyor (EELS) prototype. The 16-foot-long, 200-pound snakelike bot is capable of traversing both ground and watery environments via “first-of-a-kind rotating propulsion units,” according to NASA’s Jet Propulsion Laboratory. These repeating units can act as tracks, gripping mechanisms, and underwater propellers, depending on the demands of the surrounding environment. The “head” of EELS also includes 3D mapping technology alongside real-time video recording and transmission capabilities to document its extraplanetary adventure.
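
Because the same rotating units serve as tracks, grippers, and propellers, part of the high-level control problem is a mode switch keyed to the sensed surroundings. A schematic Python sketch of that mapping (ours, not JPL's actual logic):

def propulsion_mode(medium, in_vent=False):
    """Map the sensed environment to a drive mode for the rotating units
    (schematic only; the real EELS control logic is far richer)."""
    if medium == "water":
        return "propeller"        # screws provide underwater thrust
    if in_vent:
        return "grip"             # brace against the vent walls
    return "track"                # roll across surface ice

print(propulsion_mode("ice"))                  # track
print(propulsion_mode("ice", in_vent=True))    # grip
print(propulsion_mode("water"))                # propeller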

[Related: Saturn’s rings have been slowly heating up its atmosphere.]

In theory, EELS would traverse the surface of Enceladus toward one of the moon’s many “plume vents,” which it could then enter and use as a passageway toward their oceanic source. Over 100 of these vents were discovered at Enceladus’ southern pole by the Cassini space probe during its tenure around Saturn. Scientists have since determined that the fissures emitted water vapor into space containing amino acids, which are considered pivotal in the creation of lifeforms.

EELS goes ice-skating in a rink next to researchers. CREDIT: NASA/JPL-Caltech.

To assess its maneuverability, NASA researchers have already taken EELS out for test drives at sites such as an ice skating rink in Pasadena, California, and on an excursion to the Athabasca Glacier in Canada’s Jasper National Park. Should all go as planned, the team hopes to present a finalized concept by fall 2024. But be prepared to wait a while to see it in action on Enceladus—EELS’ journey to the mysterious moon would reportedly take roughly 12 years. Even if it never makes the trip, however, the robotic prototype could prove extremely useful much closer to home: according to the Jet Propulsion Lab, EELS shows promise for exploring the polar caps of Mars, or even ice sheet crevasses here on Earth.

[Related: Saturn has a slushy core and rings that wiggle.]

Enceladus’ fascinating environment was first unveiled thanks to NASA’s historic Cassini space probe. Launched in 1997, the spacecraft arrived after a seven-year voyage and began transmitting data and images of the planet and its moons back to Earth. After 13 years of service, the decommissioned Cassini descended toward Saturn, where it was vaporized by the upper atmosphere’s intense pressure and heat. Although NASA could have left Cassini to drift once its fuel ran out, the agency opted for the controlled disposal because of the slim possibility of a crash into Enceladus or Titan, which might have disrupted the potential life-bearing ecosystems scientists hope to one day discover.

To build a better crawly robot, add legs—lots of legs https://www.popsci.com/technology/centipede-robot-georgia-tech/ Mon, 08 May 2023 11:00:00 +0000 https://www.popsci.com/?p=539360
centipede robot
The centipede robot from Georgia Tech is a rough terrain crawler. Georgia Institute of Technology

Researchers hope that more limbs will allow them to have fewer sensors.


When traveling over rough and unpredictable terrain, the more legs the better—at least for robots. Balancing on two legs is hard; on four, it’s slightly easier. But what if you had many, many legs, like a centipede? Researchers at the Georgia Institute of Technology have found that giving a robot multiple connected legs lets the machine easily clamber over landscapes with cracks, hills, and uneven surfaces, without the extensive sensor systems that would otherwise help it navigate its environment. Their results are published this week in the journal Science.

The team has previously modeled the motion of these creepy critters. In this new study, they created a framework for operating their centipede-like robot influenced by mathematician Claude Shannon’s communication theory, which posits that, to overcome noise when transmitting a signal between two points, it’s better to break the message into discrete, repeating units.

“We were inspired by this theory, and we tried to see if redundancy could be helpful in matter transportation,” Baxi Chong, a physics postdoctoral researcher, said in a news release. Their creation is a robot with linked segments, like a model train, with two legs protruding from each segment to let it “walk.” The idea is that, once told to head to a destination, the legs make contact with the surface along the way and pass information about the terrain to the other segments, which adjust their motion and position accordingly. The team put the robot through a series of real-world and computer trials to see how it walked, how fast it could go, and how it performed on grass, blocks, and other rough surfaces.
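
In the paper's framing, that redundancy lets the body shrug off contact errors without any sensing at all. The toy Python sketch below (ours, not the Georgia Tech code; the segment count and phase lag are invented) shows the open-loop caricature: every segment runs the same oscillation, phase-shifted from its neighbor, so the body forms a traveling wave that no sensor has to coordinate.

import math

def leg_commands(t, n_segments=8, phase_lag=0.6, freq=1.5):
    """Open-loop 'wave' gait: each segment repeats its neighbor's
    oscillation with a fixed phase offset -- no terrain sensing."""
    commands = []
    for i in range(n_segments):
        phase = 2 * math.pi * freq * t - i * phase_lag
        left = math.sin(phase)          # left leg swing angle (normalized)
        commands.append((left, -left))  # right leg mirrors the left
    return commands

# One snapshot of the body wave at t = 0.25 s
for segment, (left, right) in enumerate(leg_commands(0.25)):
    print(f"segment {segment}: left={left:+.2f} right={right:+.2f}")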

[Related: How a dumpy, short-legged bird could change water bottle designs]

“One value of our framework lies in its codification of the benefits of redundancy, which lead to locomotor robustness over environmental contact errors without requiring sensing,” the researchers wrote in the paper. “This contrasts with the prevailing paradigm of contact-error prevention in the conventional sensor-based closed-loop controls that take advantage of visual, tactile, or joint-torque information from the environment to change the robot dynamics.”

They repeated the experiment with robots that had different numbers of legs (six, 12, and 14). In future work, the researchers say they want to home in on the optimal number of legs for their centipede-bot so that it can move smoothly in the most cost-effective way possible.

“With an advanced bipedal robot, many sensors are typically required to control it in real time,” Chong said. “But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing.” 

Stunt or sinister: The Kremlin drone incident, unpacked https://www.popsci.com/technology/kremlin-drone-incident-analysis/ Sat, 06 May 2023 11:00:00 +0000 https://www.popsci.com/?p=539413
Drones photo
Deposit Photos

There is a long history of drones being used in eye-catching and even dangerous ways.


Early in the morning of May 3, local Moscow time, a pair of explosions occurred above the Kremlin. Videos of the incident appeared to show two small drones detonating—ultramodern tech lit up against the venerable citadel. The incident was exclusively the domain of Russian social media for half a day, before Russian President Vladimir Putin declared it a failed assassination attempt.

What actually happened in the night sky above the Russian capital is being pieced together both in public and in secret. Open-source analysts, examining publicly available information, have constructed a picture of the event and the videos’ release, which forms a good starting point.

Writing at Radio Liberty, a US-government-funded Russian-language outlet, reporters Sergei Dobrynin and Mark Krutov point out that a video showing smoke above the Kremlin was published around 3:30 am local time on a Moscow Telegram channel. Twelve hours later, Putin released a statement on the attack, and then, write Dobrynin and Krutov, “several other videos of the night attack appeared, according to which Radio Liberty established that two drones actually exploded in the area of the dome of the Senate Palace with an interval of about 16 minutes, arriving from opposite directions. The first caused a small fire on the roof of the building, the second exploded in the air.”

That the drones exploded outside a symbolic target, without reaching a practical one, could be by design, or it could owe to the nature of Kremlin air defense, which may have shot the drones down at the last moment before they became more threatening. 

Other investigations into the origin, nature, and means of the drone incident are likely being carried out behind the closed doors and covert channels of intelligence services. Without being privy to those conversations, and aware that information released by governments is only a selective portion of what is collected, it’s possible to instead answer a different set of questions: could drones do this? And why would someone use a drone for an attack like this?

To answer both, it is important to understand gimmick drones.

What’s a gimmick drone?

Drones, especially the models able to carry a small payload and fly long enough to travel a practical distance, can be useful tools for a variety of real functions. Those can include real-estate photography, crop surveying, creating videos, and even carrying small explosives in war. But drones can also carry less-useful payloads, and be used as a way to advertise something other than the drone itself, like coffee delivery, beer vending, or returning shirts from a dry cleaner. For a certain part of the 2010s, attaching a product to a drone video was a good way to get the media to write about it. 

What stands out about gimmick drones is not that they do something only a drone could do, but that the people behind the stunt use a drone as a publicity technique for something else. In 2018, commercial drones were allegedly used in an assassination attempt against Venezuelan president Nicolás Maduro, flying at Maduro and then exploding in the sky, away from people and without reports of injury.

As I noted at the time about gimmick drones, “In every case, the drone is the entry point to a sales pitch about something else, a prelude to an ad for sunblock or holiday specials at a casual restaurant. The drone was always part of the theater, a robotic pitchman, an unmanned MC. What mattered was the spectacle, the hook, to get people to listen to whatever was said afterwards.”

Drones are a hard weapon to use for precision assassination. Compared to firearms, poisoning, explosives in cars or buildings, or a host of other attacks, drones represent a clumsy and difficult method. Wind can blow the drones off course, they can be intercepted before they get close, and the flight time of a commercial drone laden with explosives is in minutes, not hours.

What a drone can do, though, is explode in a high-profile manner.

Why fly explosive-laden drones at the Kremlin?

Without knowing the exact type of drone or the motives of the drone operator (or operators), it is hard to say exactly why one was flown at and blown up above one of Russia’s most iconic edifices of state power. Russia’s government initially blamed Ukraine, before moving on to attribute the attack to the United States. The United States denied involvement in the attack, and US Secretary of State Antony Blinken said to take any Russian claims with “a very large shaker of salt.”

Asked about the news, Ukraine’s President Zelensky said the country fights Russia on its own territory, not through direct attacks on Putin or Moscow. The war has seen successful attacks on Putin-aligned figures and war proponents in Russia, as well as on family members of Putin allies, though attribution for these attacks remains at least somewhat contested; the United States has attributed at least one of them to Ukrainian efforts.

Some war commentators in the US have floated the possibility that the attack was staged by Russia against Russia, as a way to rally support for the government’s invasion. However, that would demonstrate that Russian air defenses and security services are inept enough to miss two explosive-laden drones flying over the capital and would be an unusual way to argue that the country is powerful and strong. 

Ultimately, the drone attackers may not have conducted this operation to achieve any direct kill or material victory, but rather as a proof of concept, showing that such attacks are possible. It also suggests that claims of the inviolability of Russian airspace are, at least for small enough flying machines and covert enough operatives, a myth.

In that sense, the May 3 drone incident has a lot in common with the May 1987 flight of Mathias Rust, an amateur pilot in Germany who safely flew a private plane into Moscow and landed it in Red Square, right near the Kremlin. Rust’s flight ended without bloodshed or explosions, and took place in a peacetime environment, but it demonstrated the hollowness of the fortress state whose skies he flew through.

Researchers built a ‘SoftZoo’ to virtually test animal-inspired robots https://www.popsci.com/technology/softzoo-animal-robots/ Fri, 05 May 2023 17:00:00 +0000 https://www.popsci.com/?p=539279
Young panda eating branch while sitting in tree.
Yes, there's a pandabot option. Deposit Photos

The open-source testing ground could help engineers envision future soft robotic designs.


There are so many animal-inspired soft robots out there at this point that you could easily pack an entire zoo with them. Adorable as that idea is, such a menagerie is unlikely to appear in the real world anytime soon. That said, a virtual zoo filled with digital soft robot prototypes will soon be available to researchers hoping to design and optimize their own creations.

A team at MIT recently unveiled SoftZoo, an open framework platform that simulates a variety of 3D model animals performing specific tasks in multiple environmental settings. “Our framework can help users find the best configuration for a robot’s shape, allowing them to design soft robotics algorithms that can do many different things,” MIT PhD student and project lead researcher Tsun-Hsuan Wang said in a statement. “In essence, it helps us understand the best strategies for robots to interact with their environments.”

While MIT notes similar platforms already exist, SoftZoo reportedly goes further by simulating design and control algorithms atop virtual biomes like snow, water, deserts, or wetlands. For instance, instead of a program only offering animal models like seals and caterpillars moving in certain directions, SoftZoo can place these designs in numerous settings via what’s known as a “differentiable multiphysics engine.”

[Related: Watch this robotic dog use one of its ‘paws’ to open doors.]

Soft robots have quickly shown themselves to be extremely promising at navigating natural, real-world environments. Outside controlled laboratory settings, everyday clutter can prove extremely challenging for traditional robots. Soft variants’ malleability and adaptability, however, make them well suited for difficult situations such as volatile search-and-rescue scenarios involving collapsed buildings and swift-moving waters. The MIT team’s open-source SoftZoo program allows designers to simultaneously optimize a robot’s body and brain instead of relying on multiple expensive, complicated systems.
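
In spirit, co-design means the body parameters and the controller parameters sit in one optimization loop, with the differentiable engine supplying gradients for both. The generic Python sketch below captures that loop using finite differences and a made-up objective; it illustrates the idea only and is not SoftZoo's actual API:

import random

def simulate(body, brain):
    """Stand-in for a differentiable physics rollout: score a soft robot
    described by body (stiffness, limb length) and brain (gain, phase)."""
    stiffness, limb_len = body
    gain, phase = brain
    # Invented smooth objective standing in for locomotion performance
    return (-(stiffness - 0.6) ** 2 - (limb_len - 1.2) ** 2
            - (gain - 0.8) ** 2 - (phase - 0.3) ** 2)

def co_design(steps=200, lr=0.05, eps=1e-4):
    params = [random.random() for _ in range(4)]  # [stiffness, limb_len, gain, phase]
    for _ in range(steps):
        grads = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps    # finite-difference probe of one parameter
            grads.append((simulate(bumped[:2], bumped[2:]) -
                          simulate(params[:2], params[2:])) / eps)
        params = [p + lr * g for p, g in zip(params, grads)]  # ascend the reward
    return params

print(co_design())  # converges near the optimum [0.6, 1.2, 0.8, 0.3]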

SoftZoo soft robot model examples. Credit: MIT/CSAIL

“This computational approach to co-designing the soft robot bodies and their brains (that is, their controllers) opens the door to rapidly creating customized machines that are designed for a specific task,” added Daniela Rus, paper co-author and director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor in the MIT Department of Electrical Engineering and Computer Science (EECS).

Of course, it’s one thing to simulate a soft robot, and another thing entirely to actualize it in the real world. “The muscle models, spatially varying stiffness, and sensorization in SoftZoo cannot be straightforwardly realized with current fabrication techniques, so we are working on these challenges,” explained Wang. Still, offering an open source program like SoftZoo allows researchers to experiment and test out their robot ideas in an extremely accessible way. From there, they can move on to making their best and most promising designs a reality.

Robot plants could be used to grow infrastructure in space from scratch https://www.popsci.com/science/plant-inspired-robots-colonize-mars/ Thu, 04 May 2023 01:00:00 +0000 https://www.popsci.com/?p=538662
A variable-stiffness, tendril-like soft robot (a polyethylene terephthalate tube) based on reversible osmotic actuation: an osmosis-driven system that controls its turgidity to perform sophisticated tasks. IIT-Istituto Italiano di Tecnologia

Barbara Mazzolai’s roboplants could analyze and enrich soil, search for water and other chemicals, and more.


This article was originally featured on MIT Press. It is excerpted from Dario Floreano and Nicola Nosengo’s book “Tales From a Robotic World.”

In the early 2010s, a new trend in robotics began to emerge. Engineers started creating robotic versions of salamanders, dragonflies, octopuses, geckos, and clams — an ecosystem of biomimicry so diverse the Economist portrayed it as “Zoobotics.” And yet Italian biologist-turned-engineer Barbara Mazzolai raised eyebrows when she proposed looking beyond animals and building a robot inspired by a totally different biological kingdom: plants. As fluid as the definition of the word robot can be, most people would agree that a robot is a machine that moves. But movement is not what plants are famous for, and so a robotic plant might at first sound, well, boring.

But plants, it turns out, are not static and boring at all; you just have to look for action in the right place and at the right timescale. When looking at the lush vegetation of a tropical forest or marveling at the colors of an English garden, it’s easy to forget that you are actually looking at only half of the plants in front of you. The best-looking parts, maybe, but not necessarily the smartest ones. What we normally see are the reproductive and digestive systems of a plant: the flowers and fruits that spread pollen and seeds and the leaves that extract energy from sunlight. But the nervous system, so to speak, that explores the environment and makes decisions is in fact underground, in the roots.

Roots may be ugly and condemned to live in darkness, but they firmly anchor the plant and constantly collect information from the soil to decide in which direction to grow to find nutrients, avoid salty soil, and prevent interference with the roots of other plants. They may not be the fastest diggers, but they’re the most efficient ones, and they can pierce the ground using only a fraction of the energy that worms, moles, or manufactured drills require. Plant roots are, in other words, a fantastic system for underground exploration — which is what inspired Mazzolai to create a robotic version of them.


Mazzolai’s intellectual path is a case study in interdisciplinarity. Born and raised in Tuscany, in the Pisa area that is one of Italy’s robotic hot spots, she was fascinated early on by the study of all things living, graduating in biology from the University of Pisa and focusing on marine biology. She then became interested in monitoring the health of ecosystems, an interest that led her to get her doctorate in microengineering and eventually to be offered by Paolo Dario, a biorobotics pioneer at Pisa’s Scuola Superiore Sant’Anna, the possibility of opening a new research line on robotic technologies for environmental sensing.

It was there, in Paolo Dario’s group, that the first seeds of her plant-inspired robots were planted. Mazzolai got in touch with a group at the European Space Agency (ESA) in charge of exploring innovative technologies that looked interesting but were still far away from applications, she recalls. While brainstorming with them, she realized space engineers were struggling with a problem that plants brilliantly solved several hundred million years ago.

“In real plants, roots have two functions,” says Mazzolai. “They explore the soil in search of water and nutrients, but even more important, they anchor the plant, which would otherwise collapse and die.” Anchoring happens to be an unsolved problem when designing systems that have to sample and study distant planets or asteroids. In most cases, from the moon to Mars and distant comets and asteroids, the force of gravity is weak. Unlike on Earth, the weight of the spacecraft or rover is not always enough to keep it firmly on the ground, and the only available option is to endow the spacecraft with harpoons, extruding nails, and drills. But these systems become unreliable over time if the soil creeps, provided they work in the first place. They didn’t work for Philae, for example, the robotic lander that arrived at the 67P/Churyumov–Gerasimenko comet in 2014 after a 10-year trip only to fail to anchor at the end of its descent, bouncing away from the ground and collecting just a portion of the planned measurements.

In a brief feasibility study carried out between 2007 and 2008 for ESA, Mazzolai and her team let their imagination run free and described an anchoring system for spacecrafts inspired by plant roots. The research group also included Stefano Mancuso, a Florence-based botanist who would later gain fame for his idea that plants display “intelligent” behavior, although of a completely different sort from that of animals. Mazzolai and her team described an ideal system that would reproduce, and transfer to other planets, the ability of Earth plants to dig through the soil and anchor to it.

In the ESA study, Mazzolai imagined a spacecraft descending on a planet with a really hard landing: The impact would dig a small hole in the planetary surface, inserting a “seed” just deep enough in the soil, not too different from what happens to real seeds. From there, a robotic root would start to grow by pumping water into a series of modular small chambers that would expand and apply pressure on the soil. Even in the best-case scenario, such a system could only dig through loose and fine dust or soil. The root would have to be able to sense the underground environment and turn away from hard bedrock. Mazzolai suggested Mars as the most suitable place in the solar system to experiment with such a system — better than the moon or asteroids because of the Red Planet’s low gravity and atmospheric pressure at surface level (respectively, about 1/3 and less than 1/100 of those found on Earth). Together with a mostly sandy soil, these conditions would make digging easier because the forces that keep soil particles together and compact them are weaker than on Earth.

At the time, ESA did not push forward with the idea of a plant-like planetary explorer. “It was too futuristic,” Mazzolai admits. “It required technology that was not yet there, and in fact still isn’t.” But she thought that others beyond the space sector would find the idea intriguing. After transitioning to the Italian Institute of Technology, in 2012, Mazzolai convinced the European Commission to fund a three-year study that would result in a plant-inspired robot, code-named Plantoid. “It was uncharted territory,” says Mazzolai. “It meant creating a robot without a predefined shape that could grow and move through soil — a robot made of independent units that would self-organize and make decisions collectively. It forced us to rethink everything, from materials to sensing and control of robots.”

The project had two big challenges: on the hardware side, how to create a growing robot, and on the software side, how to enable roots to collect and share information and use it to make collective decisions. Mazzolai and her team tackled hardware first and designed the robot’s roots as flexible, articulated, cylindrical structures with an actuation mechanism that can move their tip in different directions. Instead of the elongation mechanism devised for that initial ESA study, Mazzolai ended up designing an actual growth mechanism, essentially a miniature 3D printer that can continuously add material behind the root’s tip, thus pushing it into the soil.

It works like this. A plastic wire is wrapped around a reel stored in the robot’s central stem and is pulled toward the tip by an electric motor. Inside the tip, another motor forces the wire into a hole heated by a resistor, then pushes it out, heated and sticky, behind the tip, “the only part of the root that always remains itself,” Mazzolai explains. The tip, mounted on a ball bearing, rotates and tilts independent of the rest of the structure, and the filament is forced by metallic plates to coil around it, like the winding of a guitar string. At any given time, the new plastic layer pushes the older layer away from the tip and sticks to it. As it cools down, the plastic becomes solid and creates a rigid tubular structure that stays in place even when further depositions push it above the metallic plates. Imagine winding a rope around a stick and the rope becomes rigid a few seconds after you’ve wound it. You could then push the stick a bit further, wind more rope around it, and build a longer and longer tube with the same short stick as a temporary support. The tip is the only moving part of the robot; the rest of the root only extends downward, gently but relentlessly pushing the tip against the soil.
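
Strip away the hardware, and the control loop is simple: try to extend at the tip, read the resistance, and tilt away from soil the tip cannot fracture. A toy Python simulation of that decision loop (our illustration with invented numbers, not the Plantoid firmware):

import random

def soil_resistance(depth):
    """Stand-in for the pressure reading at the root tip (toy model)."""
    return random.random() + 0.02 * depth

def grow_root(max_steps=30, threshold=0.95):
    path = [(0.0, 0.0)]      # (depth, lateral offset) of each deposited segment
    heading = 0.0            # lateral drift per growth step
    for _ in range(max_steps):
        depth, offset = path[-1]
        if soil_resistance(depth) > threshold:
            heading += random.choice([-0.2, 0.2])         # tilt away from hard soil
        else:
            path.append((depth + 1.0, offset + heading))  # deposit one new segment
    return path

print(len(grow_root()) - 1, "segments deposited")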

The upper trunk and branches of the plantoid robot are populated by soft, folding leaves that gently move toward light and humidity. Plantoid leaves cannot yet transform light into energy, but Michael Graetzel, a chemistry professor at EPFL in Lausanne, Switzerland, and one of the world’s most cited scientists, has developed transparent and foldable films filled with synthetic chlorophyll capable of converting and storing electricity from light that one day could be formed into artificial leaves powering plantoid robots. “The fact that the root only applies pressure to the soil from the tip is what makes it fundamentally different from traditional drills, which are very destructive. Roots, on the contrary, look for existing soil fractures to grow into, and only if they find none, they apply just enough pressure to create a fracture themselves,” Mazzolai explains.


The plantoid project has attracted a lot of attention in the robotics community because of the intriguing challenges that it combines — growth, shape shifting, collective intelligence — and because of possible new applications. Environmental monitoring is the most obvious one: The robotic roots could measure changing concentrations of chemicals in the soil, especially toxic ones, or they could prospect for water in arid soils, as well as for oil and gas — even though, by the time this technology is mature, we’d better have lost our dependence on them as energy sources on planet Earth. They could also inspire new medical devices, such as safer endoscopes that move in the body without damaging tissue. But space applications remain on Mazzolai’s radar.

Meanwhile, Mazzolai has started another plant-inspired project, called Growbot. This time the focus is on what happens over the ground, and the inspiration comes from climbing trees. “The invasiveness of climbing plants shows how successful they are from an evolutionary point of view,” she notes. “Instead of building a solid trunk, they use the extra energy for growing and moving faster than other plants. They are very efficient at using clues from the environment to find a place to anchor. They use light, chemical signals, tactile perception. They can sense if their anchoring in the soil is strong enough to support the part of the plant that is above the ground.” Here the idea is to build another growing robot, similar to the plantoid roots, that can overcome void spaces and attach to existing structures. “Whereas plantoids must face friction, grow-bots work against gravity,” she notes. This new project may one day result in robot explorators that can work in dark environments with a lot of empty space, such as caves or wells.

But for all her robots, Mazzolai is still keeping an eye on the visionary idea that started it all: planting and letting them grow on other planets. “It was too early when we first proposed it; we barely knew how to study the problem. Now I hope to start working with space agencies again.” Plant-inspired robots, she says, could not only sample the soil but also release chemicals to make it more fertile — whether on Earth or a terraformed Mars. And in addition to anchoring, she envisions a future where roboplants could be used to grow entire infrastructure from scratch. “As they grow, the roots of plantoids and the branches of a growbot would build a hollow structure that can be filled with cables or liquids,” she explains. This ability to autonomously grow the infrastructure for a functioning site would make a difference when colonizing hostile environments such as Mars, where a forest of plant-inspired robots could analyze the soil and search for water and other chemicals, creating a stable structure complete with water pipes, electrical wiring, and communication cables: the kind of structure astronauts would like to find after a year-long trip to Mars.


Dario Floreano is Director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne (EPFL). He is the co-author, with Nicola Nosengo, of “Tales From a Robotic World: How Intelligent Machines Will Shape Our Future,” from which this article is excerpted.

Nicola Nosengo is a science writer and science communicator at EPFL. His work has appeared in Nature, the Economist, Wired, and other publications. He is the Chief Editor of Nature Italy.

Seals provided inspiration for a new waddling robot https://www.popsci.com/technology/seal-soft-robot/ Mon, 01 May 2023 16:00:00 +0000 https://www.popsci.com/?p=537958
Two seals laying on shore near water.
Pinnipeds are getting robotic cousins. Deposit Photos

Fin-footed mammals, aka pinnipeds, provided the template for a new soft robot.


It might be difficult to see at first, but if you squint just right, you can tell the latest animal-inspired robot owes its ungainly waddle to seals. Researchers at Chicago’s DePaul University looked at the movements of the aquatic mammal and its relatives for their new robot prototype—and while it may look a bit silly, the advances could one day help in extremely dire situations.

In their paper’s abstract, the team writes that they aimed to build a robot featuring “improved degrees of freedom, gait trajectory diversity, limb dexterity, and payload capabilities.” To do this, they studied the movements of pinnipeds—the technical term for fin-footed mammals such as seals, walruses, and sea lions—as an alternative to existing quadrupedal and soft-limbed robots. The final result is a simplified, three-limbed device that propels itself via undulating motions and is supported by a rigid “backbone” like those of its mammalian inspirations.

As also detailed last week by TechXplore, the robot’s soft limbs are each roughly 9.5 inches long by 1.5 inches wide and sheathed in a protective outer layer. Each limb is driven by fluid-filled actuators that provide varying degrees of stiffness. Changing the limbs’ rigidity steers the robot, a capability researchers say is generally missing from similar crawling machines.
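
A rough way to picture the control scheme: each limb runs a phase-shifted undulation, and softening one limb's actuator biases the thrust so the body turns. The toy Python sketch below (ours, not the DePaul team's code; the frequency and amplitudes are invented) shows the idea:

import math

def limb_targets(t, stiffness=(1.0, 1.0, 1.0), freq=2.0):
    """Toy pinniped gait: three limbs undulate in phase-shifted waves;
    a softened limb (stiffness < 1) contributes less thrust, turning the body."""
    targets = []
    for i, k in enumerate(stiffness):
        phase = 2 * math.pi * (freq * t - i / 3.0)
        targets.append(k * math.sin(phase))   # actuation amplitude per limb
    return targets

print(limb_targets(0.1))                              # straight "waddle"
print(limb_targets(0.1, stiffness=(0.4, 1.0, 1.0)))   # softened limb -> turn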

[Related: Robot jellyfish swarms could soon help clean the oceans of plastic.]

Interestingly, the team realized that their pinniped product actually moves faster when walking “backwards.” While in reverse, the robot waddled at a solid 6.5 inches per second, compared to just 4.5 inches per second during forward motion. “Pinnipeds use peristaltic body movement to propel forward since the bulk of the body weight is distributed towards the back,” explains the team in its research paper. “But, the proposed soft robot design has a symmetric weight distribution and thus it is difficult to maintain stability while propelling forward. As a consequence, the robot shows limited frontal movements. Conversely, when propelling backward, the torque imbalance is countered by the body.”

But despite the reversal and slightly ungainly stride, the DePaul University team believes soft robots such as their seal-inspired creation could one day come in handy for dangerous tasks, including nuclear site inspections, search and rescue efforts, and even future planetary explorations. It might be one small step for robots, but it may prove one giant waddle for pinniped propulsion tech.

This agile robotic hand can handle objects just by touch https://www.popsci.com/technology/robot-hand-sixth-sense/ Fri, 28 Apr 2023 18:15:00 +0000 https://www.popsci.com/?p=537548
A robotic hand manipulates a reflective disco ball in dim lighting.
The hand can spin objects like this disco ball without the need of 'eyes'. Columbia University ROAM Lab

Researchers designed a robot that doesn't need visual data to get a handle on objects.


The human hand is amazingly complex—so much so that most modern robots and artificial intelligence systems have a difficult time replicating how it truly works. Although machines are now pretty decent at grasping and placing objects, actual manipulation of their targets (i.e., assembly, reorienting, and packaging) remains largely elusive. Recently, however, researchers created an impressively dexterous robot after realizing it needed fewer, not more, sensory inputs.

A team at Columbia Engineering has just unveiled a five-digit robotic “hand” that relies solely on its advanced sense of touch, alongside motor learning algorithms, to handle difficult objects—no visual data required. Because of this, the new proof-of-concept is completely immune to common optical issues like dim lighting, occlusion, and even complete darkness.

[Related: Watch a robot hand only use its ‘skin’ to feel and grab objects.]

The new robot’s digits are equipped with highly sensitive touch sensors, and the hand contains 15 independently actuated joints. Irregularly shaped objects such as a miniature disco ball were placed into the hand for the robot to rotate and maneuver without dropping them. Alongside “submillimeter” tactile data, the robot relied on what’s known as “proprioception.” Often referred to as the “sixth sense,” proprioception includes abilities like sensing body position, force, and self-movement. These data points were then fed into a deep reinforcement learning program, which was able to simulate roughly one year of practice time in only a few hours via “modern physics simulators and highly parallel processors,” according to a statement from Columbia Engineering.
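
The key design choice is what the learning policy "sees": no pixels, just touch and joint state. Here is a schematic of how such an observation vector might be assembled (a sketch under our own assumption about the tactile sensor count, not Columbia's code):

import numpy as np

NUM_JOINTS = 15   # independently actuated joints, per the article
NUM_TAXELS = 5    # one fingertip touch reading per digit (our assumption)

def build_observation(joint_angles, joint_torques, contact_forces):
    """Concatenate proprioception (angles, torques) with tactile readings;
    note that no camera input appears anywhere in the vector."""
    return np.concatenate([joint_angles, joint_torques, contact_forces]).astype(np.float32)

obs = build_observation(
    np.zeros(NUM_JOINTS),   # proprioception: joint positions
    np.zeros(NUM_JOINTS),   # proprioception: joint torques
    np.zeros(NUM_TAXELS),   # tactile: fingertip contact forces
)
print(obs.shape)   # (35,) -> the input to the reinforcement learning policy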

In their announcement, Matei Ciocarlie, an associate professor in the departments of mechanical engineering and computer science, explained that “the directional goal for the field remains assistive robotics in the home, the ultimate proving ground for real dexterity.” While Ciocarlie’s team showed how this was possible without any visual data, they plan to eventually incorporate that information into their systems. “Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand,” they added.

[Related: AI is trying to get a better handle on hands.]

Ultimately, the team hopes to combine this dexterity and understanding with more abstract, semantic and embodied intelligence. According to Columbia Engineering researchers, their new robotic hand represents the latter capability, while recent advances in large language modeling through OpenAI’s GPT-4 and Google Bard could one day supply the former.

Tony Stark would love this new experimental materials lab https://www.popsci.com/technology/a-lab-materials-discovery/ Fri, 28 Apr 2023 14:21:08 +0000 https://www.popsci.com/?p=537487
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab.
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab. (Credit: Marilyn Sargent/Berkeley Lab), © 2023 The Regents of the University of California, Lawrence Berkeley National Laboratory

It’s operated by robotic arms and AI, and it runs around the clock.


Lawrence Berkeley National Laboratory has recently announced the completion of its ‘A-Lab,’ where the ‘A’ stands for artificial intelligence, automated, and accelerated. The $2 million lab is complete with three robotic arms, eight furnaces, and lab equipment all controlled by AI software, and it works around the clock. 

If it seems like a real-life replica of Marvel character Tony Stark’s lab, well, it’s not far off. It’s an entirely autonomous lab that can create and test up to 200 samples of new materials a day, accelerating materials science discoveries at an unprecedented rate while easing researchers’ workloads.

Researchers at A-Lab are currently working on materials for improved batteries and energy storage devices, hoping to meet urgent needs for sustainable energy use. The lab could spur innovation in many other industries as well.

“Materials development, which is so important for society, is just too slow,”  says Gerd Ceder, the principal investigator for A-Lab. 

Materials science is a field that identifies, develops, and tests materials and their application for everything from aerospace to clean energy to medicine.

Materials scientists typically use computers to predict novel, not-seen-in-nature materials that are stable enough to be used. Though a computer can generate theoretical inorganic compounds, identifying which novel compounds to make, figuring out how to synthesize them, and then evaluating their performance is a time-consuming process to do manually.

[Related: This tiny AI-powered robot is learning to explore the ocean on its own]

Additionally, computational tools have made designing materials virtually so much easier, which means that there is a surplus of novel materials that still need to be tested, creating a bottleneck effect.

“Sometimes you’re lucky and in two weeks of trying, you’ve made it and sometimes six months in the lab and you’re nowhere,” Ceder says. “So developing chemical synthesis routes to actually make that compound that you would like to get so much can be extremely time consuming.”

A-Lab works with the Materials Project, a database of hundreds of thousands of predicted materials run by founding director Kristin Persson. The project provides free access to thousands of computationally predicted novel materials, together with information on the compounds’ structures and some of their chemical properties, that researchers can use.

“In order to actually design new materials, we can’t just predict them in the computer,” Persson says. “We have to show that this is real.”

Experienced researchers can vet only a handful of samples in a working day. A-Lab, in theory, can produce hundreds of samples a day, more quickly and accurately. With its help, researchers can allocate more of their time to big-picture projects instead of grunt work.

Yan Zeng, a staff scientist leading A-Lab, compares the lab’s process to cooking: the lab is handed a new dish, in this case the target compound, and must find a recipe for it. Once researchers identify a novel compound with the required qualities, they send it to the lab. The AI system then creates new recipes from various combinations of more than 200 ingredients, or precursor powders, such as metal oxides containing iron, copper, manganese, and nickel.

The robot arms mix the slurry of powders together with a solvent, and then bake the new sample in furnaces to stimulate a chemical reaction that may or may not yield the intended compound. Following trial and error, the AI system can then learn and tweak the recipe until it creates a successful compound. 
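
The closed loop amounts to propose, synthesize, score, and update. A compressed Python sketch of that cycle (illustrative only; the proposal step here is random, whereas A-Lab's is model-driven and informed by the literature):

import random

PRECURSORS = ["Fe2O3", "CuO", "MnO2", "NiO"]   # a few of the ~200 powders

def propose_recipe():
    # In A-Lab an ML model proposes this; here we simply sample at random
    return {p: round(random.uniform(0.1, 1.0), 2)
            for p in random.sample(PRECURSORS, 2)}

def synthesize_and_score(recipe):
    # Stand-in for mixing, baking, and analyzing the sample; returns the
    # fraction of the target phase produced (an invented number)
    return random.random()

best_recipe, best_yield = None, 0.0
for trial in range(20):                 # the real lab runs up to 200 samples a day
    recipe = propose_recipe()
    phase_yield = synthesize_and_score(recipe)
    if phase_yield > best_yield:        # keep whatever has worked best so far
        best_recipe, best_yield = recipe, phase_yield
    # Failed recipes are logged too; they still constrain the next proposal

print(best_recipe, round(best_yield, 2))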

[Related: A simple guide to the expansive world of artificial intelligence]

AI software controls the movement of three robotic arms that work with lab equipment, and weigh and mix different combinations of starting ingredients. And the lab itself is also autonomous. That means it can make new decisions about what to do following failures, independently working through new synthesis recipes faster than a human can.

“I had not expected that it would do so well on the synthesis of novel compounds,” Ceder says. “And that was kind of the maiden voyage.” 

The speedup over human scientists comes not only from the AI-controlled robots, but also from software that can draw on around 100,000 synthesis recipes gathered from five million research papers.

Like a human scientist, A-Lab records details from every experiment, even documenting the failures.

Researchers do not publish data from failed experiments for many reasons, including limited time and funding, lack of public interest, and the perception that failure is less informative than success. However, failed experiments do have a valuable place in research. They rule out false hypotheses and unsuccessful approaches. With easy access to data from hundreds of failed samples created each day, they can better understand what works, and what does not.

The Marines are getting supersized drones for battlefield resupply https://www.popsci.com/technology/marines-large-resupply-drones/ Thu, 27 Apr 2023 20:40:51 +0000 https://www.popsci.com/?p=537422
A TRV-150 seen on April 20, 2023.
A TRV-150 seen on April 20, 2023. Raymond Valdez / US Army

The big flying machines are designed to carry about 150 pounds and can fly at about 67 miles per hour.


On April 11, the Department of Defense announced that it was allocating just over $8 million for 21 new delivery drones. These flying machines, officially called the TRV-150C Tactical Resupply Unmanned Aircraft Systems, are made by Survice Engineering in partnership with Malloy Aeronautics.

The TRV-150C is a four-limbed drone that looks like a quadcopter on stilts. Its tall landing legs allow it to take off with a load of up to 150 pounds of cargo slung underneath. The drone’s four limbs each mount two rotors, making the vehicle more of an octocopter than a quadcopter. 

The TRV drone family also represents the successful evolution of a long-running drone development program, one that a decade ago promised hoverbikes for humans and today is instead delivering uncrewed delivery drones.

The contract award came through the Navy and Marine Corps Small Tactical Unmanned Aircraft Systems program office, which focuses on ensuring that the people doing the actual fighting at the edge of combat get the exact robotic assistance they need. For the Marines, the idea has already been put into practice, not just theorized: an exercise involving drone resupply took place at Quantico, Virginia, at the end of March.

The Tactical Resupply Unmanned Aircraft System (TRUAS), as the TRV-150C is referred to in use, “is designed to provide rapid and assured, highly automated aerial distribution to small units operating in contested environments; thereby enabling flexible and rapid emergency resupply, routine distribution, and a constant push and pull of material in order to ensure a constant state of supply availability,” said Master Sergeant Chris Genualdi in a release about the event. Genualdi already works in the field of airborne and air delivery, so the delivery drone became an additional tool to meet familiar problems.

Malloy Aeronautics boasts that the drone has a range of over 43 miles; the Marines’ summary from Quantico gives it a range of 9 miles for resupply missions. Both numbers can be accurate: Survice puts the unencumbered range of the TRV-150 at 45 miles, while carrying 150 pounds of cargo cuts that range to 8 miles.

With a speed of about 67 mph and a flight process that is largely automated, the TRV-150C is a tool that can get meaningful quantities of vital supplies where they are needed, when they are needed. Malloy also boasts that drones in the TRV-150 family have batteries that can be easily swapped, allowing for greater operational tempo as the drones themselves do not have to wait for a recharge before being sent on their next mission.

These delivery drones use “waypoint navigation for mission planning, which uses programmed coordinates to direct the aircraft’s flight pattern,” the Marines said in a release, with Genualdi noting “that the simplicity of operating the TRUAS is such that a Marine with no experience with unmanned aircraft systems can be trained to operate and conduct field level maintenance on it in just five training days.”
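
Waypoint navigation at this level is little more than a coordinate list plus a feasibility check against payload-limited range. A back-of-the-envelope Python sketch using the figures above (67 mph cruise, 8-mile loaded range; the waypoints are invented):

import math

CRUISE_MPH = 67
LOADED_RANGE_MI = 8    # range while carrying ~150 pounds, per the article

def mission_feasible(waypoints):
    """waypoints: list of (x, y) positions in miles, starting at launch.
    Returns (fits within loaded range, total distance, ETA in minutes)."""
    dist = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    eta_min = dist / CRUISE_MPH * 60
    return dist <= LOADED_RANGE_MI, dist, eta_min

ok, dist, eta = mission_feasible([(0, 0), (2, 1), (4, 3)])
print(f"feasible={ok}, distance={dist:.1f} mi, eta={eta:.1f} min")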

Reducing the complexity of the drone to essentially a flying cart that can autonomously deliver gear where needed is huge. The kinds of supplies needed in battle are all straightforward—vital tools like more bullets, more meals, or even more blood and medical equipment—so attempts at life-saving can be made even if it’s unsafe for the soldiers to move towards friendly lines for more elaborate care.

Getting the drone down to just a functional delivery vehicle comes after years of work. In 2014, Malloy debuted a video of a reduced-scale hoverbike designed for a human to ride, using four rotors and a rectangular body. En route to becoming the basis for the delivery drone seen today, the hoverbike was explored by the US Army as a novel way to fly scouts around. That scout concept ultimately evolved into a resupply tool, which the Army tested in January 2017.

In 2020, the US Navy held a competition for a range of delivery drones at the Yuma Proving Grounds in Arizona. The entry by Malloy and Survice came in first place, and cemented the TRV series as the drones to watch for battlefield delivery. In 2021, British forces used TRV drones in an exercise, with the drones tasked with delivering blood to the wounded. 

“This award represents a success story in the transition of technology from U.S. research laboratories into the hands of our warfighters,” said Mark Butkiewicz, a vice president at SURVICE Engineering, in a release. “We started with an established and proven product from Malloy Aeronautics and integrated the necessary tech to provide additional tactical functionality for the US warfighter. We then worked with research labs to conduct field experiments with warfighters to refine the use of autonomous unmanned multirotor drones to augment logistical operations at the forward most edge of the battlefield.”

The 21 drones from the initial contract, alongside those already used for training, will give the Marines a head start in learning to rely on robots for resupply missions in combat. Genualdi expects the Marines to create a dedicated specialty to support the use of the drones, with commanders dispatching members to learn how to work alongside them.

The drones could also find use as exploration and rescue tools, flying through small gaps in trees, buildings, and rubble in order to get people the aid they need. In both peacetime and wartime uses, the drone’s merit is its ability to get cargo where it is needed without putting additional humans at risk of catching a bullet.

The post The Marines are getting supersized drones for battlefield resupply appeared first on Popular Science.

Robot jellyfish swarms could soon help clean the oceans of plastic https://www.popsci.com/technology/jellyfish-robot-ocean-pollution/ Wed, 26 Apr 2023 17:00:00 +0000 https://www.popsci.com/?p=536873
The Jellyfish-Bot is small, energy efficient, and virtually noiseless. Credit: MPI-IS

By simulating jellyfish movement with artificial muscles, the robots can safely kick up ocean trash for recycling.

The oceans are inundated with plastic. Despite the numerous flashy proposed solutions, there unfortunately still isn’t any surefire way to clean it all up. One of the most buzzed about ideas—underwater vacuuming—has recently come up against intense scrutiny for its potential collateral damage to marine ecosystems and wildlife. Meanwhile, even the more delicate alternatives often hinge upon large, cumbersome surface skimmers. To tackle some of these issues, scientists at Germany’s Max Planck Institute for Intelligent Systems (MPI-IS) have created a robotic trash collector inspired by some of the oceans’ oldest and most resilient residents—jellyfish.

Recently detailed in the research journal Science Advances, the team’s “Jellyfish-Bot” already shows promise in helping clean up the copious amounts of human-generated trash littering the planet’s aquatic environments. But unlike many other underwater cleaners, the prototype is incredibly small, energy-efficient, and nearly noiseless. Additionally, the hand-sized device doesn’t need to physically interact with its cleanup targets. Instead, the robot takes a cue from jellyfishes’ graceful movements via six limbs employing artificial muscles called hydraulically amplified self-healing electrostatic actuators, or HASELs.

As New Atlas explains, HASELs are essentially electrode-covered sacs filled with oil. When the electrodes receive a small amount of power—in this case, about 100 milliwatts—they become positively charged, then safely discharge into the negatively charged water around them. Alternating this charge forces the oil in the sacs to move back and forth, making the actuators flap in a way that generates momentum to push trash particles upward. From there, humans or other gathering tools can scoop up the detritus.
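
To picture the flapping mechanism, a toy model helps: a periodic drive signal charges and discharges the electrodes, and the limb’s stroke follows along. The drive frequency, deflection angle, and function names below are assumptions made for illustration; only the roughly 100-milliwatt figure comes from the reporting.

```python
import math

# Toy model of a HASEL-driven limb, not the MPI-IS control code: the drive
# signal alternates, the oil shifts back and forth, and the limb flaps at
# the drive frequency.
DRIVE_HZ = 1.0       # assumed flapping rate
PEAK_POWER_W = 0.1   # ~100 mW, per the article

def drive_power(t):
    """Rectified sinusoid: instantaneous power delivered to the electrodes."""
    return PEAK_POWER_W * abs(math.sin(2 * math.pi * DRIVE_HZ * t))

def limb_angle(t, max_deflection_deg=30.0):
    """Assume the limb's deflection tracks the alternating drive: charging
    bends the limb one way, discharging lets it swing back."""
    return max_deflection_deg * math.sin(2 * math.pi * DRIVE_HZ * t)

for step in range(5):
    t = step * 0.25
    print(f"t={t:.2f} s  power={drive_power(t) * 1000:5.1f} mW  "
          f"angle={limb_angle(t):+6.1f} deg")
```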

“When a jellyfish swims upwards, it can trap objects along its path as it creates currents around its body,” study author and postdoc in the MPI-IS Physical Intelligence Department Tianlu Wang explained in a statement. “In this way, it can also collect nutrients.”

Wang went on to describe how their robot similarly circulates water around it. “This function is useful in collecting objects such as waste particles,” Wang adds. “It can then transport the litter to the surface, where it can later be recycled.”

[Related: Ocean plastic ‘vacuums’ are sucking up marine life along with trash.]

Apart from generating currents, the Jellyfish-Bots’ actuators can also be divided up into separate responsibilities. In the team’s demonstrations, a prototype could use all six of its limbs for propulsion, or rely on two of them as claws to lightly grasp targets like an N95 face mask.

The biggest drawback at the moment is that a controlled Jellyfish-Bot still requires a wired connection for power, which hampers its range. Although researchers have been able to incorporate battery and wireless communication modules into the robots, the untethered versions cannot yet be steered along a desired path. Still, it’s easy to envision future iterations of the Jellyfish-Bot clearing this relatively small hurdle. If that is accomplished, fleets of the cute cleanup machines may soon be deployed as a safe, efficient, and environmentally harmless way to help tackle one of the most pressing threats to the environment.

The post Robot jellyfish swarms could soon help clean the oceans of plastic appeared first on Popular Science.

Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness https://www.popsci.com/technology/fish-disco-arctic-ocean/ Mon, 24 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=536004
Northern lights over the Arctic ocean. Credit: Oliver Bergeron / Unsplash

It's one of the many tools they use to measure artificial light’s impact on the Arctic ocean's sunless world.

During the winter, the Arctic doesn’t see a sunrise for months on end. Although completely immersed in darkness, life in the ocean goes on. Diurnal animals like humans would be disoriented by the lack of daylight, having been accustomed to regular cycles of day and night. 

But to scientists’ surprise, it seems that even the photosynthetic plankton—microorganisms that normally derive their energy from sunlight—have found a way through the endless night. These marine critters power the region’s ecosystem through the winter and into the spring bloom. Even without the sun, the daily pattern of animals migrating from the surface to the depths and back again (called diel vertical migration) remains unchanged.

However, scientists are concerned that artificial light could have a dramatic impact on this uniquely adapted ecosystem. The Arctic is warming fast, and the ice is getting thinner—that means more ships, cruises, and coastal developments are coming in, all of which can add light pollution to the underwater world. We know that artificial light is harmful to terrestrial animals and birds in flight, but its impact on ocean organisms is still poorly understood.

A research team called Deep Impact is trying to close this knowledge gap, as reported in Nature earlier this month. Doing the work, though, is no easy feat. Mainly, there’s a bit of creativity involved in conducting experiments in the darkness—researchers need to understand what’s going on without changing the behaviors of the organisms. Any illumination, even from the research ship itself, can skew their observations. This means that the team has to make good use of a range of tools that allow them to “see” where the animals are and how they’re behaving, even without light. 

One such invention is a specially designed circular steel frame called a rosette, which contains a suite of optical and acoustic instruments. It is lowered into the water to survey how marine life is moving under the ship. During data collection, the ship will make one pass across an area of water without any light, followed by another pass with the deck lights on. 

[Related: Boaty McBoatface has been a very busy scientific explorer]

There are a range of different rosettes, made up of varying instrument compositions. One rosette, called Frankenstein, can measure light’s effect on where zooplankton and fish move in the water column. Another, called Fish Disco, “emits sequences of multicolored flashes to measure how they affect the behavior of zooplankton,” according to Nature.

And of course, robots that can operate autonomously come in handy for occasions like these. Similar robotic systems have already been deployed on other aquatic missions, like exploring the ‘Doomsday glacier,’ scouring for environmental DNA, and listening for whales. In the absence of cameras, they can use acoustic-based tech, like echosounders (a sonar system), to detect objects in the water.

In fact, without the element of sight, sound becomes a key tool for perceiving without seeing. It’s how most critters in the ocean communicate with one another, and making sense of that sound becomes an important problem to solve. To that end, a few scientists on the team are testing whether machine learning can identify what’s in the water from the patterns of sound frequencies organisms reflect. So far, an algorithm in testing has been able to distinguish two species of cod.
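
As a rough sketch of that idea, the code below classifies synthetic “echo spectra” with a nearest-centroid rule: each species is summarized by its average acoustic energy across a few frequency bands, and a new echo is assigned to the closest summary. The band profiles, labels, and classifier choice are all stand-ins; the team’s actual model and data are not described in the report.

```python
import numpy as np

# Toy stand-in for acoustic species classification: organisms are told apart
# by the pattern of energy they reflect across echosounder frequency bands.
rng = np.random.default_rng(0)
BANDS = 4  # e.g., reflected energy in four frequency bands

def synth_echoes(profile, n):
    """Generate n noisy echo spectra around a species' mean response."""
    return profile + 0.05 * rng.standard_normal((n, BANDS))

cod_a = synth_echoes(np.array([0.9, 0.6, 0.3, 0.1]), 50)  # assumed profile
cod_b = synth_echoes(np.array([0.7, 0.8, 0.5, 0.2]), 50)  # assumed profile

centroids = {"cod A": cod_a.mean(axis=0), "cod B": cod_b.mean(axis=0)}

def classify(echo):
    """Assign an echo to whichever species' mean spectrum is closest."""
    return min(centroids, key=lambda s: np.linalg.norm(echo - centroids[s]))

print(classify(np.array([0.88, 0.62, 0.28, 0.12])))  # -> "cod A"
```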

The post Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness appeared first on Popular Science.

Robot trash cans have survived a New York City field test https://www.popsci.com/technology/new-york-robot-trash-can/ Sat, 22 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=535976
A treat for a very good bot. Credit: Cornell University

In a recent study, people in New York interacted with robotic trash cans on wheels. Here’s how it went.

Throwing out trash can be an icky, and sometimes even confusing, experience. To better understand how humans interact with robots, Cornell University researchers recently created and released two trash and recycling bots to do some dirty work in a Manhattan plaza. And for most of the people who interacted with the adorable barrel bots, the robots’ helpful interceptions of waste were welcomed.

The study involved two robots, one blue and one gray, mounted on recycled hoverboard parts and equipped with 360-degree cameras. The bots received all sorts of reactions, from onlookers expressing their appreciation to people treating them like playful dogs awaiting a treat. Some passersby even felt compelled to “feed” the robots, according to a Cornell press release.

The scientists behind the creation recently presented their study, called “Trash Barrel Robots in the City,” in the video program at the ACM/IEEE International Conference on Human-Robot Interaction. This isn’t the trashbots’ first trip into the real world—a trash barrel robot was deployed at Stanford a few years ago and was met by bystanders who quickly began to dote on it. According to The Verge in 2016, people became so smitten with the bot that “when it falls over they race to pick it up, even asking if it’s OK.”

[Related: Meet Garmi, a robot nurse and companion for Germany’s elderly population.]

Team leader Wendy Ju, an associate professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, originally planned to turn chairs and tables in New York City into bots, but the trash can inevitably won out. “When we shared with them the trash barrel videos that we had done at Stanford, all discussions of the chairs and tables were suddenly off the table,” Ju said in a statement. “It’s New York! Trash is a huge problem!”

Of course, you can’t win over everybody, even if you’re a cute trash can. Some folks found it creepy, raised concerns about surveillance, gave it the middle finger, or even knocked it over. Now, the team hopes to send the trash can out to explore the rest of New York City, hopefully to be met with adoration and not animosity.

“Everyone is sure that their neighborhood behaves very differently,” Ju said. “So, the next thing that we’re hoping to do is a five boroughs trash barrel robot study.”

The post Robot trash cans have survived a New York City field test appeared first on Popular Science.

Giving drones inflatable suits could help them survive crash landings https://www.popsci.com/technology/bird-inspired-collision-drone/ Fri, 21 Apr 2023 17:00:00 +0000 https://www.popsci.com/?p=535966
Perfectly perched. Credit: Arizona State University

Birds once again inspire robots to nimbly navigate the skies and obstacles.

When entering disaster scenarios, robots still have a major downside—their inability to recover when they inevitably crash into things. Scientists, however, have taken a page out of biology’s playbook, as they often do, to create a drone that can bounce back when it meets various obstacles.

Think of a bird landing on a tree branch—to do so, it likely has to collide with a few smaller branches or leaves in the process of touching down. But its joints and soft tissues cushion these bumps along the way, and its feet are built precisely to lock in place without straining a muscle. When a drone opts for a similar route, taking on a bunch of collisions on the way to its destination, the result is a little more dramatic. “They don’t recover; they crash,” Wenlong Zhang, an associate professor and robotics expert at Arizona State University, said in a release.

“We see drones used to assess damage from high in the sky, but they can’t really navigate through collapsed buildings,” Zhang added. “Their rigid frames compromise resilience to collision, so bumping into posts, beams, pipes or cables in a wrecked structure is often catastrophic.” 

Zhang is an author of a recent paper published in Soft Robotics wherein a team of scientists designed and tested a quadrotor drone with an inflatable frame, apparently the first of its kind. The inflatable frame acts almost like a blow-up suit, protecting the drone from any harsh consequences of banging into a wall or another obstacle. It also provides the kind of soft tissue absorption necessary for perching—the team’s next task.

[Related: Watch this bird-like robot make a graceful landing on its perch.]

After studying how birds land and grip branches with their taloned feet, the team developed a fabric-based bistable grasper for the inflatable drone. The grasper has two unpowered “resting states,” meaning it can remain open or closed without using energy, and it reacts to the impact of landing by closing its little feet and gripping hard onto a nearby object.

“It can perch on pretty much anything. Also, the bistable material means it doesn’t need an actuator to provide power to hold its perch. It just closes and stays like that without consuming any energy,” Zhang said in the release. “Then when needed, the gripper can be pneumatically retracted and the drone can just take off.”
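
Those two unpowered resting states are the signature of a bistable mechanism, which can be pictured as a double-well energy landscape: two minima that hold with no energy input, separated by a barrier that the jolt of landing pushes the mechanism across. The sketch below uses a generic textbook potential, with constants chosen for illustration rather than measured from the ASU grasper.

```python
import numpy as np

# Toy double-well energy landscape for a bistable mechanism: two minima
# (open and closed) that hold without power, separated by a barrier.
# E(x) = x^4 - 2x^2 has wells at x = -1 and x = +1 and a barrier at x = 0.
x = np.linspace(-1.5, 1.5, 301)  # abstract grasper coordinate
energy = x**4 - 2 * x**2

i_open = np.argmin(energy[:150])          # minimum on the "open" side
i_closed = 150 + np.argmin(energy[150:])  # minimum on the "closed" side
print(f"open state:   x={x[i_open]:+.2f}, E={energy[i_open]:.2f}")
print(f"closed state: x={x[i_closed]:+.2f}, E={energy[i_closed]:.2f}")
print("barrier at x=0, E=0.00 (landing impact supplies the crossing energy)")
```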

A more resilient type of drone is crucial for search and rescue scenarios where the path forward may be filled with debris, but the authors could also see this kind of creation being useful in monitoring forest fires or even exploring other planets.

The post Giving drones inflatable suits could help them survive crash landings appeared first on Popular Science.

The Terranaut is a new mine-hunting bot designed for beaches https://www.popsci.com/technology/terranaut-robot-mine-clearing/ Fri, 21 Apr 2023 14:25:55 +0000 https://www.popsci.com/?p=535906
Marines during an exercise in Hawaii on April 10, 2023. The Terranaut robot, not pictured, is designed to cope with explosives in these kinds of environments. Credit: Clayton Baker / US Marines

The autonomous robot is intended for the dangerous work of dealing with explosives in areas where Marines would typically tread.

On April 19, Nauticus Robotics announced that its work on the Terranaut, an amphibious machine designed to defeat explosive mines for the Defense Innovation Unit, had cleared its initial phase and was progressing to further development. The machine builds on Nauticus’ previous work with aquatic uncrewed vehicles. It fits into a holistic picture of untethered, autonomous underwater operations, where tools developed for commercial underwater work inform machines specifically built to tackle the special military needs below the ocean’s surface.

Nauticus teased the Terranaut announcement on social media with a picture of tread lines running down a beach and into the ocean.

DIU, or the Defense Innovation Unit, is an organization within the larger Department of Defense designed to pull innovations from the commercial tech world into military use. Rather than reinventing the wheel, it is built to look at wagon wheels it could simply buy for its chariots.

“DIU gets intrigued when you have some commercial-facing technologies that they think they could orient towards a defense mission,” Nauticus CEO Nicolaus Radford tells Popular Science. “A lot of people focus on our big orange robots. But what’s between our robots’ ears is more important.” 

“So DIU is like, all right, you guys have made some commercial progress,” Radford adds. “You’ve got a commercial platform both in software and hardware. Maybe we can modify it a little bit towards some of these other missions that we’re interested in.”

In its announcement, Nauticus emphasized that Terranaut is being developed as an autonomous mine-countermeasure robot that can work on beaches and in surf zones. These are exactly the kinds of areas where Marines train and plan to fight, especially in Pacific island warfare. Terranaut, as promised, will both swim and crawl, driven by an autonomous control system that can receive human direction through acoustic communication.

The Terranaut can navigate on treads and with powerful thrusters, with plans for manipulator arms that can emerge from the body to tackle tasks like disassembling an underwater mine.

The Terranaut robot. Credit: Nauticus Robotics

“It’s able to fly through the water column and then also change its buoyancy in a way that it can get appreciable traction,” says Radford. “Let’s say you’re driving on the sub-sea bed and you encounter a rock. Well, you don’t know how long the rock is, it could take you a while to get around it, right?” The solution in that case would be to go above it. 

Much of the work that informed the creation and design of Terranaut comes from Nauticus’ work on Aquanaut, a 14.5-foot-long submersible robot that can operate at depths of almost 10,000 feet and, in its standard versions, at distances of up to 75 miles. Powered by an electric motor and carrying over 67 kilowatt-hours of battery, the Aquanaut moves at a baseline speed of 3 knots (almost 3.5 mph) underwater and can run on battery power for over four days continuously. But what most distinguishes Aquanaut is its retractable manipulator arms, which fold into its body when not needed, and its ability to operate without direct control through an umbilical wire, unlike many other undersea robots.

The Aquanaut can perceive its environment thanks to sonar, optical sensors in stereo, native 3D point-cloud imagery, and other sensors. This data can be collected at a higher resolution than can be transmitted from deep undersea, so the Aquanaut surfaces or docks to offload higher volumes and densities of data faster.

Like the Aquanaut, the Terranaut does not have an umbilical connecting it to a boat.

Typically, boats have umbilicals connecting them to robots “because you have to have an operator with joysticks looking at HD monitors, being able to drive this thing,” says Radford. “What we said is ‘let’s throw all that out.’ We can create a hybrid machine that doesn’t need an umbilical that can swim really far, but as it turns out, people just don’t want to take pictures. They want to pick something up, drop something off, cut something, plug something in, and we developed a whole new class of subsea machines that allows you to do manipulation underwater without the necessity of an umbilical.”

Removing the umbilical frees up the design for what kind of ships can launch and manage underwater robotics. It also comes with a whole new set of problems, like how to ensure that the robot is performing the tasks asked of it by a human operator, now that the operator is not driving but directing the machine. Communication through water is hard, as radio signals and light signals are both limited in range and efficacy, especially below the ocean’s surface.

Solving these twin problems means turning to onboard autonomy and acoustic controls.

“We have data rates akin to dial up networking in 1987,” says Radford. “You’re not gonna be streaming HD video underwater with a Netflix server, but there are ways in which you can send representative information in the 3D environment around you back to an operator, and then the operator flies the autopilot of the robot around.”
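
Some back-of-the-envelope arithmetic shows why compact scene summaries beat streaming at those rates. The link speed and message sizes below are assumptions keyed to Radford’s dial-up analogy, not Nauticus specifications.

```python
# Why you send compact scene descriptions, not video, over an acoustic link.
# All numbers are assumed for illustration (dial-up-era rate per the analogy).
ACOUSTIC_BPS = 2_400          # assumed acoustic link rate, bits per second
HD_FRAME_BYTES = 250_000      # ~one compressed HD video frame (assumption)
SPARSE_SCENE_BYTES = 2_000    # compact 3D feature summary (assumption)

def seconds_to_send(n_bytes, bps=ACOUSTIC_BPS):
    """Transmission time for a payload at the given link rate."""
    return n_bytes * 8 / bps

print(f"one HD video frame:   {seconds_to_send(HD_FRAME_BYTES) / 60:.1f} minutes")
print(f"sparse scene summary: {seconds_to_send(SPARSE_SCENE_BYTES):.1f} seconds")
```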

That means, in essence, that the robot itself is largely responsible for managing the specifics of its ballast and direction, and following commands transmitted acoustically through the water. In return it sends information back, allowing a human to select actions and behaviors already loaded onto the robot.

Like the Aquanaut before it, the Terranaut will come preloaded with the behaviors needed to navigate its environment and perform the tasks assigned to it. Once the Terranaut rolls through surfy shallows, onto beaches, and into visual range, it will apply those tools, adaptive autonomy and remote human guidance, to taking apart deadly obstacles, like underwater explosives.

“I think this is the beginning of a very vibrant portfolio of aquatic drones that I hope captures the public’s imagination on what’s possible underwater. I think it’s just as fascinating as space, if not more so, because it’s so much more near to us,” said Radford. “You know, five percent of the ocean seabed has been explored on any level. We live on an ocean planet stupidly called Earth.”

The post The Terranaut is a new mine-hunting bot designed for beaches appeared first on Popular Science.

A new robotic seed can wriggle into soil to harvest climate data https://www.popsci.com/technology/seed-robot-soil/ Thu, 20 Apr 2023 20:00:00 +0000 https://www.popsci.com/?p=535681
When tested in a soil sample, the robot was able to shimmy about, adapt its shape to cracks, and burrow into holes in the ground much like the natural seed. Credit: Unsplash

The nature-inspired device could help improve our shoddy communication with sod.

Soil is one of the most crucial, if underrated, elements of daily life—it’s essential for growing the food and resources we rely on, combats drought, protects against flooding, and can sequester carbon dioxide for years to come. But the dirt beneath our feet is constantly under threat from climate change’s rising temperatures and biodiversity loss. And despite how simple we may think soil is, it’s pretty hard to know from the surface what’s really going on deep in the ground.

Scientists in Italy, however, think they may have a robotic solution. Researchers at the Bioinspired Soft Robotics (BSR) Lab, part of the Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) in Genoa, have developed the first 4D-printed seed-inspired soft robot, which they claim can act as a sensor for monitoring pollutants, CO2 levels, temperature, and humidity in soil. They published their findings earlier this year in Advanced Science. The research is part of the EU-funded I-Seed project, aimed at making robots that can detect environmental changes in air and soil.

What they’ve got here is an artificial seed inspired by the structure of a South African geranium, Pelargonium appendiculatum. The seeds of the tuberous, hairy-leafed plant can change shape in response to how humid their environment is. When the time comes for the seeds to leave the plant, they detach and move independently to “penetrate” soil fractures, according to the study. The motion looks almost like crawling and burrowing, and it comes from the seed’s helical shape changing with its surroundings: the curly seeds can find a home for themselves simply by expanding and shrinking with the water content of the air.
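
A toy model makes the mechanism easier to picture: treat the helix’s twist as a simple function of relative humidity, so cycles of damp and dry wind and unwind the seed. The constants and the linear relationship below are invented for illustration, not measured from Pelargonium seeds.

```python
# Toy hygromorph model: the helical twist loosens as humidity rises and
# tightens as it falls; cycling between the two ratchets the seed along.
TWIST_DRY = 5.0  # turns of the helix when fully dry (assumed)
TWIST_WET = 1.0  # turns when saturated (assumed)

def helix_turns(relative_humidity):
    """Linear interpolation between the dry and wet twist states."""
    rh = min(max(relative_humidity, 0.0), 1.0)
    return TWIST_DRY + (TWIST_WET - TWIST_DRY) * rh

for rh in (0.2, 0.5, 0.9):
    print(f"RH={rh:.0%}: {helix_turns(rh):.1f} turns")
```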

[Related: This heat-seeking robot looks and moves like a vine.]

The team at IIT-BSR mimicked these seeds by combining 3D printing and electrospinning, using materials that likewise absorb water and expand when exposed to humidity. Using fused deposition modeling, the authors printed a substrate layer of polycaprolactone, a biodegradable thermoplastic polyester, activated with oxygen plasma to increase its water-attracting abilities. Next, they added electrospun hygroscopic fibers made of a polyethylene oxide shell and a cellulose nanocrystal core.

When tested in a soil sample, the robot was able to shimmy about, adapt its shape to cracks, and burrow into holes in the ground much like the natural seed. Not to mention, it was capable of lifting about 100 times its own weight. First author Luca Cecchini, a PhD student at IIT, said in a statement that the biodegradable and energy-autonomous robots could be used as “wireless, battery-free tools for surface soil exploration and monitoring.”

The first I-Seed created at IIT is inspired by the seed structure of a South African geranium, the Pelargonium appendiculatum. Credit: IIT-Istituto Italiano di Tecnologia

“With this latest research,” Barbara Mazzolai, associate director for robotics of the IIT and coordinator of the I-Seed Project, said in the statement, “we have further proved that it is possible to create innovative solutions that not only have the objective of monitoring the well-being of our planet, but that do so without altering it.”

The post A new robotic seed can wriggle into soil to harvest climate data appeared first on Popular Science.

This robot dog learned a new trick—balancing like a cat https://www.popsci.com/technology/robot-dog-balance-beam/ Wed, 19 Apr 2023 14:00:00 +0000 https://www.popsci.com/?p=535177
Just a step at a time. Credit: Carnegie Mellon University

Without a tail or a bendy spine, no less.

We’ve seen how a quadruped robot dog can dribble a ball, climb walls, run on sand, and open doors with its “paws.” The latest test isn’t that of motion, necessarily, but of balance. This time, researchers at Carnegie Mellon University’s Robotics Institute have found a way to make an off-the-shelf quadruped robot agile and stable enough to walk across a balance beam.

Even for humans, the balance beam is quite a feat to conquer—something that leaves even gymnasts nervous. “It’s the great equalizer,” Michigan women’s gymnastics coach Beverly Plocki told the Chicago Tribune in 2016. “No other event requires the same mental focus. You stumble on the floor, it’s a minor deduction. The beam is the event of perfection. No room for error.”

[Related: A new tail accessory propels this robot dog across streams.]

But a robot dog’s legs aren’t exactly coordinated. With three feet touching the ground, the robot is generally fine; reduce that to one or two, and it’s in trouble. “With current control methods, a quadruped robot’s body and legs are decoupled and don’t speak to one another to coordinate their movements,” Zachary Manchester, an assistant professor in the Robotics Institute and head of the Robotic Exploration Lab, said in a statement. “So how can we improve their balance?”

How CMU’s scientists managed to get a robot to daintily scale a narrow beam—the first time this has been done, the researchers claim—is by leveraging hardware often used on spacecraft: a reaction wheel actuator. This system helps the robot balance wherever its feet are, which is pretty helpful given the robot lacks the tail or flexible spine that helps actual four-legged animals catch their balance.

[Related: This bumblebee-inspired bot can bounce back after injuring a wing.]

“You basically have a big flywheel with a motor attached,” said Manchester. “If you spin the heavy flywheel one way, it makes the satellite spin the other way. Now take that and put it on the body of a quadruped robot.”
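
The principle at work is conservation of angular momentum: torque a flywheel one way, and the body it is mounted on reacts the other way. Here is a minimal sketch of that relation, with moments of inertia invented for illustration rather than taken from the actual robot.

```python
# Conservation of angular momentum for a body-mounted flywheel, in the
# spirit of Manchester's description; numbers are illustrative, not the
# Unitree A1's real parameters.
I_BODY = 0.10    # body moment of inertia about the roll axis, kg*m^2 (assumed)
I_WHEEL = 0.002  # reaction wheel moment of inertia, kg*m^2 (assumed)

def body_rate_change(wheel_rate_change_rad_s):
    """Spin the wheel one way and the body reacts the other:
    I_body * dw_body = -I_wheel * dw_wheel."""
    return -I_WHEEL * wheel_rate_change_rad_s / I_BODY

# Spinning the wheel up by 100 rad/s nudges the body by about -2 rad/s,
# enough to correct a lean without moving the feet.
print(f"{body_rate_change(100.0):+.2f} rad/s body reaction")
```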

The team mounted two reaction wheel actuators on the pitch and roll axes of a commercial Unitree A1 robot, making it so the little bot could balance itself no matter where its feet were. Then they ran two dexterity tests, the first dropping the robot upside down from about half a meter in the air. Like a cat, it was able to flip itself over and land on its feet.

Second came the balance beam test, this time having the robot walk along a six-centimeter-wide beam, which the bot did with ballerina-like grace. This could come in handy in the future, not just for entertainment value but for maneuvering tricky scenarios such as search and rescue, a frequent goal across all sorts of robots. The team will show off their latest endeavor at the 2023 International Conference on Robotics and Automation this summer in London.

The post This robot dog learned a new trick—balancing like a cat appeared first on Popular Science.

Meet xenobots, tiny machines made out of living parts https://www.popsci.com/technology/xenobots/ Mon, 17 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=534352
A xenobot, or a living robot, in culture, under a microscope.
Xenobots can work together to gather particulate matter into a pile. Douglas Blackiston and Sam Kriegman

The starting ingredient for these bio-robots: frog cells.

You may or may not have heard of xenobots, a kind of Frankenfrog creation that involves researchers turning frog embryo cells into tiny bio-machines that can move around, push or carry objects, and work together. These ephemeral beings were first made by a team of scientists from Tufts University and the University of Vermont in 2020. 

The goal behind building these “bots” was to understand how cells communicate with one another. Here’s a breakdown of the hard facts behind how xenobots actually work, and what they are currently used for. 

What are xenobots?

A “living robot” can sound like a scary sci-fi term, but they are not anything like the sentient androids you may have seen on screen.

“At the most basic level, this is a platform or way to build with cells and tissues, the way we can build robots out of mechanical components,” says Douglas Blackiston, a senior scientist at Tufts University. “You can almost think of it as Legos, where you can combine different Legos together, and with the same set of blocks you can make a bunch of different things.” 

Xenobots are tiny. Here they are against a dollar bill for size. Credit: Douglas Blackiston and Sam Kriegman

But why would someone want to build robots out of living components instead of traditional materials, like metal and plastic? One advantage is that having a bio-robot of sorts means that it is biodegradable. In environmental applications, that means if the robot breaks, it won’t contaminate the environment with garbage like metal, batteries, or plastic. Researchers can also program xenobots to fall apart naturally at the end of their lives. 

How do you make a xenobot?

The building blocks for xenobots come from the eggs laid by the female African clawed frog, which goes by the scientific name Xenopus laevis.

Just like with a traditional robot, they need other essential components: a power source, a motor or actuator for movement, and sensors. But with xenobots, all of these components are biological.

A xenobot’s energy comes from the yolk that’s a part of all amphibian eggs, which can power these machines for about two weeks with no added food. To get them to move, scientists can add biological “motors” like muscle or cardiac tissue. They can arrange the motors in different configurations to get the xenobots to move in certain directions or with a certain speed.  

“We use cardiac tissue because cardiac cells pulse at a regular rate, and that gives you sort of an inchworm type of movement if you build with it,” says Blackiston. “The other types of movement we get are from cilia. These are small hair-like structures that beat on the outside of different types of tissues. And this is a type of movement that dominates the microscopic world. If you take some pond water and look, most of what you see will move around with cilia.” 

Swimming xenobots with cilia covering their surface. Credit: Douglas Blackiston and Sam Kriegman

Scientists can also add components like optogenetic muscle tissues or chemical receptors to allow these biobots to respond to light or other stimuli in their environment. Depending on how the xenobots are programmed, they can autonomously navigate through their surroundings or researchers can add stimulus to “drive” them around. 

“There’s also a number of photosynthetic algae that have light sensors that directly hook onto the motors, and that allows them to swim towards sunlight,” says Blackiston. “There’s been a lot of work on the genetic level to modify these to respond to different types of chemicals or different types of light sources and then to tie them to specific motors.”

[Related: Inside the lab that’s growing mushroom computers]

Even in their primitive form, xenobots can still convey some type of memory, or relay information back to the researchers about where they went and what they did. “You can pretty easily hook activation of these different sensors into fluorescent molecules that either turn on or change color when they’re activated,” Blackiston explains. For example, when the bots swim through a blue light, they might change color from green to red permanently. As they move through mazes with blue lights in certain parts of it, they will glow different colors depending on the choices they’ve made in the maze. The researcher can walk away while the maze-solving is in progress, and still be in the know about how the xenobot navigated through it.  

They can also, for example, release a compound that changes the color of the water if they sense something.  

These sensors make the xenobot easy to manage. In theory, scientists can make a system in which the xenobots are drawn to a certain wavelength of light. They could then shine this at an area in the water to collect all of the bots. And the ones that slip through can still harmlessly break down at the end of their life. 

A xenobot simulator

Blackiston, along with collaborators at Northwestern and the University of Vermont, is using an AI simulator they built to design different types of xenobots. “It looks sort of like Minecraft, and you can simulate cells in a physics environment and they will behave like cells in the real world,” he says. “The red ones are muscle cells, blue ones are skin cells, and green ones are other cells. You can give the computer a goal, like: ‘use 5,000 cells and build me a xenobot that will walk in a straight line or pick something up,’ and it will try hundreds of millions of combinations on a supercomputer and return to you blueprints that it thinks will be extremely performant.”

Most of the xenobots he’s created have come from blueprints that have been produced by this AI. He says this speeds up a process that would have taken him thousands of years otherwise. And it’s fairly accurate as well, although there is a bit of back and forth between playing with the simulator and modeling the real-world biology. 
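
In spirit, that design loop is an evolutionary search: propose a blueprint of cell types, score it in simulation, and keep whatever improves. The toy version below uses a tiny grid and a placeholder fitness function standing in for the lab’s physics-based simulator.

```python
import random

# Toy blueprint search: mutate grids of cell types and keep improvements.
# The fitness function is a placeholder, not the lab's physics engine.
CELL_TYPES = ["skin", "muscle", "cilia"]
GRID = 16  # a 4x4 blueprint, for illustration

def random_blueprint():
    return [random.choice(CELL_TYPES) for _ in range(GRID)]

def mutate(blueprint):
    child = blueprint[:]
    child[random.randrange(GRID)] = random.choice(CELL_TYPES)
    return child

def fitness(blueprint):
    """Placeholder objective: reward muscle concentrated on one side of
    the body, a crude proxy for 'walks in a straight line'."""
    left = blueprint[:GRID // 2].count("muscle")
    right = blueprint[GRID // 2:].count("muscle")
    return left - right

best = random_blueprint()
for _ in range(5_000):
    challenger = mutate(best)
    if fitness(challenger) > fitness(best):
        best = challenger  # keep the better design, discard the rest

print("best score:", fitness(best))
```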

Xenobots of different shapes crafted using computer-simulated blueprints. Credit: Douglas Blackiston and Sam Kriegman

The xenobots that Blackiston and his colleagues use are not genetically modified. “When we see the xenobots doing kinematic self-replication and making copies of themselves, we didn’t program that in. We didn’t have to design a circuit that tells the cells how to do kinematic self replication,” says Michael Levin, a professor of biology at Tufts. “We triggered something where they learned to do this, and we’re taking advantage of the native problem-solving capacity of cells by giving it the right stimuli.” 

What can xenobots help us do?

Xenobots are not just a blob of cells congealing together—they work like an ecosystem and can be used as tools to explore new spaces, in some cases literally, like searching for cadmium contamination in water. 

“We’re jamming together cells in configurations that aren’t natural. Sometimes it works, sometimes the cells don’t cooperate,” says Blackiston. “We’ve learned about a lot of interesting disease models.”

For example, with one model of xenobot, they’ve been able to examine how cilia in lung cells may work to push particles out of the airway or spread mucus correctly, and see that if the cilia don’t work as intended, defects can arise in the system.

The deeper application is using these biobots to understand collective intelligence, says Levin. That could be a groundbreaking discovery for the space of regenerative medicine. 

“For example, cells are not hardwired to do these specific things. They can adapt to changes and form different configurations,” he adds. “Once we figure out how cells decide together what structures they’re going to form, we can take advantages of those computations and build new organs, regenerate after injury, reprogram tumors—all of that comes from using these biobots as a way to understand how collective decision-making works.” 

The post Meet xenobots, tiny machines made out of living parts appeared first on Popular Science.

Cyborg cockroaches could one day scurry to your rescue https://www.popsci.com/technology/cockroach-cyborg/ Thu, 13 Apr 2023 20:00:00 +0000 https://www.popsci.com/?p=533937
Madagascar hissing cockroach balanced on human finger against green backdrop
Imagine this, but with a tiny computer strapped to its back. Deposit Photos

Here's how hacking bug brains could one day help save lives.

Imagine yourself trapped in a building’s rubble following an earthquake. It’s a terrifying prospect, especially when time is of the essence for search and rescue operations. Now imagine one of your rescuers turns out to be a cyborg cockroach.

Regardless of how you feel about insects, a team of scientists at Osaka University in Japan apparently believe these resilient little bugs can come in handy in times of disaster. According to the researchers’ paper recently published within the journal Cyborg and Bionic Systems, society is closer than it’s ever been to deploying cybernetically augmented bugs to aid in real world scenarios such as natural disasters and extreme environment explorations. And everyone owes it all to their legion of semi-controllable cyborg Madagascar hissing cockroaches.

[Related: Spider robots could soon be swarming Japan’s aging sewer systems.]

Insects are increasingly inspiring robotic advancements, but biomimicry still often proves immensely complex. As macabre as it may seem, researchers have found that augmenting six-legged creatures, instead of mechanically replicating them, can offer simpler, more cost-effective alternatives. In this most recent example, scientists implanted tiny stimulating electrodes into the cockroaches’ brains and peripheral nervous systems, then connected them to a machine learning program. The system was trained to recognize the insects’ locomotive states—if a cockroach paused at an obstacle or hunkered down in a dark, cold environment (as cockroaches are evolutionarily prone to do), the electrodes directed it to keep moving along an alternative route. To prevent excess fatigue, the researchers even fine-tuned the stimulating currents to be as minimal as possible.
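
The control loop described amounts to something simple: classify the insect’s locomotion from sensor data, and fire a brief, minimal pulse only when it stalls. The sketch below invents the readings, thresholds, and function names; it is not the Osaka team’s code.

```python
import random

# Sketch of a stall-then-stimulate control loop, with invented thresholds.
STALL_STEPS = 3  # consecutive "stopped" readings before stimulating
PULSE = 0.2      # normalized stimulation strength, kept minimal

def classify_state(motion_reading):
    """Stand-in for the trained classifier: 'walking' or 'stopped'."""
    return "walking" if motion_reading > 0.1 else "stopped"

stopped_streak = 0
for step in range(20):
    reading = random.random() * 0.4  # fake motion-sensor magnitude
    if classify_state(reading) == "stopped":
        stopped_streak += 1
    else:
        stopped_streak = 0
    if stopped_streak >= STALL_STEPS:
        print(f"step {step}: stalled -> stimulate (pulse={PULSE})")
        stopped_streak = 0  # let the insect respond before pulsing again
```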

Cyborg cockroaches could help save lives. Credit: Osaka University

Importantly, the setup didn’t reduce the insects to zombie cockroaches, but instead simply influenced their movement decisions.  “We don’t have to control the cyborg like controlling a robot. They can have some extent of autonomy, which is the basis of their agile locomotion,” Keisuke Morishima, a roboticist and one of the study’s authors, said in a statement. “For example, in a rescue scenario, we only need to stimulate the cockroach to turn its direction when it’s walking the wrong way or move when it stops unexpectedly.”

[Related: This bumblebee-inspired bot can bounce back after injuring a wing.]

While the scientists can’t yet control their cockroaches’ exact directions this way, their paper concludes the setup “successfully increased [their] average search rate and traveled distance up to 68 and 70 percent, respectively, while the stop time was reduced by 78 percent.” Going forward, they hope to improve these accuracy rates, as well as develop ways to steer their enhanced cockroaches intentionally. Once that’s achieved, you can start worrying about the zombie cyborg cockroach invasion.

The post Cyborg cockroaches could one day scurry to your rescue appeared first on Popular Science.
