
Artificial intelligence controls robotic arm to pack boxes and cut costs

Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.

“We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software,” said the study’s senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.

Bekris, Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, formed a team to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception, and robust motion planning.
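The article doesn't describe the team's actual packing algorithm, but the underlying optimization problem is a classic one. As a toy illustration only (not the Rutgers method, which handles 3D geometry, perception, and arm motion), here is first-fit decreasing, a standard greedy heuristic for one-dimensional bin packing:

```python
# First-fit decreasing: sort items largest-first, then place each item
# into the first bin with room, opening a new bin only when necessary.
# Illustrative of packing heuristics in general, not the study's method.

def first_fit_decreasing(item_sizes, bin_capacity):
    """Pack items into bins greedily; returns a list of bins (lists of sizes)."""
    bins = []
    for size in sorted(item_sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin fits; open a new one
    return bins

boxes = [4, 8, 1, 4, 2, 1]
print(len(first_fit_decreasing(boxes, 10)))  # 2 bins suffice here
```

Sorting largest-first matters: big items are hardest to place, so committing them early leaves the small items to fill the gaps.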

Researchers grow active mini-brain-networks

Cerebral organoids are artificially grown, 3D tissue cultures that resemble the human brain. Now, researchers from Japan report functional neural networks derived from these organoids in a study publishing June 27 in the journal Stem Cell Reports. Although the organoids aren’t actually “thinking,” the researchers’ new tool—which detects neural activity using organoids—could provide a method for understanding human brain function.

“Because they can mimic cerebral development, cerebral organoids can be used as a substitute for the human brain to study complex developmental and neurological disorders,” says corresponding author Jun Takahashi, a professor at Kyoto University.

However, these studies are challenging, because current cerebral organoids lack desirable supporting structures, such as blood vessels and surrounding tissues, Takahashi says. Since researchers have a limited ability to assess the organoids’ neural activities, it has also been difficult to comprehensively evaluate the function of neuronal networks.

The Rise of a New Generation of AI Avatars

I recently discovered it’s possible for someone in their 20s to feel old: just mention Microsoft’s Clippy to anyone born after the late ’90s. Weirdly, there is an entire generation of people who never experienced that dancing, wide-eyed paperclip interrupting their work on a Word doc.

For readers who never knew him, Clippy was an interactive virtual assistant that took the form of an animated paperclip designed to guide users through Microsoft Word. As an iconic symbol of its decade, Clippy was also famously terrible. The worldwide consensus was that Clippy was annoying and intrusive, and Time magazine even named it among the 50 worst inventions of all time (squeezed between ‘New Coke’ and Agent Orange; not a fun list).

Though Clippy was intended to help users navigate their software lives, it may have been 20 or so years ahead of its time.

What Could Possibly Be Cooler Than RoboBee? RoboBee X-Wing

They used to call it RoboBee—a flying machine half the size of a paperclip that could flap its pair of wings 120 times a second. It was always tethered to a power source, limiting its freedom. Now, though, RoboBee becomes RoboBee X-Wing, as Harvard researchers have added solar cells and an extra pair of wings, freeing the robot to blast off to a galaxy far, far away. Or at least partway across the room, as it can sustain flight for only half a second, and only indoors.

But hey, baby steps. The teeniest of quadrotors measure a few inches across and weigh a third of an ounce. RoboBee X-Wing is about the same size as those untethered fliers, but weighs a hundredth of an ounce, which earns it the distinction of being the lightest aerial vehicle to manage sustained untethered flight. One day that could make it ideal for navigating tight, sensitive spaces in a galaxy very, very near.


This camera app uses AI to erase people from your photographs

Bye Bye Camera is an iOS app built for the “post-human world,” says Damjanski, a mononymous artist based in New York City who helped create the software. Why post-human? Because it uses AI to remove people from images and paint over their absence.

“One joke we always make about it is: ‘finally, you can take a selfie without yourself,’” Damjanski tells The Verge.

The app costs $2.99 from the App Store, and, fair warning here, it’s not very good — or at least, it’s not flawless. The app is slow and removes people with a great deal of mess, leaving behind a smear of pixels like an AI hit man sending a message. If you’re looking to edit out political opponents from your Instagram, you’d be better off using Photoshop. But if you want to mess around with machine learning magic, Bye Bye Camera is good fun.
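That “smear of pixels” is what crude inpainting looks like. As a toy sketch of the remove-and-paint-over idea (the real app uses neural segmentation and inpainting models; here the person mask is simply assumed to be given), masked pixels can be filled with the average of their unmasked neighbors:

```python
# Toy inpainting: replace each masked ("person") pixel with the mean of
# its unmasked 8-neighbors. Crude neighbor-averaging like this is exactly
# why naive removal leaves smeary artifacts on textured backgrounds.

def inpaint(image, mask):
    """image: 2-D list of grayscale values; mask: True where a person was."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                neighbors = [image[ny][nx]
                             for ny in (y - 1, y, y + 1)
                             for nx in (x - 1, x, x + 1)
                             if 0 <= ny < h and 0 <= nx < w
                             and not mask[ny][nx]]
                out[y][x] = sum(neighbors) // len(neighbors) if neighbors else 0
    return out

img = [[10, 10, 10],
       [10, 99, 10],
       [10, 10, 10]]
msk = [[False, False, False],
       [False, True,  False],
       [False, False, False]]
print(inpaint(img, msk)[1][1])  # 10 -- the "person" pixel blends into the background
```

On a flat background this works perfectly; on anything with texture or edges, the averaging smears detail, which matches the article's description of the app's output.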

PizzaGAN gets the picture on how to make a pizza

Is nothing sacred? Who would dare to even attempt to talk about a machine-learning experiment that results in the perfect (gasp) pizza? It is difficult to contemplate, but a research quintet did not shy away from trying, and they worked to teach a machine how to make a great pie.

Say hello to PizzaGAN, a compositional layer-based generative model designed to mirror the step-by-step procedure of pizza-making.

Their goal was to teach the machine by building a generative model that mirrors an ordered set of instructions. How they proceeded: “Each operator is designed as a Generative Adversarial Network (GAN). Given only weak image-level supervision, the operators are trained to generate a visual layer that needs to be added to or removed from the existing image. The proposed model is able to decompose an image into an ordered sequence of layers by applying sequentially in the right order the corresponding removing modules.”
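The compositional idea in that quote can be sketched without any neural network. In the paper each add/remove operator is a trained GAN acting on image pixels; in the toy sketch below (an assumption-laden simplification, not the authors' model), layers are just labels on a stack, which is enough to show how paired operators compose and invert each other:

```python
# Toy version of PizzaGAN's operator pairs: an "add" operator puts a
# topping layer on the stack, and its "remove" counterpart undoes it.
# In the real model both operators are GANs generating visual layers.

def add_layer(pizza, topping):
    """Apply the adding operator: composite a new layer on top."""
    return pizza + [topping]

def remove_layer(pizza, topping):
    """Apply the removing operator: strip the topping if it is on top."""
    return pizza[:-1] if pizza and pizza[-1] == topping else pizza

pizza = ["dough"]
for step in ["sauce", "cheese", "pepperoni"]:
    pizza = add_layer(pizza, step)

print(pizza)                             # ['dough', 'sauce', 'cheese', 'pepperoni']
print(remove_layer(pizza, "pepperoni"))  # ['dough', 'sauce', 'cheese']
```

Applying the removing operators in reverse order peels the stack back down to the base, which is the decomposition-into-ordered-layers behavior the quoted passage describes.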