
Mars 360: 1.2 billion pixel panorama of Mars — Sol 3060 (360video 8K)

1.2 billion pixel panorama of Mars captured by the Curiosity rover on Sol 3060 (March 15, 2021)

🎬 360VR video 8K: 🔎 360VR photo 85K: http://bit.ly/sol3060

NASA’s Mars Exploration Program. Source images credit: NASA / JPL-Caltech / MSSS. Stitching and retouching: Andrew Bodrov / 360pano.eu.

Music in video: “Gates Of Orion” by Dreamstate Logic (http://www.dreamstatelogic.com)

#Mars360 #Video360 #360VR #Mars #Sol3060 #Gigapixel


Researchers’ algorithm designs soft robots that sense

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”
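The paper’s exact architecture isn’t described here, but the core idea — learning where to put sensors while learning the task itself — can be sketched as a differentiable gate over candidate sensor sites. The PyTorch snippet below is a minimal illustration under that assumption; the class name, layer sizes, and the sigmoid-gate-plus-L1 sparsity scheme are illustrative choices, not the authors’ method.

```python
import torch
import torch.nn as nn

class SensorPlacementNet(nn.Module):
    """Jointly learns a task predictor and a sparse sensor-selection mask.

    Each candidate sensor location gets a learnable score; a sigmoid gate
    scales that sensor's reading, and an L1-style penalty pushes most gates
    toward zero, leaving only the most informative placements.
    """
    def __init__(self, n_candidates: int, n_outputs: int):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(n_candidates))  # placement logits
        self.task_head = nn.Sequential(
            nn.Linear(n_candidates, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, readings: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.scores)        # soft on/off per sensor
        return self.task_head(readings * gate)   # mask readings, predict task

model = SensorPlacementNet(n_candidates=256, n_outputs=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
readings = torch.randn(32, 256)   # placeholder strain readings per candidate site
target = torch.randn(32, 3)       # e.g., a body pose the robot must estimate

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(readings), target)
    loss = loss + 1e-3 * torch.sigmoid(model.scores).sum()  # sparsity penalty
    loss.backward()
    opt.step()

# Candidate sites whose gates stay open after training are the suggested placements.
keep = (torch.sigmoid(model.scores) > 0.5).nonzero().squeeze(-1)
```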

The research will be presented at April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Tech companies predict the (economic) future

Welcome back to The TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s broadly based on the daily column that appears on Extra Crunch, but free, and made for your weekend reading. Want it in your inbox every Saturday morning? Sign up here.

Earnings season is coming to a close, with public tech companies wrapping up their Q4 and 2020 disclosures. We don’t care too much about the bigger players’ results here at TechCrunch, but smaller tech companies we knew when they were wee startups can provide startup-related data points worth digesting. So, each quarter The Exchange spends time chatting with a host of CEOs and CFOs, trying to figure out what’s going on so that we can relay the information to private companies.

Sometimes it’s useful, as our chat with recent fintech IPO Upstart proved after we got to noodle with the company about rising acceptance of AI in the conservative banking industry.

Identifying Cells to Better Understand Healthy and Diseased Behavior

Summary: Using a range of tools from machine learning to graphical models, researchers have discovered a new way to identify cells and explore the mechanisms behind neurodegenerative diseases.

Source: Georgia Institute of Technology

In researching the causes and potential treatments for degenerative conditions such as Alzheimer’s or Parkinson’s disease, neuroscientists frequently struggle to accurately identify cells needed to understand brain activity that gives rise to behavior changes such as declining memory or impaired balance and tremors.
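The article doesn’t spell out the method, but as a rough illustration of the “machine learning to graphical models” toolkit it mentions, recorded cells can be grouped with a probabilistic mixture model over activity-derived features. Everything below — the feature set, the number of components — is a hypothetical sketch, not the Georgia Tech pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical feature matrix: one row per recorded cell, columns are
# activity-derived features (e.g., firing rate, burstiness, waveform width).
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 3))

# Fit a probabilistic mixture model: each component is a putative cell type,
# and each cell gets a soft assignment rather than a hard label.
gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
labels = gmm.predict(features)             # most likely type per cell
posteriors = gmm.predict_proba(features)   # confidence of each assignment
```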

Deep learning model advances how robots can independently grasp objects

Robots still cannot perform everyday manipulation tasks, such as grasping or rearranging objects, with the same dexterity as humans. Brazilian scientists have now taken this research a step forward by developing a new system that uses deep learning algorithms to improve a robot’s ability to detect on its own how to grasp an object, a problem known as autonomous robotic grasp detection.

In a paper published Feb. 24 in Robotics and Autonomous Systems, a team of engineers from the University of São Paulo addressed existing problems with the visual perception phase that occurs when a robot grasps an object. They created a model using deep learning neural networks that decreased the time a robot needs to process visual data, perceive an object’s location and successfully grasp it.
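The São Paulo team’s network isn’t reproduced here, but a common formulation of grasp detection, sketched below in PyTorch, regresses a grasp rectangle (position, orientation, gripper width) directly from a camera image with a small convolutional network. All layer sizes, and the sin/cos angle encoding, are assumptions for illustration, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Minimal grasp-rectangle regressor: image in, (x, y, sin2θ, cos2θ, width) out.

    Encoding the angle as sin/cos of 2θ makes antipodal grasps
    (θ and θ + 180°) map to the same regression target.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),       # fixed-size feature map
        )
        self.head = nn.Linear(32 * 4 * 4, 5)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        x = self.features(img)
        return self.head(x.flatten(1))     # one grasp rectangle per image

net = GraspNet()
batch = torch.randn(8, 3, 224, 224)        # placeholder camera frames
grasps = net(batch)                        # shape (8, 5)
```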

Deep learning is a subset of machine learning in which computer algorithms learn from data and improve automatically through experience. Inspired by the structure and function of the human brain, deep learning uses a multilayered structure of algorithms called neural networks, which operate much like the brain in identifying patterns and classifying different types of information. Deep learning models are often based on convolutional neural networks, which specialize in analyzing visual imagery.
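As a concrete example of that multilayered structure, here is a minimal convolutional classifier in PyTorch: convolution layers pick out local visual patterns such as edges and textures, pooling layers downsample the image, and a final linear layer turns the extracted features into class scores. The layer sizes and the 28×28 grayscale input are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# A tiny convolutional classifier: conv layers extract local visual
# patterns, pooling shrinks the feature maps, and a linear layer maps
# the learned features to 10 class scores.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),             # 10 class scores
)

logits = cnn(torch.randn(1, 1, 28, 28))    # e.g., one grayscale digit image
```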