And it can balance perfectly on power lines.

Scientists at UC Berkeley developed an experimental drone called the Midair Reconfigurable Quadcopter. As the name implies, the drone can shape-shift in midair, a report from NewAtlas reveals.

The team, from UC Berkeley’s High Performance Robotics Laboratory (HiPeRLab), used passive, unactuated hinges, meaning that no extra battery-sapping actuators or sensors are required. Instead, each hinge folds inwards when its rotor stops or reverses, and outwards when the rotor is powered up.

The quadcopter is able to fold any two of its arms using this method and still maintain stable flight. That means the drone can shift into a number of different shapes. The researchers say that it could, for example, squeeze through a narrow opening, and its folded-down arms can also be used to grasp objects.
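The passive-hinge behavior described above can be sketched as a simple rule (a hypothetical illustration, not HiPeRLab's actual control code): each hinge's state simply follows the sign of its rotor command.

```python
def hinge_state(rotor_command: float) -> str:
    """Passive hinge: folds inwards when its rotor stops or reverses,
    swings outwards when the rotor is powered up (illustrative only)."""
    if rotor_command > 0:
        return "extended"  # rotor thrust pushes the arm outward
    return "folded"        # no thrust (or reverse thrust) lets the arm fold in

# Arbitrary per-rotor commands: two arms powered, two stopped/reversed,
# matching the "fold any two arms" configuration described in the article.
commands = [1.0, 0.0, 1.0, -0.5]
states = [hinge_state(c) for c in commands]
```

The point of the design is that the folding logic lives entirely in the hinge mechanics, so the flight controller only ever commands rotor speeds.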

Musicians have been experimenting with artificial intelligence for a few years now. For example, in 2019, an AI trained on Schubert’s music completed his Unfinished Symphony, and last October the Beethoven Orchestra in Bonn performed an AI-generated version of Beethoven’s last symphony.

But what are the limits of AI music? Can an AI really be considered creative? And is it possible for an AI to improvise with musicians live on stage?

To find out, researchers from France, the USA and Japan are collaborating on a study to explore the role of AI in creativity, using a combination of machine learning and social science research. The project recently received funding from the European Research Council.

One part of the study involves teaching an AI how to improvise, and finding out whether it can be used, for example, in live performance with (human) musicians.

A common concern surrounding automation in recent years is that it will result in wide-scale job losses as work previously done by people is taken over by technology. In practice, the evidence doesn’t really support this narrative; indeed, companies that invest in technology often end up employing more people as their fortunes improve on the back of that investment.

The leadership team of the fintech company Kashat highlights the reality of investing in technology. They reveal that microfinance has traditionally been highly labor intensive, with many of its skills unchanged from those used in the sector for years. With the introduction of AI, new skills have entered the underwriting process to enable serving customers at scale, while allowing employees to expand their skillsets and become even more valuable in the future.

The impact of this distinction is clearly visible in the growth rates across the sector, with those more tech-enabled firms growing far faster, and therefore employing more people, than their more traditional peers.

Last week saw an announcement that Optimus Ride, an autonomous shuttle company in Boston, was purchased in an acqui-hire by Magna, the Ontario-based Tier One automotive supplier. In an acqui-hire, the company has generally failed, but a buyer pays to pick up the assets and to hire the team, which took time to build. Usually the price is only enough to reward the preferred investors, while the team gets options in their new employer.

Optimus Ride, which evolved out of MIT, built a shuttle on top of the GEM six-seater electric platform, adding its own sensors and autonomy tools.

Also announced as shutting its doors was Local Motors, maker of the Olli shuttle. Local Motors began with a focus on 3D printing to make smaller-volume vehicles. They started with the Rally Fighter, a vehicle that was crowd-designed through contests. Over time, founder Jay Rogers came to believe that 3D printing could bring a vehicle to production faster and at lower cost than conventional methods, and the company entered the shuttle market, working with various partners to make its vehicles autonomous. Recently, we reported how an Olli shuttle in Whitby, Ontario had a crash resulting in serious injuries. Early reports suggested it was in autonomous mode, but it was later revealed to be under manual control at the time. That made it mostly a non-story, but the real story of Olli did not go well, either.

Kawasaki has shoehorned the supercharged 1,000cc engine from its wild H2R hyperbike into a heavy-lift autonomous cargo helicopter, and has now demonstrated a robotic system for loading and unloading it without exposing humans to those big blades.

The K-Racer X1 is a beast of a drone, roughly the size of a small car. It rises vertically on a helicopter-style top rotor, but where there’s normally a tail rotor to balance out torque, this machine uses two forward-facing props mounted at the end of stubby wings. These props double as forward propulsion, with the wings providing some lift.

Kawasaki has yet to specify how fast this thing will fly, but we doubt it’ll be too quick; assuming that top rotor keeps spinning in cruise flight, top speed will be limited by retreating blade stall, which causes asymmetric lift.
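The asymmetric-lift limit mentioned above comes from simple geometry: the advancing blade sees rotor tip speed plus forward airspeed, while the retreating blade sees tip speed minus forward airspeed. A back-of-the-envelope sketch (the numbers are illustrative, not Kawasaki's specs) makes the asymmetry concrete:

```python
def blade_tip_airspeeds(tip_speed: float, forward_speed: float):
    """Relative airspeed at the blade tip on each side of the rotor disc.
    As forward speed grows, the retreating side loses airspeed (and lift),
    eventually stalling -- the classic helicopter speed limit."""
    advancing = tip_speed + forward_speed
    retreating = tip_speed - forward_speed
    return advancing, retreating

# Illustrative values: 200 m/s rotor tip speed, 60 m/s forward flight.
adv, ret = blade_tip_airspeeds(200.0, 60.0)
# The advancing tip sees 260 m/s of airflow, the retreating tip only 140 m/s.
```

This is why pure-helicopter configurations top out well below fixed-wing speeds, and why the K-Racer's stub wings and forward props help: they offload lift and thrust from the main rotor at cruise.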

Poor Artificial Intelligence (AI). For years, it has had to sit there (like a dormant Skynet) listening to its existence being debated, without getting to have a say. A recent debate held at the University of Oxford tried to put that right by including an AI participant in a debate on the topic of whether AI can ever be ethical.

The debate involved human participants, as well as the Megatron Transformer, an AI created by the Applied Deep Learning Research team at computer-chip maker Nvidia. The Megatron has been trained on a dataset called The Pile, which includes the whole of Wikipedia, 63 million English news articles, and 38 gigabytes of Reddit conversations — more than enough to break the mind of any human forced to do likewise.

“In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime,” Oxford’s Professor Andrew Stephen wrote in a piece on the debate published in The Conversation. “After such extensive research, it forms its own views.”

This is lecture 3 of course 6.S094: Deep Learning for Self-Driving Cars taught in Winter 2017. This lecture introduces computer vision, convolutional neural networks, and end-to-end learning of the driving task.

INFO:
Slides: http://bit.ly/2HdXYvf
Website: https://deeplearning.mit.edu
GitHub: https://github.com/lexfridman/mit-deep-learning
Playlist: https://goo.gl/SLCb1y

Links to individual lecture videos for the course:

Lecture 1: Introduction to Deep Learning and Self-Driving Cars.

SEOUL — Using a high-performance artificial intelligence (AI) chip, South Korean researchers have built a system that accelerates the process of learning from data and producing results. Capable of performing five thousand trillion operations per second, the system is well suited to autonomous vehicles and AI servers because its chipset is about the size of a coin.