During a conference call following Tesla’s Q1 2020 results, CEO Elon Musk and CFO Zachary Kirkhorn said that Tesla plans to offer the package as a subscription service by the end of the year.
Russia continues its pursuit of Killer Robots. Battlefield deployment can be expected soon. Civilian deaths caused by the erroneous decisions of a robot are imminent.
Ban Killer Robots!
While some Russian robots have underperformed expectations in combat, the Ministry of Defence is working on a new generation of combat machines for training and possible future use. At the center of this design is the Marker UGV, or uncrewed ground vehicle. Resembling a miniature tank with treads and turrets, the Marker is as much a test bed as it is a machine expected to see battle.
“The Ministry of Defence is discussing the eventual use of robotic swarms in combat, and Marker is definitely the platform to test that out,” says Samuel Bendett, an adjunct senior fellow at CNAS. “As envisioned, it will be able to launch swarms of UAVs or loitering munitions, making it a truly versatile robotic platform.”
Researchers at Technische Universität München in Germany have recently developed an electronic skin that could help to reproduce the human sense of touch in robots. This e-skin, presented in a paper published in MDPI’s Sensors journal, requires far less computational power than other existing e-skins and can thus be applied to larger portions of a robot’s body.
“Our main motivation for developing the e-skin stems from nature and is centered on the question of how we humans interact with our surrounding environment,” Florian Bergner, one of the researchers who carried out the study, told TechXplore. “While humans predominantly depend on vision, our sense of touch is important as soon as contacts are involved in interactions. We believe that giving robots a sense of touch can extend the range of interactions between robots and humans—making robots more collaborative, safe and effective.”
Bergner and other researchers led by Prof. Gordon Cheng have been developing e-skins for approximately ten years now. Initially, they tried to realize e-skin systems with multi-modal sensing capabilities resembling those of human skin. In other words, they tried to create an artificial skin that could sense light touch, pressure, temperature, and vibrations, while effectively distributing its sensing across different places where tactile interactions occurred.
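To see why cutting per-sample computation matters at skin scale, here is a minimal Python sketch of event-driven tactile sampling, a common way to reduce an e-skin’s data and compute load: cells report only when a reading changes meaningfully, instead of streaming every sample. The cell structure, threshold, and numbers below are illustrative assumptions, not the design from the Sensors paper.

```python
# Minimal sketch of event-driven tactile sampling (illustrative only; not
# the architecture from the Sensors paper). Cells transmit an event only
# when a reading changes by more than a threshold, so idle skin regions
# cost almost nothing downstream.
from dataclasses import dataclass


@dataclass
class SkinCell:
    cell_id: int
    last_reported: float = 0.0
    threshold: float = 0.05  # assumed: report changes above 5% of full scale

    def update(self, reading: float):
        """Return (cell_id, reading) on a significant change, else None."""
        if abs(reading - self.last_reported) > self.threshold:
            self.last_reported = reading
            return (self.cell_id, reading)
        return None


# 1,000 cells streamed at 100 Hz would mean 100,000 samples/s to process;
# with event-driven reporting, untouched cells generate no events at all.
cells = [SkinCell(i) for i in range(1000)]
events = [e for c in cells if (e := c.update(0.0)) is not None]
print(len(events))  # 0 events: no contact anywhere, nothing to compute
```

The payoff of this style of design is that processing scales with the amount of contact rather than with the number of cells, which is what makes covering large portions of a robot’s body tractable.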
The problem of common-sense reasoning has plagued the field of artificial intelligence for over 50 years. Now a new approach, borrowing from two disparate lines of thinking, has made important progress.
Artificial intelligence is getting down in the weeds. An AI-powered robot that can distinguish weeds from crops and remove them could eventually be used as an alternative to chemical herbicides.
Kevin Patel and Nihar Chaniyara at tech start-up AutoRoboCulture in Gandhinagar, India, have created a prototype device, called Nindamani, specifically for cauliflower crops.
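For a rough sense of the vision component such a robot needs, here is a generic sketch of a crop-versus-weed image classifier built on a pretrained backbone. This is not AutoRoboCulture’s actual pipeline; the model choice, the two-class labeling, and the image path are illustrative assumptions, and the new classification head would still need fine-tuning on labeled field images before its output means anything.

```python
# Generic crop-vs-weed classifier sketch (illustrative; not Nindamani's
# actual pipeline). A pretrained ResNet-18 backbone with a fresh two-class
# head, which would need fine-tuning on labeled field images.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # class 0 = crop, 1 = weed
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def classify(path: str) -> str:
    """Label one plant image (the head is untrained here, so outputs are
    placeholders until the model is fine-tuned)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()
    return "weed" if pred == 1 else "crop"

print(classify("cauliflower_row_01.jpg"))  # hypothetical image file
```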
I was confident enough to turn it in. However, I was then looking around online and found out there’s a really easy way to tell if an essay was written by GPT-2: you feed it to GPT-2, and if the model is able to predict the next words, then it was probably written by the AI. It’s easier to detect than normal plagiarism.
I knew that the business school had software they were using to check for plagiarism in all the essays turned in to their online platform, which is how I submitted mine. So I was slightly worried that the company that sold them the anti-plagiarism software might have added such an update.
I don’t think the professors even considered the possibility of GPT-2 writing the essays, but I was slightly worried that the software company had added a detection module. Not that worried, though.
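The detection trick described above is easy to sketch with the Hugging Face transformers library: score how often each token of a text falls inside GPT-2’s own top-k predictions for that position, since model-generated text tends to stay inside those guesses far more often than human prose. This is the idea behind tools like GLTR; the choice of k and any pass/fail cutoff below are illustrative assumptions, not a calibrated detector.

```python
# Sketch of the GPT-2 "predictability" check described above: measure how
# often GPT-2 itself ranks each actual next token among its top-k guesses.
# k = 10 and any cutoff for "suspiciously high" are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_hit_rate(text: str, k: int = 10) -> float:
    """Fraction of tokens that fall inside GPT-2's top-k predictions."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    hits = 0
    for i in range(ids.size(1) - 1):
        # Logits at position i predict the token at position i + 1.
        top_k = torch.topk(logits[0, i], k).indices
        if ids[0, i + 1] in top_k:
            hits += 1
    return hits / max(ids.size(1) - 1, 1)

essay = "Artificial intelligence has transformed modern business education."
print(f"top-10 hit rate: {top_k_hit_rate(essay):.2f}")
# Text GPT-2 generated itself tends to score much higher than human prose.
```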
If you’ve followed the world of pop culture or tech for some time now, then you know that advances in artificial intelligence are heating up. In reality, AI has been the talk of mainstream pop culture and sci-fi since the first Terminator movie came out in 1984. These movies present an example of something called “Artificial General Intelligence.” So how close are we to that?
No, not how close we are to the Terminators taking over, but how close we are to having an AI capable of navigating nearly any problem it’s presented with.