In Moscow, the Colonel’s recipe is now being served by robotic arms!
The audio of the fascinating talks & panel at the Future Day Melbourne 2020 / Machine Understanding event:
Kevin Korb — https://archive.org/searchresults.php
John Wilkins — https://archive.org/details/john-wilkins-humans-as-machines (John, sorry about the audio — also, do you have the slides for this?)
Hugo de Garis — https://archive.org/details/hugo-de-garis-future-day-2020
Panel — https://archive.org/…/future-day-panel-kevin-korb-hugo-de-g…
The video will be uploaded at a later date.
There is much public concern nowadays about when an AGI (Artificial General Intelligence) might appear and what it might do. The expert community is less concerned, because they know we are still a long way off. More fundamentally, though, we are a long way from API (Artificial Primitive Intelligence). In fact, we have no idea what an API might even look like. AI took off without ever reflecting seriously on what intelligence, whether natural or artificial, really is. So it has been streaking along in myriad directions without any goal in sight.
Are you for ethical AI, Eric Klien?
Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week. They argue that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.
For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.
Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.
Tesla’s Full Self-Driving suite continues to improve: a recent video shows a Model 3 safely shifting away from a makeshift lane of construction cones while using Navigate on Autopilot.
Tesla owner-enthusiast Jeremy Greenlee was traveling through a highway construction zone in his Model 3. The zone contained a makeshift lane to the vehicle’s left that was made up of construction cones.
To avoid any chance of colliding with the cones, the vehicle used the driver-assist system to shift one lane to the right automatically. The maneuver removed any risk of contact with the densely placed construction cones to the car’s left, which could have caused hundreds of dollars in cosmetic damage.
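For readers curious how such a maneuver might be structured in software, here is a minimal, hypothetical sketch of the decision logic. The names (Obstacle, choose_lane_shift) and the thresholds are illustrative assumptions for this post, not Tesla's actual Autopilot code.

```python
# Hypothetical sketch of a lane-change decision around detected cones.
# All names and thresholds are illustrative; this is NOT Tesla's code.
from dataclasses import dataclass

LANE_WIDTH_M = 3.7           # typical US highway lane width
CLEARANCE_THRESHOLD_M = 0.5  # assumed minimum lateral clearance to an obstacle

@dataclass
class Obstacle:
    lateral_offset_m: float  # signed offset from lane center (+left, -right)
    ahead_m: float           # longitudinal distance ahead of the car

def choose_lane_shift(obstacles, right_lane_clear: bool) -> int:
    """Return -1 to shift one lane right, 0 to stay in lane."""
    for obs in obstacles:
        # Is a cone encroaching on the left side of our lane,
        # closer than half a lane width plus the clearance margin?
        if 0 < obs.lateral_offset_m < LANE_WIDTH_M / 2 + CLEARANCE_THRESHOLD_M:
            if right_lane_clear:
                return -1  # move one lane away from the cones
    return 0

# Example: a cone row hugging the left lane line, right lane open.
cones = [Obstacle(lateral_offset_m=1.9, ahead_m=40.0)]
print(choose_lane_shift(cones, right_lane_clear=True))  # -1 -> shift right
```

A real system would fuse camera and radar detections over time and verify the target lane is free of traffic before committing; the sketch only captures the final "cones encroaching on the left, so move right" decision.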
The article answers the following questions, which are relevant to the study of intelligence:
What is “understanding”?
How can we tell whether a system has it or not? https://bit.ly/3fynqYu
#AI #MachineLearning #neuroscience #consciousness #philosophy #technology #cognition #intelligence
A collective of more than 1,000 researchers, academics and experts in artificial intelligence are speaking out against soon-to-be-published research that claims to use neural networks to “predict criminality.” At the time of writing, more than 50 employees working on AI at companies like Facebook, Google and Microsoft had signed on to an open letter opposing the research and imploring its publisher to reconsider.
The controversial research is set to be highlighted in an upcoming book series by Springer, the publisher of Nature. Its authors make the alarming claim that their automated facial recognition software can predict if a person will become a criminal, citing the utility of such work in law enforcement applications for predictive policing.
“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Harrisburg University professor and co-author Nathaniel J.S. Ashby said.
A startup that recently emerged from stealth aims to automate much of the feature-development process, with the goal of making data scientists more self-reliant.
Circa 2010
Updated at 18:30 EST to correct the timeline of the prediction to 2030 from 2020.

Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity Is Near. It would be the first step toward creating machines […].
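To give a rough sense of the scale involved, here is a back-of-envelope compute estimate in Python. The figures (about 8.6e10 neurons, about 1e4 synapses per neuron, and the per-synapse event and operation rates) are commonly cited approximations assumed for illustration; they do not come from the article.

```python
# Back-of-envelope estimate of brute-force brain-simulation compute.
# Figures are commonly cited approximations, not from the article.
neurons = 8.6e10           # ~86 billion neurons in the human brain
synapses_per_neuron = 1e4  # rough average connectivity
updates_per_second = 10    # assumed average synaptic event rate (Hz)
ops_per_update = 10        # assumed arithmetic ops per synaptic event

total_synapses = neurons * synapses_per_neuron
ops_per_second = total_synapses * updates_per_second * ops_per_update
print(f"~{total_synapses:.1e} synapses, ~{ops_per_second:.1e} ops/sec")
# -> ~8.6e+14 synapses, ~8.6e+16 ops/sec: tens of petaflops just for a
#    coarse functional model, before any molecular-level detail.
```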