Quantum computing startups are all the rage, but it’s unclear if they’ll be able to produce anything of use in the near future.
As a buzzword, quantum computing probably ranks only below AI in terms of hype. Large tech companies such as Alphabet, Amazon, and Microsoft now have substantial research and development efforts in quantum computing. A host of startups have sprung up as well, some boasting staggering valuations. IonQ, for example, was valued at $2 billion when it went public in October through a special-purpose acquisition company. Much of this commercial activity has happened with baffling speed over the past three years.
I am as pro-quantum-computing as one can be: I’ve published more than 100 technical papers on the subject, and many of my PhD students and postdoctoral fellows are now well-known quantum computing practitioners all over the world. But I’m disturbed by some of the quantum computing hype I see these days, particularly when it comes to claims about how it will be commercialized.
DARPA, the innovation arm of the U.S. military, wants artificial intelligence to make battlefield medical decisions, raising red flags among some experts and ethicists.
On top of the environmental concerns, Japan has an added motivation for this push towards automation: its aging population and concurrent low birth rates mean its workforce is rapidly shrinking, and the implications for the country's economy aren't good.
Thus it behooves the Japanese to automate as many job functions as they can (and the rest of the world likely won't be far behind, though it won't have quite the same impetus). According to the Nippon Foundation, more than half of Japanese ship crew members are over the age of 50.
In partnership with Mitsui O.S.K. Lines Ltd., the foundation recently completed two tests of autonomous ships. The first involved the Mikage, a 313-foot container ship, which sailed 161 nautical miles from Tsuruga Port, north of Kyoto, to Sakai Port near Osaka. Upon reaching its destination port, the ship was even able to steer itself into its designated bay, with drones dropping its mooring line.
What most people define as common sense is actually common learning, and much of that is biased.
The biggest short-term problem in AI: as mentioned in the video clip, an over-emphasis on data set size, irrespective of accuracy, representation, or accountability.
The biggest long-term problem in AI: instead of trying to build machines that replace us, we should be seeking machines that complement us. A merge is neither necessary nor advisable.
If we think about it, building a machine to think like a human is like buying a racehorse and insisting that it function like a camel. It is doomed to fail, because there are only two scenarios: either humans are replaced or they are not. If we are, then we have failed. If we are not, then the AI development has failed.
Time for a change of direction.
Which, to me, sounds both unimaginably complex and sublimely simple.
Sort of like, perhaps, our brains.
Building chips with analogs of biological neurons, dendrites, and neural networks, much like our brains, is also key to the massive efficiency gains Rain Neuromorphics is claiming: 1,000 times more efficient than existing digital chips from companies like Nvidia.
Sanctuary, a startup developing human-like robots, has raised tens of millions in capital. But experts are skeptical it can deliver on its promises.
Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.
Classification algorithms aim to identify to which groups a set of observations belong. A machine learning practitioner typically builds multiple models and selects a final classifier to be one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
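As a concrete illustration of that workflow, here is a minimal Python sketch using scikit-learn; the breast-cancer dataset and the two candidate models are stand-ins chosen for this example, not specifics from the article.

```python
# A minimal sketch of the model-selection workflow described above, using
# scikit-learn. The dataset and candidate models are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Build several candidate classifiers and keep the one with the best
# accuracy on the held-out test set.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best_name = max(scores, key=scores.get)
print(scores, "-> selected:", best_name)
```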
Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].
The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
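The article goes on to introduce its own visual tool; as a generic stand-in for the kind of post-hoc explanation XAI provides, the following Python sketch uses scikit-learn's permutation importance to rank the features a black-box classifier relies on (dataset and model again chosen only for illustration).

```python
# A sketch of one generic post-hoc explanation technique, permutation
# importance, applied to a "black-box" classifier. It stands in for the
# broader family of XAI methods; the article's visual tool is not shown here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)

# An accurate but opaque model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# features whose shuffling hurts most contribute most to the predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

ranking = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for feature, importance in ranking[:5]:
    print(f"{feature}: {importance:.3f}")
```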
A four-legged robot called Spot has been deployed to wander around the ruins of ancient Pompeii, identifying structural and safety issues while delving underground to inspect tunnels dug by relic thieves.
The dog-like robot is the latest in a series of technologies used as part of a broader project to better manage the archaeological park since 2013, when Unesco threatened to add Pompeii to a list of world heritage sites in peril unless Italian authorities improved its preservation.