Recently, a team of researchers from Facebook AI and Tel Aviv University proposed an AI system that solves the multiple-choice intelligence test known as Raven’s Progressive Matrices. The proposed AI system is a neural network model that combines several advances in generative models, including the use of multiple pathways through the same network.
Raven’s Progressive Matrices, also known as Raven’s Matrices, are multiple-choice intelligence tests. The test is used to measure abstract reasoning and is regarded as a non-verbal estimate of fluid intelligence.
In this test, a person tries to fill in the missing image in a 3×3 grid of abstract images. According to the researchers, there has been plenty of similar research, but its focus has been entirely on choosing the right answer out of the given choices. In this work, by contrast, the researchers focused on generating the correct answer from the grid alone, without seeing the choices.
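To make this “generate first, rather than choose” idea concrete, here is a minimal sketch of one way such a system could be structured, assuming a simple encoder-decoder over small grayscale panels; the architecture, panel size and similarity-based selection step are illustrative assumptions, not the authors’ actual model.

```python
# Minimal sketch of "generate the missing panel, then (optionally) match it
# against the choices". All sizes and layers are illustrative assumptions.
import torch
import torch.nn as nn

PANEL = 32  # assumed panel resolution (32x32 grayscale)

class ContextEncoder(nn.Module):
    """Encodes the eight visible panels of the 3x3 grid into one context vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.panel_enc = nn.Sequential(nn.Linear(PANEL * PANEL, dim), nn.ReLU())
        self.merge = nn.Linear(8 * dim, dim)

    def forward(self, panels):                      # panels: (batch, 8, PANEL, PANEL)
        b = panels.shape[0]
        feats = self.panel_enc(panels.reshape(b, 8, PANEL * PANEL))
        return torch.relu(self.merge(feats.reshape(b, -1)))

class AnswerGenerator(nn.Module):
    """Decodes the context vector into an image for the missing ninth panel."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = ContextEncoder(dim)
        self.decoder = nn.Sequential(nn.Linear(dim, PANEL * PANEL), nn.Sigmoid())

    def forward(self, panels):
        return self.decoder(self.encoder(panels)).reshape(-1, PANEL, PANEL)

def pick_choice(generated, choices):
    """Rank the multiple-choice options by pixel distance to the generated panel."""
    dists = ((choices - generated) ** 2).flatten(1).mean(dim=1)
    return int(dists.argmin())

model = AnswerGenerator()
grid = torch.rand(1, 8, PANEL, PANEL)   # the eight visible panels
answer = model(grid)                    # generated ninth panel, no choices seen
```

The point of the sketch is the ordering: the network never sees the candidate answers while generating; matching against the choices, if done at all, happens only afterwards.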
If Facebook’s AI research objectives are successful, it may not be long before home assistants take on a whole new range of capabilities. Last week the company announced new work focused on advancing what it calls “embodied AI”: basically, a smart robot that will be able to move around your house to help you remember things, find things, and maybe even do things.
Robots That Hear, Home Assistants That See
In Facebook’s blog post about audio-visual navigation for embodied AI, the authors point out that most of today’s robots are “deaf”; they move through spaces based purely on visual perception. The company’s new research aims to train AI using both visual and audio data, letting smart robots detect and follow objects that make noise as well as use sounds to understand a physical space.
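As a rough illustration of what training on “both visual and audio data” can mean in practice, the sketch below fuses an RGB observation with a binaural spectrogram inside a single navigation policy; the network sizes, input shapes and action space are assumptions made for the example, not Facebook’s actual embodied-AI code.

```python
# Illustrative sketch of an audio-visual navigation policy: the agent's camera
# image and a spectrogram of what it hears feed one shared action head.
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    def __init__(self, n_actions=4, dim=256):
        super().__init__()
        # Visual branch: encodes an RGB observation (3 x 64 x 64).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 6 * 6, dim), nn.ReLU(),
        )
        # Audio branch: encodes a two-channel (binaural) spectrogram.
        self.audio = nn.Sequential(
            nn.Flatten(), nn.Linear(2 * 65 * 26, dim), nn.ReLU(),
        )
        # Fusion: both modalities are concatenated before the action head,
        # so sound can steer the agent even when the target is out of view.
        self.head = nn.Linear(2 * dim, n_actions)

    def forward(self, rgb, spectrogram):
        fused = torch.cat([self.vision(rgb), self.audio(spectrogram)], dim=-1)
        return self.head(fused)  # logits over move-forward/turn/stop actions

policy = AudioVisualPolicy()
logits = policy(torch.rand(1, 3, 64, 64), torch.rand(1, 2, 65, 26))
```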
Intel, continuing its acquisition spree, has acquired Israel-based Cnvrg.io. The deal, like most of its recent deals, is aimed at strengthening its machine learning and AI operations. Founded in 2016, the startup provides a platform for data scientists to build and run machine learning models, which can be used to train models, run comparisons and get recommendations, among other things. Co-founded by Yochay Ettun and Leah Forkosh Kolben, Cnvrg was valued at around $17 million in its last round.
According to a statement by an Intel spokesperson, Cnvrg will be an independent Intel company and will continue to serve its existing and future customers after the acquisition. However, there is no information on the financial terms of the deal or on who will join Intel from the startup.
The deal comes merely a week after Intel announced the acquisition of San Francisco-based software optimisation startup SigOpt, whose technologies it plans to leverage across its products to accelerate, amplify and scale its AI software tools. SigOpt’s software combined with Intel’s hardware could give the company a major competitive advantage, providing differentiated value for data scientists and developers.
Japanese researchers control a Gundam robot using their minds.
Japanese scientists have created a device that controls a mini toy Gundam robot using the human mind, turning one of the anime’s most exciting technological concepts into reality.
The researchers customized a Zaku robot toy available through Bandai’s Zeonic Technics line, which buyers ordinarily have to program manually using a smartphone app.
Real-time health monitoring and the sensing abilities of robots require soft electronics, but a challenge of using such materials lies in their reliability. Unlike rigid devices, soft electronics are elastic and pliable, which makes their performance less repeatable. This variation in response is known as hysteresis.
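A toy calculation makes the issue concrete: if a soft sensor reads differently while pressure is being applied than while it is being released, the gap between the two curves is its hysteresis. The numbers below are invented purely for illustration and are not data from the NUS study.

```python
# Toy hysteresis calculation for a pressure sensor (made-up numbers).
import numpy as np

pressure = np.linspace(0, 10, 50)           # applied pressure (kPa)
loading = 1.0 * pressure                    # sensor signal while pressure increases
unloading = 1.0 * pressure + 0.8 * np.sin(np.pi * pressure / 10)  # while it is released

# One common way to quantify hysteresis: the largest gap between the loading
# and unloading curves, expressed as a fraction of the full-scale output.
hysteresis = np.max(np.abs(unloading - loading)) / (loading.max() - loading.min())
print(f"hysteresis ≈ {hysteresis:.1%}")     # lower means more repeatable sensing
```

For this made-up sensor the figure comes out around 8%; a material with less hysteresis would bring the two curves closer together and make repeated readings agree.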
Guided by the theory of contact mechanics, a team of researchers from the National University of Singapore (NUS) came up with a new sensor material that has significantly less hysteresis. This improvement enables more accurate wearable health technology and robotic sensing.
The research team, led by Assistant Professor Benjamin Tee from the Institute for Health Innovation & Technology at NUS, published their results in the prestigious journal Proceedings of the National Academy of Sciences on 28 September 2020.
DiCarlo and Yamins, who now runs his own lab at Stanford University, are part of a coterie of neuroscientists using deep neural networks to make sense of the brain’s architecture. In particular, scientists have struggled to understand the reasons behind the specializations within the brain for various tasks. They have wondered not just why different parts of the brain do different things, but also why the differences can be so specific: Why, for example, does the brain have an area for recognizing objects in general but also for faces in particular? Deep neural networks are showing that such specializations may be the most efficient way to solve problems.
Neuroscientists are finding that deep-learning networks, often criticized as “black boxes,” can be good models for the organization of living brains.
In this video, I’m going to talk about how an AI camera mistook a soccer ref’s bald head for the ball. Technology and sports have a fairly mixed relationship already. Log on to Twitter during a soccer match (or football, as it’s properly known) and, as well as people tweeting ambiguous statements like “YESSS” and “oh no mate” to about 20,000 inexplicable retweets, you’ll likely see a lot of complaints about the video assistant referee (VAR) and occasionally goal-line technology not doing its job. Fans of Scottish football team Inverness Caledonian Thistle FC experienced a new and hilarious technological glitch during a match last weekend, but in all honesty, you’d be hard-pressed to say it didn’t improve the viewing experience dramatically.
The club announced a few weeks ago that it was moving from human camera operators to cameras controlled by AI. The club proudly announced at the time that the new Pixellot system “uses cameras with in-built, AI, ball-tracking technology” and would be used to capture HD footage of all home matches at Caledonian Stadium, which would be broadcast directly to season-ticket holders’ homes. The AI camera appeared to mistake the linesman’s bald head for the ball for much of the match, repeatedly swinging back to follow the official instead of the actual game. Many viewers complained they missed their team scoring a goal because the camera “kept thinking the lino’s bald head was the ball,” and some even suggested the club would have to provide the linesman with a toupee or a hat.
Researchers at Stanford University have developed a CRISPR-based “lab on a chip” to detect COVID-19, and are working with automakers at Ford to develop their prototype into a market-ready product.
This could provide an automated, hand-held device designed to deliver a coronavirus test result anywhere within 30 minutes.
In a study published this week in the Proceedings of the National Academy of Sciences, the test spotted active infections quickly and cheaply, using electric fields to purify fluids from a nasal swab sample and drive DNA-cutting reagents within the system’s tiny passages.
Boeing has hired a former SpaceX and Tesla executive with autonomous technology experience to lead its software development team.
Effective immediately, Jinnah Hosein is Boeing’s vice-president of software engineering, a new position that includes oversight of “software engineering across the enterprise”, Boeing says.
“Hosein will lead a new, centralised organisation of engineers who currently support the development and delivery of software embedded in Boeing’s products and services,” the Chicago-based airframer says. “The team will also integrate other functional teams to ensure engineering excellence throughout the product life cycle.”
Another argument for the government to bring AI into its quantum computing program is the fact that the United States is a world leader in the development of computer intelligence. Congress is close to passing the AI in Government Act, which would encourage all federal agencies to identify areas where artificial intelligence could be deployed. And government partners like Google are making some amazing strides in AI, even creating a computer intelligence that can easily pass a Turing test over the phone by seeming like a normal human, no matter who it’s talking with. It would probably be relatively easy for Google to merge some of its AI development with its quantum efforts.
The other aspect that makes merging quantum computing with AI so interesting is that the AI could probably help to reduce some of the so-called noise in the quantum results. I’ve always said that the way forward for quantum computing right now is to pair a quantum machine with a traditional supercomputer. The quantum computer would return results like it always does, with the correct outcome muddled in with a lot of wrong answers, and humans would then program a traditional supercomputer to help eliminate the erroneous results. The problem with that approach is that it’s fairly labor-intensive, and you still have the bottleneck of having to run results through a normal computing infrastructure. It would be a lot faster than giving the entire problem to the supercomputer, because you are only fact-checking a limited number of results pared down by the quantum machine, but it would still have to work on each of them one at a time.
But imagine if we could simply train an AI to look at the data coming from the quantum machine and figure out what makes sense and what is probably wrong, without human intervention. If that AI were driven by a quantum computer too, the results could be returned without any hardware-based delays. And if we also employed machine learning, the AI could get better over time: the more problems fed to it, the more accurate it would get.
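As a rough sketch of that idea, a small classifier could be trained to separate plausible quantum outcomes from noisy ones and then applied to new batches of results without a human in the loop; the features, labels and model below are purely illustrative assumptions, not any existing quantum-computing pipeline.

```python
# Illustrative sketch: train a simple model to flag probably-wrong quantum
# results instead of having humans program the filtering by hand.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each quantum run is summarised by a couple of features, e.g. how
# consistent repeated shots were and how often the top bitstring appeared.
def fake_run(correct):
    consistency = rng.normal(0.8 if correct else 0.4, 0.1)
    top_frequency = rng.normal(0.6 if correct else 0.2, 0.1)
    return [consistency, top_frequency]

X = np.array([fake_run(c) for c in (True, False) * 500])
y = np.array([1, 0] * 500)            # 1 = likely correct outcome, 0 = noise

# Train the filter once, then apply it to fresh quantum output automatically.
clf = LogisticRegression().fit(X, y)
new_runs = np.array([fake_run(True), fake_run(False)])
print(clf.predict(new_runs))          # keep runs predicted as 1, discard the rest
```

With more runs fed through it, a model like this could keep improving, which is the machine-learning feedback loop the paragraph above describes.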