No, it’s not a sinister transformer. It’s Kubota’s new fully autonomous Dream Tractor.
Serving GPT-2 at scale
Over the last few years, the size of deep learning models has increased at an exponential pace (famously among language models):

[Chart: parameter counts of notable deep learning models over time]
And in fact, this chart is out of date. As of this month, OpenAI has announced GPT-3, which is a 175 billion parameter model—or roughly ten times the height of this chart.
As models grow larger, they introduce new infrastructure challenges. For my colleagues and me, who are building Cortex (open source model serving infrastructure), these challenges are front and center, especially as the number of users deploying large models to production increases.
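To make the challenge concrete, here is a minimal sketch of a single-process GPT-2 inference endpoint using Hugging Face's transformers library and FastAPI. This illustrates the workload, not Cortex's implementation; the endpoint path and request schema are assumptions for the example.

```python
# Minimal sketch of a GPT-2 text-generation endpoint (illustrative only).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

app = FastAPI()

# Load weights once at startup; this is the expensive step that model
# serving infrastructure has to manage across many replicas.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

class Prompt(BaseModel):
    text: str
    max_length: int = 50  # hypothetical request parameter

@app.post("/generate")
def generate(prompt: Prompt):
    input_ids = tokenizer.encode(prompt.text, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(input_ids, max_length=prompt.max_length)
    return {"completion": tokenizer.decode(output[0], skip_special_tokens=True)}
```

Even this toy server has to hold roughly half a gigabyte of weights in memory for the smallest GPT-2 before it can answer a single request; at larger model sizes and request volumes, that cost is exactly what serving infrastructure exists to manage.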
When Plato set out to define what made a human a human, he settled on two primary characteristics: we do not have feathers, and we are bipedal (we walk upright on two legs). Plato's characterization may not encompass all of what identifies a human, but his reduction of an object to its fundamental characteristics illustrates the idea behind a technique known as principal component analysis.
Now, Caltech researchers have combined tools from machine learning and neuroscience to discover that the brain uses a mathematical system to organize visual objects according to their principal components. The work shows that the brain contains a two-dimensional map of cells representing different objects. The location of each cell in this map is determined by the principal components (or features) of its preferred objects; for example, cells that respond to round, curvy objects like faces and apples are grouped together, while cells that respond to spiky objects like helicopters or chairs form another group.
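For readers unfamiliar with the technique, here is a toy sketch of principal component analysis projecting hand-made object features onto a two-dimensional map. The objects and feature values are illustrative assumptions, not the study's data:

```python
# Toy PCA illustration (not the Caltech study's method): project simple
# object feature vectors onto their first two principal components, so
# similar objects end up near each other on a 2-D map.
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical hand-made features: [roundness, spikiness, size, symmetry]
objects = ["face", "apple", "helicopter", "chair"]
features = np.array([
    [0.9, 0.1, 0.5, 0.8],   # face: round, not spiky
    [0.8, 0.1, 0.3, 0.9],   # apple: round, not spiky
    [0.1, 0.9, 0.7, 0.4],   # helicopter: spiky
    [0.2, 0.7, 0.5, 0.6],   # chair: spiky
])

pca = PCA(n_components=2)
coords = pca.fit_transform(features)
for name, (x, y) in zip(objects, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
# Round objects (face, apple) land close together on the 2-D map,
# while spiky objects (helicopter, chair) form a separate cluster.
```

The point of the toy: once objects are described by a few numbers, PCA finds the axes along which they differ most, and similar objects land near each other on the resulting two-dimensional map, loosely analogous to the map of cells the researchers describe.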
The research was conducted in the laboratory of Doris Tsao (BS ‘96), professor of biology, director of the Tianqiao and Chrissy Chen Center for Systems Neuroscience and holder of its leadership chair, and Howard Hughes Medical Institute Investigator. A paper describing the study appears in the journal Nature on June 3.
Learning quantum error correction: the image visualizes the activity of artificial neurons in the Erlangen researchers’ neural network while it is solving its task. © Max Planck Institute for the Science of Light.
Neural networks enable learning of error correction strategies for computers based on quantum physics
Quantum computers could solve complex tasks that are beyond the capabilities of conventional computers. However, quantum states are extremely sensitive to constant interference from their environment. The plan is to combat this using active protection based on quantum error correction. Florian Marquardt, Director at the Max Planck Institute for the Science of Light, and his team have now presented a quantum error correction system that is capable of learning thanks to artificial intelligence.
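The announcement stays high level, but the problem the network is learning to solve can be sketched with the textbook starting point for quantum error correction, the three-qubit bit-flip code. Below is a classical toy simulation of that code, illustrating the redundancy-plus-correction idea rather than the Erlangen team's learned strategy:

```python
# Toy classical simulation of the three-qubit bit-flip code.
import random

def encode(bit):
    # Encode one logical bit redundantly across three physical bits.
    return [bit, bit, bit]

def apply_noise(code, p=0.1):
    # Each physical bit flips independently with probability p,
    # standing in for environmental interference.
    return [b ^ (random.random() < p) for b in code]

def correct(code):
    # Majority vote recovers the logical bit as long as
    # at most one physical bit flipped.
    return max(set(code), key=code.count)

errors = 0
for _ in range(10_000):
    noisy = apply_noise(encode(1))
    if correct(noisy) != 1:
        errors += 1
# The logical error rate lands well below the 10% physical flip rate.
print(f"logical error rate: {errors / 10_000:.4f}")
```

The twist in the Erlangen work is that the correction strategy is not hard-coded like the majority vote above; instead, a neural network learns one.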
In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.
Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users' keyboard strokes, mouse movements, and the websites they visit. Time Doctor goes further, taking videos of users' screens; it can also take a picture via webcam every 10 minutes to check that employees are at their computers. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to see who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.”
Changing Course
The Air Force announced an AI initiative called “Skyborg” last March with the goal of flying fighter jets without anyone at the controls. Now, Shanahan says the Air Force may be more interested in drone swarms and other uses for AI than in taking the pilot out of a fighter plane’s cockpit.
“Maybe I shouldn’t be thinking about a 65ft-wingspan, maybe it is a small autonomous swarming capability,” Shanahan told BBC News. “The last thing I would claim is that carriers and fighters and satellites are going away in the next couple of years.”
The current health crisis has snowballed into a world economic crisis in which every old business norm has been challenged. In such times, we cannot fall back on old ways of doing business. Today, three technologies are poised to change every aspect of enterprises and our lives: the Internet of Things (IoT), Artificial Intelligence (AI), and blockchain. Now more than ever, organisations realise the pertinent need for a robust digital foundation for their businesses as their future plans have been disrupted. “To achieve that level of business sophistication holistically it is imperative that there is a seamless flow of data across all the functions of an enterprise. That requires connected data that is secure and one that is driven by connected intelligence,” Guruprasad Gaonkar, JAPAC SaaS Leader for ERP & Digital Supply Chain at Oracle, told Moneycontrol in an interview:
How is India reacting to emerging technologies compared with other Asia Pacific (APAC) markets?
What is free will? And what are the key elements involved? We provide our own opinions on these questions in this article…
#AI #philosophy #neuroscience #technology
It turns out that you don’t need a computer to create an artificial intelligence. In fact, you don’t even need electricity.
In an extraordinary bit of left-field research, scientists from the University of Wisconsin–Madison have found a way to create artificially intelligent glass that can recognize images without any need for sensors, circuits, or even a power source — and it could one day save your phone’s battery life.
“We’re always thinking about how we provide vision for machines in the future, and imagining application specific, mission-driven technologies,” researcher Zongfu Yu said in a press release. “This changes almost everything about how we design machine vision.”