
Machines are learning quickly and replacing humans at a faster rate than ever before. A fresh development in this direction is a robot that can taste food. Not only can it taste the food, it can do so while preparing the dish. The robot can even recognise the taste of food at the various stages of chewing a human goes through while eating.

The robot chef was made by Mark Oleynik, a Russian mathematician and computer scientist. Researchers at Cambridge University trained the robot to ‘taste’ the food as it cooks it.

The robot had already been trained to cook omelets. The researchers at Cambridge University added a sensor that can recognise different levels of saltiness.

Jack in the Box has become the latest American food chain to experiment with automation, as it seeks to handle staffing challenges and improve the efficiency of its service.

Jack in the Box is one of the largest quick service restaurant chains in America, with more than 2,200 branches. With continued staffing challenges impacting its operating hours and costs, Jack in the Box saw a need to revamp its technology and establish new systems – particularly in the back-of-house – that improve restaurant-level economics and alleviate the pain points of working in a high-volume commercial kitchen.

A virtual robot arm has learned to solve a wide range of different puzzles — stacking blocks, setting the table, arranging chess pieces — without having to be retrained for each task. It did this by playing against a second robot arm that was trained to give it harder and harder challenges.

Self play: Developed by researchers at OpenAI, the identical robot arms — Alice and Bob — learn by playing a game against each other in a simulation, without human input. The robots use reinforcement learning, a technique in which AIs learn by trial and error which actions to take in different situations to achieve certain goals. The game involves moving objects around on a virtual tabletop. By arranging objects in specific ways, Alice tries to set puzzles that are hard for Bob to solve. Bob tries to solve Alice’s puzzles. As they learn, Alice sets more complex puzzles and Bob gets better at solving them.
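OpenAI's actual setup is far larger, but the core loop — a setter raising the difficulty whenever a solver masters the current level — can be sketched in a few lines. This toy (the line-world task, the Q-learner, and the mastery threshold are all invented for illustration) has a tabular Q-learning "Bob" learn to move a token to a goal cell, while an "Alice" policy pushes the goal farther away each time Bob wins consistently:

```python
import random
from collections import defaultdict

class Solver:
    """Tabular Q-learner: move a token along a line to reach a goal cell."""
    def __init__(self, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (pos, goal, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, pos, goal):
        if random.random() < self.eps:            # occasional exploration
            return random.choice((-1, 1))
        return max((-1, 1), key=lambda a: self.q[(pos, goal, a)])

    def learn(self, pos, goal, a, reward, new_pos):
        best_next = max(self.q[(new_pos, goal, b)] for b in (-1, 1))
        key = (pos, goal, a)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

def episode(solver, goal, max_steps):
    """One attempt at the puzzle; returns True if the goal was reached."""
    pos = 0
    for _ in range(max_steps):
        a = solver.act(pos, goal)
        new_pos = pos + a
        reward = 1.0 if new_pos == goal else -0.01
        solver.learn(pos, goal, a, reward, new_pos)
        pos = new_pos
        if pos == goal:
            return True
    return False

random.seed(0)
bob = Solver()
difficulty = 1                        # "Alice": current goal distance
for _ in range(200):
    wins = sum(episode(bob, difficulty, max_steps=4 * difficulty)
               for _ in range(20))
    if wins >= 16:                    # Bob mastered this level: set a harder puzzle
        difficulty += 1
print("final difficulty reached:", difficulty)
```

The key design point mirrors the article: neither agent needs human-authored curricula — the setter's schedule emerges from the solver's measured competence.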

Quantum machine learning is a field of study that investigates the interaction of concepts from quantum computing with machine learning.

For example, we might wish to see whether quantum computers can reduce the amount of time it takes to train or evaluate a machine learning model. Conversely, we may use machine learning approaches to discover quantum error-correcting algorithms, estimate the properties of quantum systems, and create novel quantum algorithms.

But the bizarre invention could soon become a reality, as scientists have taken a major step forward in the development of electronic skin with integrated artificial hairs.

Hairs allow for ‘natural touch’ and let us detect different sensations such as rough and smooth, as well as the direction the touch is coming from.

Researchers from Chemnitz University of Technology say the ‘e-skin’ could have a range of uses in the future, including skin replacement for humans and artificial skin for humanoid robots.

What explains the dramatic progress from 20th-century to 21st-century AI, and how can the remaining limitations of current AI be overcome? The widely accepted narrative attributes this progress to massive increases in the quantity of computational and data resources available to support statistical learning in deep artificial neural networks. We show that an additional crucial factor is the development of a new type of computation. Neurocompositional computing adopts two principles that must be simultaneously respected to enable human-level cognition: the principles of Compositionality and Continuity. These have seemed irreconcilable until the recent mathematical discovery that compositionality can be realized not only through discrete methods of symbolic computing, but also through novel forms of continuous neural computing.
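One classic way compositional structure can live inside continuous vectors — in the spirit of the neurocompositional view above — is a tensor product representation: each filler (e.g. a word) is bound to a role (e.g. a grammatical slot) by an outer product, the bindings are superposed into one tensor, and fillers are later recovered by unbinding with the role vector. The sketch below is illustrative only (the vocabulary, dimensions, and random vectors are made up), assuming approximately orthogonal role and filler vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random, approximately orthogonal role and filler vectors.
dim = 64
roles = {r: rng.standard_normal(dim) / np.sqrt(dim) for r in ("agent", "patient")}
fillers = {f: rng.standard_normal(dim) / np.sqrt(dim) for f in ("dog", "cat")}

def bind(filler, role):
    """Bind a filler to a role via the outer product (a rank-1 tensor)."""
    return np.outer(filler, role)

# Encode "dog chases cat" by superposing the role/filler bindings.
sentence = bind(fillers["dog"], roles["agent"]) + bind(fillers["cat"], roles["patient"])

def unbind(tensor, role):
    """Recover the filler bound to `role` (exact when roles are orthonormal)."""
    return tensor @ role / (role @ role)

recovered = unbind(sentence, roles["agent"])
best = max(fillers, key=lambda f: fillers[f] @ recovered)
print(best)  # the filler bound to the "agent" role
```

The structure is discrete-symbolic (who did what to whom) yet every operation is continuous linear algebra — exactly the reconciliation of Compositionality and Continuity the paragraph describes.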

Consciousness defines our existence. It is, in a sense, all we really have, all we really are. The nature of consciousness has been pondered in many ways, in many cultures, for many years. But we still can’t quite fathom it.


Consciousness is, some say, all-encompassing, comprising reality itself, the material world a mere illusion. Others say consciousness is the illusion, without any real sense of phenomenal experience, or conscious control. According to this view we are, as TH Huxley bleakly said, ‘merely helpless spectators, along for the ride’. Then, there are those who see the brain as a computer. Brain functions have historically been compared to contemporary information technologies, from the ancient Greek idea of memory as a ‘seal ring’ in wax, to telegraph switching circuits, holograms and computers. Neuroscientists, philosophers, and artificial intelligence (AI) proponents liken the brain to a complex computer of simple algorithmic neurons, connected by variable strength synapses. These processes may be suitable for non-conscious ‘auto-pilot’ functions, but can’t account for consciousness.

Finally, there are those who take consciousness as fundamental, as connected somehow to the fine-scale structure and physics of the universe. This includes, for example, Roger Penrose’s view that consciousness is linked to the Objective Reduction process (the ‘collapse of the quantum wavefunction’), an activity on the edge between the quantum and classical realms. Some see such connections to fundamental physics as spiritual, as a connection to others and to the universe; others see it as proof that consciousness is a fundamental feature of reality, one that developed long before life itself.

To better automate reasoning, machines should ideally be able to systematically revise the view they have obtained about the world. Timotheus Kampik’s dissertation work presents mathematical reasoning approaches that strike a balance between retaining consistency with previously drawn conclusions and rejecting them in face of overwhelming new evidence.

When reasoning and when making decisions, humans are continuously revising what their view of the world is, by rejecting what they have previously considered true or desirable, and replacing it with an updated and ideally more useful perspective. Enabling machines to do so in a similar manner, but with logical precision, is a long-running line of artificial intelligence research.

In his dissertation, Timotheus advances this line of research by devising reasoning approaches that balance retaining previously drawn conclusions for the sake of ensuring consistency against revising them to accommodate new, compelling evidence. To this end, he draws on formal argumentation, an approach to logic-based automated reasoning.
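The dissertation's specific formalisms aren't given in this summary, but a standard building block of formal argumentation is the grounded extension of an abstract argumentation framework: the (skeptically accepted) set of arguments computed by repeatedly adding every argument whose attackers are all counter-attacked. A minimal sketch, with an invented three-argument framework where `a` attacks `b` and `b` attacks `c`:

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework,
    computed by iterating the characteristic function from the empty set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def defended(a, s):
        # `a` is defended by `s` if every attacker of `a` is itself
        # attacked by some member of `s`.
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    extension = set()
    while True:
        new = {a for a in args if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# `a` attacks `b`, `b` attacks `c`: accepting `a` reinstates `c`.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Reinstatement is the revision behaviour the summary describes in miniature: `c` is first rejected (it is attacked), then accepted once the new conclusion `a` defeats its attacker.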

From search engines to voice assistants, computers are getting better at understanding what we mean. That’s thanks to language-processing programs that make sense of a staggering number of words, without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics—and a new study reveals that this computational approach can assign many kinds of information to a single word, just like the human brain.

The study, published April 14 in the journal Nature Human Behaviour, was co-led by Gabriel Grand, a graduate student in electrical engineering and computer science who is affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory, and Idan Blank Ph.D. ‘16, an assistant professor at the University of California at Los Angeles. The work was supervised by McGovern Institute for Brain Research investigator Ev Fedorenko, a cognitive neuroscientist who studies how the brain uses and understands language, and Francisco Pereira at the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models demonstrates just how much can be learned about the world through language alone.

The research team began its analysis of statistics-based language processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarities of words’ meanings. For example, such a program might conclude that “bread” and “apple” are more similar to one another than they are to “notebook,” because “bread” and “apple” are often found in proximity to words like “eat” or “snack,” whereas “notebook” is not.
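The bread/apple/notebook example can be reproduced with a few lines of counting. This toy sketch (the five-sentence corpus is made up, and real models use far larger corpora and weighting schemes) builds each word's vector from its sentence-level co-occurrence counts and compares vectors with cosine similarity:

```python
import math
from collections import Counter
from itertools import combinations

corpus = [
    "eat bread for a snack", "eat an apple as a snack",
    "bread and apple on the table", "write notes in the notebook",
    "the notebook is on the desk",
]

# Count how often each pair of words appears in the same sentence.
cooc = Counter()
for sentence in corpus:
    for w1, w2 in combinations(set(sentence.split()), 2):
        cooc[frozenset((w1, w2))] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """A word's 'meaning': its co-occurrence counts with every vocab word."""
    return [cooc[frozenset((word, other))] if other != word else 0
            for other in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    return dot / norm if norm else 0.0

print("bread~apple:   ", round(cosine(vector("bread"), vector("apple")), 3))
print("bread~notebook:", round(cosine(vector("bread"), vector("notebook")), 3))
```

Even on this tiny corpus, "bread" and "apple" score higher than "bread" and "notebook", because they share neighbours like "eat" and "snack" — the same distributional signal the study's models exploit at scale.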