
Human-level artificial intelligence is finally close to being achieved, according to a lead researcher at Google’s DeepMind AI division.

Dr Nando de Freitas said “the game is over” in the decades-long quest to realise artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry.

Described as a “generalist agent”, DeepMind’s new Gato AI simply needs to be scaled up in order to create an AI capable of rivalling human intelligence, Dr de Freitas said.
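
The phrase “generalist agent” refers to one model trained on many tasks at once, which is possible because very different inputs can be serialized into a single shared stream of tokens. The toy Python sketch below illustrates only that serialization idea; the Episode type, the hashing tokenizer, and the 1,024-token vocabulary are invented for this example and are not Gato’s actual scheme.

```python
# Toy illustration of the "one model, many tasks" idea behind a generalist
# agent. All names and the tokenization scheme are hypothetical, not Gato's.
from dataclasses import dataclass

@dataclass
class Episode:
    task: str          # e.g. "stack-blocks" or "write-poetry"
    observation: list  # raw per-task data (joint angles, characters, ...)

def tokenize(episode: Episode) -> list[int]:
    """Flatten any task's data into one shared integer vocabulary."""
    # Hash each element into a fixed vocabulary of 1,024 tokens. (Python's
    # hash() is salted per process, so outputs differ between runs.)
    return [hash((episode.task, x)) % 1024 for x in episode.observation]

# Both tasks now look like the same kind of input, so a single sequence
# model (in Gato's case, a transformer) can be trained on all of them.
print(tokenize(Episode("stack-blocks", [0.1, 0.5, 0.9])))
print(tokenize(Episode("write-poetry", list("roses are red"))))
```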

Never mind reading generic guides or practicing with friends — Google is betting that algorithms can get you ready for a job interview. The company has launched an Interview Warmup tool that uses AI to help you prepare for interviews across various roles. The site asks typical questions (such as the classic “tell me a bit about yourself”) and analyzes your spoken or typed responses for areas of improvement. You’ll know when you overuse certain words, for instance, or if you need to spend more time talking about a given subject.
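
Google hasn’t detailed how the analysis works, but the overused-word check can be pictured as a simple frequency heuristic. The sketch below is a guess for illustration only; the overused_words function, its stopword list, and the 10 per cent threshold are invented here rather than taken from the tool.

```python
# Hypothetical sketch of an overused-word detector, in the spirit of what
# Interview Warmup reports. Thresholds and stopwords are invented.
from collections import Counter
import re

def overused_words(answer: str, threshold: float = 0.1) -> list[str]:
    """Return non-trivial words that make up more than `threshold` of an answer."""
    words = re.findall(r"[a-z']+", answer.lower())
    stopwords = {"the", "a", "an", "and", "to", "of", "i", "in", "it", "that"}
    counts = Counter(w for w in words if w not in stopwords)
    return [w for w, n in counts.items() if n / len(words) > threshold]

print(overused_words(
    "Basically I led the team, and basically we shipped basically on time."
))  # -> ['basically']
```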

Interview Warmup is aimed at Google Career Certificates users hoping to land work, and most of its role-specific questions reflect this. There are general interview questions, though, and Google plans to expand the tool to help more candidates. The feature is currently only available in the US.

AI has increasingly been used in recruitment. To date, though, it has mainly served companies during their selection process, not the potential new hires. This isn’t going to level the playing field, but it might help you brush up on your interview skills.

An international team that includes University of Manchester scientists has demonstrated for the first time that nerve signals are exchanged between clogged arteries and the brain.

The discovery of the previously unknown electrical circuit is a breakthrough in our understanding of atherosclerosis, a potentially deadly disease where plaques form on the innermost layer of arteries.

The study, conducted in mice, found that new nerve bundles form on the outer layer of diseased artery segments, allowing the brain to detect where the damage is and communicate with it.

In the not-too-distant future, many of us may routinely use 3D headsets to interact in the metaverse with virtual iterations of companies, friends, and life-like company assistants. These may include Lily from AT&T, Flo from Progressive, Jake from State Farm, and the Swami from CarShield. We’ll also be interacting with new friends like Nestlé’s Cookie Coach, Ruth, the World Health Organization’s Digital Health worker Florence, and many others.

Creating digital characters for virtual reality apps and ecommerce is a fast-rising new segment of IT. San Francisco-based Soul Machines, a company that is rooted in both the animation and artificial intelligence (AI) sectors, is jumping at the opportunity to create animated digital avatars to bolster interactions in the metaverse. Customers are much more likely to buy something when a familiar face — digital or human — is involved.

Investors, understandably, are hot on the idea. This week, the 6-year-old company revealed an infusion of series B financing ($70 million) led by new investor SoftBank Vision Fund 2, bringing the company’s total funding to $135 million to date.

The human brain is often described in the language of tipping points: It toes a careful line between high and low activity, between dense and sparse networks, between order and disorder. Now, by analyzing firing patterns from a record number of neurons, researchers have uncovered yet another tipping point — this time, in the neural code, the mathematical relationship between incoming sensory information and the brain’s neural representation of that information. Their findings, published in Nature in June, suggest that the brain strikes a balance between encoding as much information as possible and responding flexibly to noise, which allows it to prioritize the most significant features of a stimulus rather than endlessly cataloging smaller details. The way it accomplishes this feat could offer fresh insights into how artificial intelligence systems might work, too.

A balancing act is not what the scientists initially set out to find. Their work began with a simpler question: Does the visual cortex represent various stimuli with many different response patterns, or does it use similar patterns over and over again? Researchers refer to the neural activity in the latter scenario as low-dimensional: The neural code associated with it would have a very limited vocabulary, but it would also be resilient to small perturbations in sensory inputs. Imagine a one-dimensional code in which a stimulus is simply represented as either good or bad. The amount of firing by individual neurons might vary with the input, but the neurons as a population would be highly correlated, their firing patterns always either increasing or decreasing together in the same overall arrangement. Even if some neurons misfired, a stimulus would most likely still get correctly labeled.

At the other extreme, high-dimensional neural activity is far less correlated. Since information can be graphed or distributed across many dimensions, not just along a few axes like “good-bad,” the system can encode far more detail about a stimulus. The trade-off is that there’s less redundancy in such a system — you can’t deduce the overall state from any individual value — which makes it easier for the system to get thrown off.
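
One way to make this contrast concrete is to simulate both kinds of population code and measure how many principal components each actually occupies. The Python sketch below is purely illustrative and is not the study’s analysis; the simulated populations and the “participation ratio” dimensionality measure are generic tools assumed for the example.

```python
# Illustrative comparison of a correlated (low-dimensional) and an
# uncorrelated (high-dimensional) population code. Not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 100, 500

# Low-dimensional code: every neuron tracks one shared "good-bad" signal,
# plus a little private noise, so the population rises and falls together.
signal = rng.normal(size=n_stimuli)
gains = rng.normal(size=n_neurons)
low_dim = np.outer(gains, signal) + 0.1 * rng.normal(size=(n_neurons, n_stimuli))

# High-dimensional code: each neuron responds independently to the stimuli.
high_dim = rng.normal(size=(n_neurons, n_stimuli))

def participation_ratio(responses: np.ndarray) -> float:
    """Effective dimensionality: (sum of covariance eigenvalues)^2 / sum of squares."""
    eig = np.linalg.eigvalsh(np.cov(responses))
    return eig.sum() ** 2 / (eig ** 2).sum()

print(f"correlated code:   ~{participation_ratio(low_dim):.1f} effective dimensions")
print(f"uncorrelated code: ~{participation_ratio(high_dim):.1f} effective dimensions")
```

In the correlated case nearly all of the variance lies along one axis, so a few misfiring neurons barely move the readout; in the uncorrelated case the variance spreads across many axes, buying detail at the cost of redundancy.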

In this video we look at the top 20 awesome robot animals that will blow your mind (channel: https://youtube.com/channel/UCK9neHq3aT9dqN_ossHhLew).