
Tesla’s Director of Artificial Intelligence, Andrej Karpathy, says he believes ‘Tesla Bot’ is “on track to become the most powerful AI development platform.”

Since Tesla AI Day last year, CEO Elon Musk has been slowly pushing the idea that Tesla is becoming more of an AI/robotics company.

Musk has been boasting about the company’s AI talent, led by Director of Artificial Intelligence, Andrej Karpathy, and believes that the company is in the best position to make advancements in AI due to the real-world applications in its vehicles.

Researchers have developed a new technique, called MonoCon, that improves the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects, and how those objects relate to each other in space, using two-dimensional (2D) images. For example, the work would help the AI used in autonomous vehicles navigate in relation to other vehicles using the 2D images it receives from an onboard camera.

“We live in a 3D world, but when you take a picture, it records that world in a 2D image,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University.

“AI programs receive visual input from cameras. So if we want AI to interact with the world, we need to ensure that it is able to interpret what 2D images can tell it about 3D space. In this research, we are focused on one part of that challenge: how we can get AI to accurately recognize 3D objects—such as people or cars—in 2D images, and place those objects in space.”
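The core difficulty Wu describes can be sketched with a toy pinhole-camera projection. The intrinsics and points below are hypothetical and this is not code from the MonoCon paper; it only illustrates why depth is ambiguous in a single 2D image:

```python
# Minimal pinhole-camera sketch (illustrative, not from the MonoCon paper):
# a 3D point (X, Y, Z) in camera coordinates projects to a 2D pixel (u, v).
# Depth Z is lost in the projection, which is exactly what monocular
# 3D object detection must recover.

# Hypothetical intrinsics: focal length f and principal point (cx, cy).
f, cx, cy = 1000.0, 640.0, 360.0

def project(point_3d):
    """Project a 3D camera-space point onto the 2D image plane."""
    X, Y, Z = point_3d
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u, v

# Two points at different depths can land on the same pixel:
near = (1.0, 0.5, 10.0)
far = (2.0, 1.0, 20.0)   # same viewing ray, twice as far away
print(project(near))  # (740.0, 410.0)
print(project(far))   # (740.0, 410.0) -- depth ambiguity
```

Because infinitely many 3D points map to the same pixel, a monocular detector like MonoCon has to infer the missing depth from learned cues rather than read it off the image.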

You may not be able to teach an old dog new tricks, but Cornell researchers have found a way to train physical systems, ranging from computer speakers and lasers to simple electronic circuits, to perform machine-learning computations, such as identifying handwritten numbers and spoken vowel sounds.

The experiment is no mere stunt or parlor trick. By turning these physical systems into the same kind of neural networks that drive services like Google Translate and online searches, the researchers have demonstrated an early but viable alternative to conventional electronic processors—one with the potential to be orders of magnitude faster and more energy efficient than the power-gobbling chips in data centers and server farms that support many artificial-intelligence applications.

“Many different physical systems have enough complexity in them that they can perform a large range of computations,” said Peter McMahon, assistant professor of applied and engineering physics in the College of Engineering, who led the project. “The systems we performed our demonstrations with look nothing like each other, and they seem to [have] nothing to do with handwritten-digit recognition or vowel classification, and yet you can train them to do it.”

A few weeks ahead of the Beijing 2022 Winter Olympics, Chinese engineers have presented a six-legged skiing robot that expertly slaloms down a snowy white slope in Shenyang, China. The team of engineers said that the robot stands on a pair of skis with four of its legs and grips poles using its two other limbs.

Researchers have tested it on both beginner and intermediate slopes, where it proved able to stay upright and avoid obstacles. The robot was developed by engineers from Shanghai Jiao Tong University.

A robot has performed laparoscopic surgery on the soft tissue of a pig without the guiding hand of a human—a significant step in robotics toward fully automated surgery on humans. Designed by a team of Johns Hopkins University researchers, the Smart Tissue Autonomous Robot (STAR) is described today in Science Robotics.

“Our findings show that we can automate one of the most intricate and delicate tasks in surgery: the reconnection of two ends of an intestine. The STAR performed the procedure in four animals and it produced significantly better results than humans performing the same procedure,” said senior author Axel Krieger, an assistant professor of mechanical engineering at Johns Hopkins’ Whiting School of Engineering.

The robot excelled at intestinal anastomosis, a procedure that requires a high level of repetitive motion and precision. Connecting two ends of an intestine is arguably the most challenging step in gastrointestinal surgery, requiring a surgeon to suture with high accuracy and consistency. Even the slightest hand tremor or misplaced stitch can result in a leak that could have catastrophic complications for the patient.


The United States military has a long record of being at the forefront of humankind’s technological achievements. For example, it was the U.S. Navy in the 1940s, led by Admiral Rickover, who pioneered the use of nuclear power as a propulsion device, and that eventually led to nuclear power plants for civilian use. Today, the military again leads the charge into the future with their innovations in robotics and their many applications across the entire infrastructure of the organization. We will talk about MAARS, Robobee, DOGO, SAFFiR and Gladiator!



After four long years of planning, the world’s first 3D-printed steel bridge debuted in Amsterdam last month. If it stands up to the elements, the bridge could be a blueprint for fixing our own structurally deficient infrastructure in the U.S.—and we sorely need the help.

Dutch company MX3D built the nearly 40-foot-long bridge for pedestrians and cyclists to cross the city’s Oudezijds Achterburgwal canal. It relied on four robots, fitted with welding torches, to 3D-print the structure. To do it, the machines laid down 10,000 pounds of steel, heated to 2,732 degrees Fahrenheit, in an intricate layering process. The result? An award-winning design, pushing the boundaries of what steel can do.

China is trailblazing AI regulation, with the goal of being the AI leader by 2030. We look at its #AI ethics guidelines.



The European Union issued a preliminary draft of AI-related rules in April 2021, but nothing final has followed. In the United States, the notion of ethical AI has gained some traction, but there are no overarching regulations or universally accepted best practices.

Scanning for Memories

At the time there was almost no evidence of this from neuron studies. But in 2006, Ma, Pouget and their colleagues at the University of Rochester presented strong evidence that populations of simulated neurons could perform optimal Bayesian inference calculations. Further work by Ma and other researchers over the past dozen years has added confirmation from electrophysiology and neuroimaging that the theory applies to vision, using machine-learning programs called Bayesian decoders to analyze actual neural activity.

Neuroscientists have used decoders to predict what people are looking at from fMRI (functional magnetic resonance imaging) scans of their brains. The programs can be trained to find the links between a presented image and the pattern of blood flow and neural activity in the brain that results when people see it. Instead of making a single guess — that the subject is looking at an 85-degree angle, for instance — Bayesian decoders produce a probability distribution. The mean of the distribution represents the likeliest prediction of what the subject is looking at. The standard deviation, which describes the width of the distribution, is thought to reflect the subject’s uncertainty about the sight (is it 85 degrees or could it be 84 or 86?).
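A toy version of such a decoder can be sketched in a few lines. The orientation grid, noise level, flat prior, and Gaussian likelihood below are illustrative assumptions, not the actual fMRI pipeline; the point is only that the output is a full distribution whose mean gives the estimate and whose spread gives the uncertainty:

```python
import numpy as np

# Toy Bayesian decoder sketch (hypothetical numbers, not real fMRI data).
# Given a noisy "measurement" of an orientation, compute a posterior
# distribution over candidate orientations instead of a single point estimate.

angles = np.linspace(0, 180, 361)   # candidate orientations, degrees
true_angle = 85.0
noise_sd = 2.0                      # assumed measurement noise

rng = np.random.default_rng(0)
observation = true_angle + rng.normal(0, noise_sd)

# Flat prior; Gaussian likelihood centered on the observation.
log_likelihood = -0.5 * ((angles - observation) / noise_sd) ** 2
posterior = np.exp(log_likelihood - log_likelihood.max())
posterior /= posterior.sum()

# Mean = likeliest interpretation; standard deviation = decoded uncertainty.
mean = (angles * posterior).sum()
sd = np.sqrt(((angles - mean) ** 2 * posterior).sum())
print(f"decoded: {mean:.1f} deg +/- {sd:.1f} deg")
```

In this sketch the posterior mean lands near 85 degrees, and the standard deviation tracks the assumed noise, mirroring the "85 degrees, or maybe 84 or 86" uncertainty described above.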