
AI doesn’t just want to eat your lunch — sometimes it wants to deliver it, too.

Driverless tech provider Motional and Uber Eats plan to add a dash of autonomy to food delivery next year in Santa Monica, serving up meal kits from select restaurants. The news was first reported by AiThority.

The plan is for food deliveries to come via Motional’s all-electric Hyundai IONIQ 5-based robotaxis. Motional said this will be the first time its vehicles are used to deliver food. It’s not clear whether humans or robots will bring the meal kits to customers’ doorsteps.

Roche and its Genentech subsidiary have committed up to $12 billion to Recursion in return for using its Recursion Operating System (OS) to advance therapies in 40 programs that include “key areas” of neuroscience and an undisclosed oncology indication.

Recursion OS applies machine learning and high-content screening methods in what the companies said would be a “transformational” model for tech-enabled target and drug discovery.

The integrated, multi-faceted OS is designed to generate, analyze, and glean insights from large-scale proprietary biological and chemical datasets, in this case extensive single-cell perturbation screening data from Roche and Genentech. It does so by combining wet-lab and dry-lab biology at scale to phenomically capture chemical and genetic alterations in neuroscience-related cell types and select cancer cell lines.
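The internals of the Recursion OS are proprietary, but the general phenomics idea described above can be illustrated with a toy sketch: reduce each chemical or genetic perturbation to a numeric phenotypic embedding (for example, averaged image features from a high-content screen), then compare embeddings to relate compounds to disease-associated genetic alterations. Everything below, including the vectors and the `cosine_similarity` helper, is a hypothetical illustration, not Recursion's actual method.

```python
# Illustrative sketch only: compare hypothetical phenotypic embeddings of
# perturbations to find compounds that mimic (or oppose) a gene knockout.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two phenotypic embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings (e.g., averaged image features per perturbation).
gene_knockout = np.array([0.9, -0.2, 0.4, 0.1])   # phenotype of a disease-linked knockout
compound_a    = np.array([0.8, -0.1, 0.5, 0.0])   # candidate drug 1
compound_b    = np.array([-0.7, 0.3, -0.4, 0.2])  # candidate drug 2

for name, profile in [("compound_a", compound_a), ("compound_b", compound_b)]:
    print(name, round(cosine_similarity(gene_knockout, profile), 3))
# A high positive score suggests the compound phenotypically mimics the knockout;
# a strongly negative score suggests it may reverse that phenotype.
```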

Referring to Tesla’s Autopilot and Full Self-Driving features, Elon Musk claimed in an interview with the Financial Times that no other CEO cares as much about safety as he does.

In the year that has seen his private wealth balloon like never before, Musk has also been showered with titles, beginning with the richest person in the world and, more recently, Time magazine’s Person of the Year. The Time accolade is probably one of many titles Musk will receive as he embarks on his mission to send humanity to the Moon with his space company, SpaceX.

Before we get there, though, there are some issues with his other company, Tesla, that need addressing. The company’s short history is peppered with incidents that have risked human lives as it pushes the boundaries of autonomous driving. The company offers features called Autopilot and Full Self-Driving (FSD), both still in beta, which have been involved in accidents. In August this year, the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) launched an investigation into the Autopilot feature covering 750,000 Tesla vehicles.

Speaking to the FT, Musk said that he hasn’t misled Tesla buyers about Autopilot or FSD. “Read what it says when you order a Tesla. Read what it says when you turn it on. It’s very, very clear,” said Musk during the interview. He also cited the high safety ratings Tesla cars have achieved and pointed to SpaceX’s work with NASA sending humans into space to underline his focus on safety, going a step further to say that he doesn’t see any other CEO on the planet who cares as much about safety as he does.

Controversial facial recognition company Clearview AI, which has amassed a database of some 10 billion images by scraping selfies off the Internet so it can sell an identity-matching service to law enforcement, has been hit with another order to delete people’s data.

France’s privacy watchdog said today that Clearview has breached Europe’s General Data Protection Regulation (GDPR).

In an announcement of the breach finding, the CNIL also gives Clearview formal notice to stop its “unlawful processing” and says it must delete user data within two months.

Wasn’t it science-fiction writer, futurist, inventor, undersea explorer, and television series host Arthur C. Clarke who said, “Any sufficiently advanced technology is indistinguishable from magic”? Yes, in fact, it was! And the same is true today! Technology is magic! And the great thing about living in the future is we get to reap the benefits of all this technological advancement. Who doesn’t want laser-precise internet? Why not take a vacation in outer space? Where is the driverless car taking us? These are the questions we face when we take a look at the future — up close… 15 Emerging Technologies That Will Change Our World.


DeepMind has just publicly released its GPT-3 competitor, an AI called Gopher, which is said to outcompete GPT-3 by almost 10 times at a much better efficiency level. DeepMind said that larger models are more likely to generate toxic responses when provided with toxic prompts, but they can also more accurately classify toxicity. Model scale does not significantly improve results for areas like logical reasoning and common-sense tasks. The research team found that Gopher’s capabilities exceed existing language models on a number of key tasks, including the Massive Multitask Language Understanding benchmark, where Gopher demonstrates a significant advancement towards human expert performance over prior work.

TIMESTAMPS:
00:00 DeepMind’s Road to Human Intelligence
02:09 The Dangers of DeepMind’s AI
04:46 How Gopher AI Works
06:17 How This AI Could Be Used
08:17 Last Words

#ai #deepmind #futurism

That was a key takeaway from a conversation between economist Daniel Kahneman and MIT professor of brain and cognitive science Josh Tenenbaum at the Conference on Neural Information Processing Systems (NeurIPS) recently. The pair spoke during the virtual event about the shortcomings of humans and what we can learn from them while building A.I.

Kahneman, a Nobel Prize winner in economic sciences and the author of Thinking, Fast and Slow, noted an instance in which humans use judgment heuristics—shortcuts, essentially—to answer questions they don’t know the answer to. In the example, people are given a small amount of information about a student: She’s about to graduate, and she was reading fluently when she was 4 years old. From that, they’re asked to estimate her grade point average.

Using this information, many people will estimate the student’s GPA to be 3.7 or 3.8. To arrive there, Kahneman explained, they assign her a percentile on the intelligence scale—usually very high, given what they know about her reading ability at a young age. Then they assign her a GPA in what they estimate to be the corresponding percentile.
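The shortcut Kahneman describes can be made concrete with a small sketch. The distribution parameters, the assumption of a normally distributed GPA, and the reading of “fluent at age 4” as roughly the 95th percentile are all illustrative assumptions, not figures from the talk; the point is only the mechanics of matching one percentile to another (requires scipy).

```python
# A minimal sketch of the "matching by percentile" shortcut:
# (1) place the student at a percentile on one scale (reading precocity),
# (2) read off the GPA at that same percentile of an assumed GPA distribution,
#     ignoring regression toward the mean.
from scipy.stats import norm

def gpa_by_percentile_matching(reading_percentile: float,
                               gpa_mean: float = 3.1,
                               gpa_sd: float = 0.4) -> float:
    """Return the GPA sitting at the given percentile of an assumed GPA distribution."""
    return norm.ppf(reading_percentile, loc=gpa_mean, scale=gpa_sd)

# Reading fluently at age 4 might be judged to be ~95th percentile.
print(round(gpa_by_percentile_matching(0.95), 2))  # ~3.76, in line with the 3.7-3.8 anecdote
```

A statistically sounder estimate would shrink the implied z-score toward the mean by the (usually modest) correlation between early reading and college GPA, which is exactly the regression-to-the-mean step the heuristic skips.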

Most often, we recognize deep learning as the magic behind self-driving cars and facial recognition, but what about its ability to safeguard the quality of the materials that make up these advanced devices? Professor of Materials Science and Engineering Elizabeth Holm and Ph.D. student Bo Lei have adopted computer vision methods for microstructural images that not only require a fraction of the data deep learning typically relies on but can also save materials researchers a great deal of time and money.

Quality control in materials processing requires the analysis and classification of complex material microstructures. For instance, the properties of some high-strength steels depend on the amount of lath-type bainite in the material. However, the process of identifying bainite in microstructural images is time-consuming and expensive, as researchers must first use two types of microscopy to take a closer look and then rely on their own expertise to identify bainitic regions. “It’s not like identifying a person crossing the street when you’re driving a car,” Holm explained. “It’s very difficult for humans to categorize, so we will benefit a lot from integrating a machine learning approach.”

Their approach is very similar to that of the wider computer-vision community that drives facial recognition. The model is trained on existing material microstructure images to evaluate new images and interpret their classification. While companies like Facebook and Google train their models on millions or billions of images, materials scientists rarely have access to even ten thousand images. Therefore, it was vital that Holm and Lei use a “data-frugal method” and train their model using only 30–50 microscopy images. “It’s like learning how to read,” Holm explained. “Once you’ve learned the alphabet, you can apply that knowledge to any book. We are able to be data-frugal in part because these systems have already been trained on a large database of natural images.”
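The paper’s exact pipeline isn’t reproduced here, but a minimal sketch of this kind of data-frugal transfer learning might look like the following, assuming PyTorch and torchvision (0.13 or later) with an ImageNet-pretrained ResNet-18 as a frozen feature extractor; the two class names and the `loader` of labeled micrographs are placeholders.

```python
# Sketch: reuse a CNN pretrained on natural images as a fixed feature extractor,
# then fit only a small classification head on a few dozen microstructure images.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                     # e.g., "bainitic" vs. "non-bainitic" regions (hypothetical labels)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():    # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # small trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    """`loader` would yield (image_batch, label_batch) from the ~30-50 labeled micrographs."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone is what makes a few dozen images workable: only the small linear head is fit to the microscopy data, while the general-purpose visual features come from the large natural-image pretraining the quote refers to.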

Stanford’s made a lot of progress over the years with its gecko-inspired robotic hand. In May, a version of the “gecko gripper” even found its way onto the International Space Station to test its ability to perform tasks like collecting debris and fixing satellites.

In a paper published today in Science Robotics, researchers at the university are demonstrating a far more terrestrial application for the tech: picking delicate objects. It’s something that’s long been a challenge for rigid robot hands, leading to a wide range of different solutions, including soft robotic grippers.

The team is showing off FarmHand, a four-fingered gripper inspired by both the dexterity of the human hand and the unique gripping capabilities of geckos. Of the latter, Stanford notes that the adhesive surface “creates a strong hold via microscopic flaps” by exploiting the Van der Waals force, a weak intermolecular force that results from subtle differences in the positions of electrons on the outsides of molecules.
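To get a feel for why many weak microscopic contacts add up to a usable grip, here is a back-of-the-envelope estimate using the textbook sphere-plane Van der Waals approximation F = A·R/(6·D²). The Hamaker constant, flap-tip radius, separation, and contact count below are assumed order-of-magnitude values for illustration, not measurements of the Stanford gripper.

```python
# Rough illustration: per-contact Van der Waals force (sphere-plane approximation)
# and the total hold when many microscopic flap contacts engage at once.
HAMAKER_A = 1e-19     # J, typical Hamaker constant for common materials (assumed)
TIP_RADIUS = 1e-6     # m, assumed effective radius of one microscopic flap contact
SEPARATION = 3e-10    # m, near-contact separation, roughly atomic spacing (assumed)

force_per_contact = HAMAKER_A * TIP_RADIUS / (6 * SEPARATION ** 2)  # newtons
num_contacts = 1e6    # hypothetical number of flap contacts engaged simultaneously

print(f"per contact: {force_per_contact:.2e} N")           # ~1.9e-07 N each
print(f"total hold:  {force_per_contact * num_contacts:.2f} N")  # ~0.19 N in this toy estimate
```

The individual forces are tiny, which is why the adhesive only works when the flaps conform closely enough to the surface for millions of contacts to act in parallel.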