

There has been much discussion recently about the need to bring more women into AI, focused primarily on developing AI systems that represent both men and women in order to reduce bias. Including women in the development of AI is widely accepted as a positive step, and the same logic extends beyond gender to ethnicity and nationality if we are to truly create anything without bias.

However, this discussion focuses narrowly on including women with technical skills, and we need to look beyond that. Developing AI systems calls for a whole host of skills: designing the user interface and user experience, user testing, product development, and the system testing and training required to successfully launch an AI solution.

Intel Israel announced that the project is the first of its kind to use AI to create “female intelligence.” The experts who worked on the project, led by data scientist and researcher Shira Guskin, analyzed thousands of insights from “veteran career women.” Once the initial advice was submitted by many women across the Israeli workforce, the researchers passed the data through three algorithm models: topic extraction, grouping, and summarization. This produced an algorithm that “processed the tips pool and extracted the key tips and guidelines.”


The AI said that women should fully invest in their careers, be confident, network, love, and trust their guts.

When we open our eyes, we immediately see our surroundings in great detail. How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision.

Scientists who study the brain have tried to replicate this phenomenon using models of vision, but so far, leading models only perform much simpler tasks such as picking out an object or a face against a cluttered background. Now, a team led by MIT cognitive scientists has produced a computer model that captures the human visual system’s ability to quickly generate a detailed scene description from an image, and offers some insight into how the brain achieves this.

“What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world,” says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).

To date, teaching a robot to perform a task has usually involved either direct coding, trial-and-error tests or handholding the machine. Soon, though, you might just have to perform that task like you would any other day. MIT scientists have developed a system, Planning with Uncertain Specifications (PUnS), that helps bots learn complicated tasks when they’d otherwise stumble, such as setting the dinner table. Instead of the usual method where the robot receives rewards for performing the right actions, PUnS has the bot hold “beliefs” over a variety of specifications and use a language (linear temporal logic) that lets it reason about what it has to do right now and in the future.
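To give a feel for what "reasoning in linear temporal logic" means, here is a minimal sketch of evaluating LTL-style operators over a finite action trace. The operators, the table-setting facts, and the example specification are illustrative assumptions, not the PUnS paper's actual specification language.

```python
# Minimal sketch of linear temporal logic (LTL) operators evaluated over a
# finite trace of world states. Illustrative only -- not the PUnS system.

def eventually(pred, trace):
    """F pred: pred holds at some point in the trace."""
    return any(pred(s) for s in trace)

def always(pred, trace):
    """G pred: pred holds at every point in the trace."""
    return all(pred(s) for s in trace)

def until(p, q, trace):
    """p U q: p holds at every step until q becomes true (and q does)."""
    for state in trace:
        if q(state):
            return True
        if not p(state):
            return False
    return False

# A trace of table-setting steps; each state is the set of facts true then.
trace = [
    {"tablecloth_on"},
    {"tablecloth_on", "plate_on"},
    {"tablecloth_on", "plate_on", "fork_on"},
]

# Example specification: the tablecloth stays on, a fork eventually appears,
# and the plate is placed no later than the fork.
spec_holds = (
    always(lambda s: "tablecloth_on" in s, trace)
    and eventually(lambda s: "fork_on" in s, trace)
    and until(lambda s: "fork_on" not in s, lambda s: "plate_on" in s, trace)
)
print(spec_holds)  # True for this trace
```

Holding "beliefs" over specifications then amounts to maintaining a probability distribution over several candidate formulas like this one and acting to satisfy the likely ones.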

Artificial intelligence (AI) has triggered many concerns and discussions in recent years, and through these discussions people are prompted to introspect on what it really means to be a human being. It provides us with plenty of food for thought regarding our science, society, family, work, etc., and all of this raises an important question: what will life be like with artificial intelligence around us?

Artificial intelligence (AI) and machine learning are increasingly becoming part of drug discovery and development, from identifying new compounds to structuring and designing clinical trials and targeting clinical trial populations.

A recent example came out of Linköping University in Sweden. The investigators utilized an artificial neural network to create maps of biological networks based on how different genes or proteins interact with each other. They leveraged a large database with information about the expression patterns of 20,000 genes in a large group of people. The AI was then taught to find patterns of gene expression.
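The article does not detail the Linköping group's neural-network method, but one common, simpler way to map a biological network from expression data is a co-expression network: correlate each gene's expression profile across samples and connect highly correlated genes. The toy data, threshold, and correlation-based approach below are all assumptions for illustration.

```python
# Toy sketch of a gene co-expression network: connect genes whose expression
# profiles are strongly correlated across samples. Illustrative only -- not
# the Linkoping group's actual neural-network method.
import numpy as np

rng = np.random.default_rng(0)

# Fake expression matrix: 6 genes x 10 samples. Genes 0-2 share a pattern.
base = rng.normal(size=10)
expr = np.vstack([
    base + 0.1 * rng.normal(size=10),
    base + 0.1 * rng.normal(size=10),
    base + 0.1 * rng.normal(size=10),
    rng.normal(size=10),
    rng.normal(size=10),
    rng.normal(size=10),
])

# Pairwise Pearson correlation between gene profiles.
corr = np.corrcoef(expr)

# Edge wherever |correlation| exceeds a threshold (diagonal excluded).
adj = (np.abs(corr) > 0.8) & ~np.eye(6, dtype=bool)
print(adj.astype(int))
```

A neural-network approach like the one described would learn such interaction structure from the 20,000-gene expression database rather than from a fixed correlation threshold, but the output, a gene-gene interaction map, is analogous.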

And in mid-February, a drug developed using AI began testing in human clinical trials. The molecule, DSP-1181, is currently in Phase I clinical trials for obsessive-compulsive disorder. The compound, a long-acting, potent serotonin 5-HT1A receptor agonist, was developed using AI as part of a collaboration between Japan’s Sumitomo Dainippon Pharma and the UK’s Exscientia. The AI-driven process took about 12 months, compared with a more typical five-year timeline.