Robot Babe.
Rewriting the rules of GANs: copy and paste features contextually.
The GAN architecture has been the standard for generating content with AI, but can it actually invent new content beyond what is available in the training dataset? Or is it just imitating the training data and recombining its features in new ways?
In recent years, researchers have been developing machine learning algorithms for an increasingly wide range of purposes. This includes algorithms that can be applied in healthcare settings, for instance to help clinicians diagnose specific diseases or neuropsychiatric disorders, or to monitor patients' health over time.
Researchers at Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital have recently carried out a study investigating the possibility of using deep reinforcement learning to control the levels of unconsciousness of patients who require anesthesia for a medical procedure. Their paper, set to be published in the proceedings of the 2020 International Conference on Artificial Intelligence in Medicine, was voted the best paper presented at the conference.
“Our lab has made significant progress in understanding how anesthetic medications affect neural activity and now has a multidisciplinary team studying how to accurately determine anesthetic doses from neural recordings,” Gabriel Schamberg, one of the researchers who carried out the study, told TechXplore. “In our recent study, we trained a neural network using the cross-entropy method, by repeatedly letting it run on simulated patients and encouraging actions that led to good outcomes.”
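The cross-entropy method Schamberg mentions is a simple, derivative-free way to search for a good dosing policy: sample candidate policies, score them on simulated patients, keep the best-scoring ones, and refit the sampling distribution around them. The sketch below illustrates that loop on a deliberately toy simulated patient; the patient model, reward, and linear policy are placeholders, not the team's actual neural network setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_patient(policy_weights, steps=50):
    """Toy one-compartment 'patient': drug effect decays, and the policy picks an
    infusion rate from the current tracking error. Reward penalizes deviation from
    a target unconsciousness level. Purely illustrative, not the study's model."""
    effect, target, reward = 0.0, 0.8, 0.0
    w, b = policy_weights
    for _ in range(steps):
        dose = max(0.0, w * (target - effect) + b)   # linear policy on the tracking error
        effect = 0.9 * effect + 0.1 * dose           # first-order drug dynamics
        reward -= (effect - target) ** 2             # encourage staying near the target
    return reward

# Cross-entropy method: sample policies, keep the elite fraction, refit the sampling distribution.
mean, std = np.zeros(2), np.ones(2)
for iteration in range(30):
    samples = rng.normal(mean, std, size=(64, 2))
    returns = np.array([simulate_patient(s) for s in samples])
    elite = samples[np.argsort(returns)[-8:]]        # top-8 policies by return
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("learned policy weights:", mean)
```

In the study itself the policy was a neural network and the simulated patients were pharmacological models; the loop structure, however, is the same: run, score, keep the winners, repeat.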
Of all the AI models in the world, OpenAI’s GPT-3 has most captured the public’s imagination. It can spew poems, short stories, and songs with little prompting, and has been shown to fool people into thinking its outputs were written by a human. But its eloquence is more of a parlor trick, not to be confused with real intelligence.
Nonetheless, researchers believe that the techniques used to create GPT-3 could contain the secret to more advanced AI. GPT-3 was trained on an enormous amount of text data. What if the same methods were trained on both text and images?
Now new research from the Allen Institute for Artificial Intelligence (AI2) has taken this idea to the next level. The researchers have developed a new text-and-image model, also known as a visual-language model, that can generate images given a caption. The images look unsettling and freakish, nothing like the hyperrealistic deepfakes generated by GANs, but they might demonstrate a promising new direction for achieving more generalizable intelligence, and perhaps smarter robots as well.
This giant 60-foot ‘Gundam’ robot has made its first test moves — look at the size of it! 😲 🤖.
Financial crime, as a wider category of cybercrime, continues to be one of the most potent online threats, covering nefarious activities as diverse as fraud, money laundering and terrorist financing. Today, one of the startups building data intelligence solutions to help combat it is announcing a fundraise to continue fueling its growth.
Ripjar, a U.K. company founded by five data scientists who previously worked together in British intelligence at the Government Communications Headquarters (GCHQ, the U.K.’s equivalent of the NSA), has raised $36.8 million (£28 million) in a Series B, money that it plans to use to continue expanding the scope of its AI platform — which it calls Labyrinth — and scaling the business.
Labyrinth, as Ripjar describes it, works with both structured and unstructured data, using natural language processing and an API-based platform that lets organizations incorporate any data source they would like to analyse and monitor for activity. It automatically checks these, in real time, against other data sources such as sanctions lists, politically exposed persons (PEPs) lists and transaction alerts.
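Ripjar has not published Labyrinth's internals, but the general screening pattern it describes, taking an incoming record and checking the names in it against sanctions and PEP lists in real time, can be sketched roughly as follows. Every name, field and threshold in this example is an illustrative assumption, and real systems use far more sophisticated fuzzy, phonetic and transliteration-aware matching.

```python
from difflib import SequenceMatcher

# Illustrative watchlist; real systems pull sanctions and PEP lists from official sources.
WATCHLIST = {"ivan petrov", "acme shell holdings ltd", "jane q launderer"}

def name_similarity(a: str, b: str) -> float:
    """Crude fuzzy match between two names; production matchers are far richer."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def screen_transaction(record: dict, threshold: float = 0.85) -> list:
    """Check the counterparty name in a transaction record against the watchlist
    and return an alert for every sufficiently close match."""
    alerts = []
    for listed in WATCHLIST:
        score = name_similarity(record["counterparty"], listed)
        if score >= threshold:
            alerts.append({"transaction_id": record["id"], "matched": listed, "score": round(score, 2)})
    return alerts

print(screen_transaction({"id": "tx-1029", "counterparty": "Ivan Petrov", "amount": 9_800}))
```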
People’s brainwaves have been converted into speech using electrodes on the brain. The method could one day help people who have lost the ability to speak.
By Chelsea Whyte.
Electrodes on the brain have been used to translate brainwaves into words spoken by a computer – which could be useful in the future to help people who have lost the ability to speak.
When you speak, your brain sends signals from the motor cortex to the muscles in your jaw, lips and larynx to coordinate their movement and produce a sound.
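Brain-to-speech systems of this kind typically work in stages: decode the motor-cortex signals into the intended movements of the jaw, lips and larynx, then turn those articulatory trajectories into audio. The snippet below sketches only the first, decoding stage, on random placeholder data and with plain ridge regression; published systems record actual ECoG and use recurrent neural networks, so treat this purely as an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: 2,000 time steps of 64-channel "electrode" features and
# 6 articulatory trajectories (jaw, lips, larynx, ...). Entirely synthetic.
neural = rng.normal(size=(2000, 64))
true_map = rng.normal(size=(64, 6))
articulation = neural @ true_map + 0.1 * rng.normal(size=(2000, 6))

# Stage 1 of the pipeline: learn a decoder from neural features to articulatory movement.
ridge = 1.0
w = np.linalg.solve(neural.T @ neural + ridge * np.eye(64), neural.T @ articulation)

predicted = neural @ w
corr = [np.corrcoef(predicted[:, k], articulation[:, k])[0, 1] for k in range(6)]
print("per-trajectory decoding correlation:", np.round(corr, 3))
# Stage 2 (not shown) would map these trajectories to acoustic features and synthesize audio.
```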
Microsoft announced that it has “exclusively licensed” OpenAI’s sophisticated GPT-3 language model that can generate disturbingly human-like text in applications ranging from commercial bots to creative writing. After investing $1 billion in the San Francisco startup last year to become OpenAI’s exclusive cloud partner, Microsoft will get access to the language tech for itself and its Azure cloud customers.
OpenAI released GPT-3 just a couple of months ago to a limited group of developers, but its capabilities have already generated massive amounts of buzz. It’s the largest language model ever trained, and is capable of not just mundane tasks like auto-generating business correspondence, but also creative or technical chores like poetry, memes and computer code.
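For developers in the beta, using GPT-3 amounted to sending a text prompt and receiving a continuation. The sketch below assumes the OpenAI Python client as it existed around the GPT-3 beta (the Completion endpoint and the "davinci" engine); the prompt and parameters are illustrative, and the client library has since changed.

```python
import openai  # pip install openai; requires an API key from the limited beta

openai.api_key = "sk-..."  # placeholder key

# A single prompt-completion call: the model continues the text it is given,
# which is how business emails, poems, or code snippets were generated.
response = openai.Completion.create(
    engine="davinci",            # the largest GPT-3 engine available at launch
    prompt="Write a two-line thank-you note to a colleague who reviewed my report:\n",
    max_tokens=60,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```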
pQRNN, a trimmed-down extension of Google AI’s projection attention neural network PRADO, performs comparably to BERT on text classification tasks while remaining small enough for on-device use.
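The key idea behind these projection models is to replace a large learned embedding table with a cheap hashing-based projection that maps each token to a fixed ternary feature vector, which a small encoder then reads. The snippet below sketches such a projection step; the hashing scheme and dimensions are made up for illustration, and the published pQRNN pairs its own projection operator with QRNN layers rather than anything shown here.

```python
import hashlib
import numpy as np

def project_token(token: str, dim: int = 128) -> np.ndarray:
    """Map a token to a fixed ternary feature vector in {-1, 0, +1} by hashing,
    so the model needs no embedding table. Illustrative scheme only."""
    digest = hashlib.sha256(token.encode("utf-8")).digest()          # 32 bytes = 256 bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    pairs = bits[: 2 * dim].reshape(dim, 2)                          # 2 bits per output dimension
    return pairs[:, 0].astype(np.int8) - pairs[:, 1].astype(np.int8) # values in {-1, 0, +1}

sentence = "on device text classification without embeddings".split()
features = np.stack([project_token(t) for t in sentence])  # shape: (num_tokens, dim)
print(features.shape)  # these per-token vectors would feed a small QRNN-style encoder
```

Because the projection is computed on the fly from the token string, the model carries no vocabulary, which is much of what makes it small enough for phones.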
Microsoft has made several quirky and useful apps that can help with everyday problems, and its newest app aims to help you with math.
Microsoft Math Solver — available on both iOS and Android — can solve a variety of math problems, including quadratic equations, calculus, and statistics. The app can also plot graphs of equations to deepen your understanding of the subject.
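To see the kind of problem the app targets, here is the same class of quadratic worked symbolically with SymPy; this simply reproduces the math, and says nothing about how Microsoft's app works internally.

```python
import sympy as sp

x = sp.symbols("x")
equation = sp.Eq(x**2 - 5*x + 6, 0)

# Solve the quadratic symbolically, the sort of result a math-solver app displays step by step.
roots = sp.solve(equation, x)
print(roots)  # [2, 3]

# The app's graphing feature corresponds to plotting the curve:
# sp.plot(x**2 - 5*x + 6, (x, -1, 6))   # uncomment to show the parabola
```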