The company will pause hiring soon, and expects that up to 30% of non-customer-facing roles could be replaced by automation over the next five years.
Category: robotics/AI
An AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person’s thoughts to be read non-invasively for the first time.
The decoder could reconstruct speech with uncanny accuracy while people listened to a story – or even silently imagined one – using only fMRI scan data. Previous language decoding systems have required surgical implants, and the latest advance raises the prospect of new ways to restore speech in patients struggling to communicate due to a stroke or motor neurone disease.
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences.”
What if someone could listen to your thoughts? Sure, that’s highly improbable, you might say. Sounds very much like fiction. And we could have agreed with you, until yesterday.
Researchers at The University of Texas at Austin have used artificial intelligence and fMRI scans to decode a person’s brain activity into a stream of text while that person listens to a story or imagines telling one.
The AI model employs radiomics, a technique for extracting critical information from medical images that is not always visible to the naked eye.
An artificial intelligence (AI) model that accurately identifies cancer has been developed by a team of scientists, doctors, and researchers from Imperial College London, the Institute of Cancer Research in London, and the Royal Marsden NHS Foundation Trust.
Reportedly, this new AI model uses radiomics, a technique that extracts critical information from medical images that may not be visible to the naked eye. This, in turn, aids in determining whether the abnormal growths detected on CT scans are cancerous.
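The core idea of radiomics is to turn a region of a scan into numeric features before any classification happens. The toy function below is a minimal sketch of that step, computing a handful of first-order intensity statistics from a region of interest; real pipelines (and, presumably, the Imperial/ICR/Royal Marsden model) extract hundreds of shape, intensity, and texture features, and nothing here reflects their actual code or feature set.

```python
import numpy as np

def first_order_features(roi: np.ndarray) -> dict:
    """Compute a few first-order radiomic features from a CT region of interest.

    Illustrative only: full radiomics toolkits (e.g. the open-source
    pyradiomics package) compute hundreds of standardized features.
    """
    flat = roi.astype(float).ravel()
    hist, _ = np.histogram(flat, bins=32, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return {
        "mean": float(flat.mean()),
        "std": float(flat.std()),
        # Skewness of the intensity distribution (epsilon guards division by zero)
        "skewness": float(((flat - flat.mean()) ** 3).mean() / (flat.std() ** 3 + 1e-12)),
        # Histogram entropy: a simple heterogeneity measure
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Toy comparison: a uniform nodule vs. a heterogeneous one
homogeneous = np.full((8, 8), 100.0)
heterogeneous = np.arange(64, dtype=float).reshape(8, 8)
print(first_order_features(homogeneous)["entropy"] <
      first_order_features(heterogeneous)["entropy"])  # heterogeneous has higher entropy
```

Features like these would then be fed to a downstream model that scores each abnormal growth for malignancy.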
Applying machine learning models, a type of artificial intelligence (AI), to data collected passively from wearable devices can identify a patient’s degree of resilience and well-being, according to investigators at the Icahn School of Medicine at Mount Sinai in New York.
The findings, reported in the May 2 issue of JAMIA Open, support wearable devices, such as the Apple Watch, as a way to monitor and assess psychological states remotely without requiring the completion of mental health questionnaires.
The paper, titled “A machine learning approach to determine resilience utilizing wearable device data: analysis of an observational cohort,” points out that resilience, an individual’s ability to overcome difficulty, is an important stress mitigator that reduces morbidity and improves chronic disease management.
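The pipeline the study describes, passively collected wearable signals in, a predicted psychological state out, can be sketched in miniature with a hand-rolled logistic regression. Everything below is synthetic and illustrative: the three features (resting heart rate, heart-rate variability, sleep duration) and their distributions are assumptions for the demo, not the study’s actual data, feature set, or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for passively collected wearable features:
# columns = [resting heart rate, heart-rate variability, hours of sleep]
n = 200
resilient = np.column_stack([
    rng.normal(62, 5, n),    # lower resting heart rate
    rng.normal(55, 8, n),    # higher heart-rate variability
    rng.normal(7.5, 0.7, n),
])
low_resilience = np.column_stack([
    rng.normal(74, 5, n),
    rng.normal(38, 8, n),
    rng.normal(6.0, 0.7, n),
])
X = np.vstack([resilient, low_resilience])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = high resilience

# Standardize features, then fit logistic regression by gradient descent
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On this cleanly separated toy data the classifier scores well; real wearable streams are far noisier, and the study presumably validated its predictions against established resilience questionnaires rather than synthetic labels.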
That’s when major clean energy projects are also due to come online, including the country’s largest offshore wind farm, which comes at a price of $9.8 billion. Once built off the Virginia coast, this project could save the state’s customers as much as $6 billion during its first 10 years in operation.
Focusing on efficiency now will help avoid overbuilding renewable generation and allow such large-scale projects to make great strides toward a greener grid when they finally come online.
While making energy efficiency improvements isn’t a new idea, AI is enabling real-time data analysis and energy intelligence that can maximize those efforts in a variety of ways, chipping away at carbon emissions now.
New research from the University of California, Berkeley, shows that artificial intelligence (AI) systems can process signals in a way that is remarkably similar to how the brain interprets speech, a finding scientists say might help explain the black box of how AI systems operate.
Using a system of electrodes placed on participants’ heads, scientists with the Berkeley Speech and Computation Lab measured brain waves as participants listened to a single syllable, “bah.” They then compared that brain activity to the signals produced by an AI system trained to learn English.
“The shapes are remarkably similar,” said Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author on the study published recently in the journal Scientific Reports. “That tells you similar things get encoded, that processing is similar.”
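One simple way to put a number on “the shapes are remarkably similar” is a Pearson correlation between the two waveforms. The snippet below is a synthetic illustration, not the study’s analysis: both “signals” are noisy copies of the same made-up response curve, standing in for the measured brain wave and the AI system’s internal signal.

```python
import numpy as np

# A made-up underlying response curve to the syllable "bah"
t = np.linspace(0, 1, 500)
underlying = (np.exp(-((t - 0.3) ** 2) / 0.01)
              - 0.4 * np.exp(-((t - 0.55) ** 2) / 0.02))

rng = np.random.default_rng(1)
brain_wave = underlying + rng.normal(0, 0.05, t.size)          # "measured" signal
ai_signal = 0.8 * underlying + rng.normal(0, 0.05, t.size)     # scaled, noisy copy

# Pearson correlation is amplitude-invariant, so it captures shape similarity
r = np.corrcoef(brain_wave, ai_signal)[0, 1]
print(f"shape similarity (Pearson r): {r:.2f}")
```

Because Pearson correlation ignores overall amplitude, two signals can score as highly similar even when one is a scaled-down version of the other, which is the sense of “similar shapes” that matters here.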
He thinks about Robert Oppenheimer and the Manhattan Project, which led to the atomic bomb, to Hiroshima and Nagasaki, and to the current state of mutually assured destruction (MAD). It started with a science experiment to split the atom, and soon the genie was out of the bottle.
I think of the arrival of generalized AI like ChatGPT as being equivalent to the revolution brought on by the invention of movable type and the printing press. Would the Reformation in Europe have happened without it? Would Europe’s rise to world dominance in the 18th and 19th centuries have resulted? The printing press genie uncorked led to a generalized knowledge revolution with both good and bad consequences.
A future AI genie, uncorked with no guidance from us, could answer the question I asked at the beginning of this posting by seeing humanity as the greatest threat to life on the planet, and act accordingly if we don’t gain control over it.
Joshua Browder, the CEO of robo-lawyer startup DoNotPay, says that he handed over his entire financial life to OpenAI’s GPT-4 large language model in an attempt to save money.
“I decided to outsource my entire personal financial life to GPT-4 (via the DoNotPay chat we are building),” Browder tweeted. “I gave AutoGPT access to my bank, financial statements, credit report, and email.”
According to the CEO, the AI was able to save him $217.85 by automating tasks that would’ve cost him precious time.
Augmenting the artificial intelligence GPT-4 with extra chemistry knowledge made it much better at planning chemistry experiments, though it refused to plan the synthesis of heroin or sarin gas.
By Alex Wilkins