Researchers used AI and MRI scans to decode thoughts — and they were mostly accurate

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences.”

What if someone could listen to your thoughts? Sure, that’s highly improbable, you might say. Sounds very much like fiction. And we could have agreed with you, until yesterday.

Researchers at The University of Texas at Austin have used artificial intelligence and MRI scans to decode a person’s brain activity, recorded while they listen to a story or imagine telling one, into a continuous stream of text.

New AI-based tool shows promise in accurately identifying lung cancer

The AI model employs radiomics, a technique for extracting critical information from medical images that is not always visible to the naked eye.

An artificial intelligence (AI) model that accurately identifies cancer has been developed by a team of scientists, doctors, and researchers from Imperial College London, the Institute of Cancer Research in London, and the Royal Marsden NHS Foundation Trust.

Reportedly, this new AI model uses radiomics, a technique that extracts critical information from medical images that may not be visible to the naked eye. This, in turn, aids in determining whether the abnormal growths detected on CT scans are cancerous.
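To make “extracting information not visible to the naked eye” concrete, here is a minimal sketch of the kind of first-order features a radiomics pipeline computes over a region of interest in a scan. This is an illustration only: the actual feature set and model used by the Imperial/ICR team is not described in this article, and the intensity values below are toy numbers.

```python
import math
from collections import Counter

def first_order_features(intensities):
    """Compute simple first-order radiomic features from a list of
    pixel intensities inside a region of interest (ROI)."""
    n = len(intensities)
    mean = sum(intensities) / n
    variance = sum((x - mean) ** 2 for x in intensities) / n
    # Shannon entropy over the intensity histogram: higher values
    # indicate more heterogeneous tissue, one of the subtle cues
    # radiomics models can pick up that the eye cannot.
    counts = Counter(intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}

roi = [10, 12, 12, 55, 60, 58, 11, 57]  # toy CT intensities for one nodule
print(first_order_features(roi))
```

Real radiomics toolkits compute dozens of such features (shape, texture, wavelet-filtered statistics) and feed them to a downstream classifier that scores a nodule’s likelihood of malignancy.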

Wearable devices may be able to capture well-being through effortless data collection using AI

Applying machine learning models, a type of artificial intelligence (AI), to data collected passively from wearable devices can identify a patient’s degree of resilience and well-being, according to investigators at the Icahn School of Medicine at Mount Sinai in New York.

The findings, reported in the May 2 issue of JAMIA Open, support the use of wearable devices, such as the Apple Watch, as a way to monitor and assess psychological states remotely without requiring the completion of mental health questionnaires.

The paper, titled “A machine learning approach to determine resilience utilizing wearable device data: analysis of an observational cohort,” points out that resilience, or an individual’s ability to overcome difficulty, is an important stress mitigator, reduces morbidity, and improves chronic disease management.
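The pipeline described above — passively collected wearable data summarized into features and scored by a model — can be sketched as follows. The feature names and weights here are entirely illustrative assumptions; the study’s actual features and machine learning models are not detailed in this article.

```python
import statistics

def wellbeing_features(heart_rates, step_counts):
    """Summarize passively collected wearable data into features
    (hypothetical feature set, not the study's actual one)."""
    return {
        "hr_mean": statistics.mean(heart_rates),
        "hr_variability": statistics.pstdev(heart_rates),  # crude HRV proxy
        "daily_steps": sum(step_counts) / len(step_counts),
    }

def resilience_score(features):
    """Toy linear score: lower resting heart rate, higher variability,
    and more activity nudge the score up. Weights are illustrative,
    standing in for a trained model's learned coefficients."""
    return (-0.02 * features["hr_mean"]
            + 0.05 * features["hr_variability"]
            + 0.0001 * features["daily_steps"])

features = wellbeing_features([60, 62, 58, 61], [8000, 10000])
print(features["hr_mean"], round(resilience_score(features), 3))
```

The point of the design is that none of the inputs require any action from the patient: heart rate and step counts stream in continuously, replacing periodic questionnaires with an always-on signal.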

A Clean Energy Future Starts With An Efficient Grid

That’s when major clean energy projects are also due to come online, including the country’s largest offshore wind farm, which comes at a price of $9.8 billion. Once built off the Virginia coast, this project could save the state’s customers as much as $6 billion during its first 10 years in operation.

Focusing on efficiency now will help avoid overbuilding renewable generation and allow such large-scale projects to make great strides toward a greener grid when they finally come online.

While making energy efficiency improvements isn’t a new idea, AI is enabling real-time data analysis and energy intelligence that can maximize those efforts in a variety of ways, chipping away at carbon emissions now.

‘Raw’ data show AI signals mirror how the brain listens and learns

New research from the University of California, Berkeley, shows that artificial intelligence (AI) systems can process signals in a way that is remarkably similar to how the brain interprets speech, a finding scientists say might help explain the black box of how AI systems operate.

Using a system of electrodes placed on participants’ heads, scientists with the Berkeley Speech and Computation Lab measured participants’ brain activity as they listened to a single syllable: “bah.” They then compared that brain activity to the signals produced by an AI system trained to learn English.

“The shapes are remarkably similar,” said Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author on the study published recently in the journal Scientific Reports. “That tells you similar things get encoded, that processing is similar.”
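One simple way to quantify the claim that two signal “shapes” are similar is to correlate the waveforms sample by sample. The sketch below uses Pearson correlation on toy data; it is an assumption for illustration, not the comparison method the Berkeley team actually used.

```python
def pearson_r(a, b):
    """Pearson correlation between two equally sampled signals.
    Values near 1 mean the waveform shapes closely mirror each other."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Toy stand-ins for an averaged scalp response and an AI-internal signal
brain = [0.0, 0.4, 1.0, 0.6, 0.1]
model = [0.1, 0.5, 0.9, 0.5, 0.0]
print(round(pearson_r(brain, model), 3))  # close to 1: similar shapes
```

A high correlation between the brain’s response and a model’s internal signal is what lets researchers argue that “similar things get encoded.”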

Ask a Generalized AI What The Greatest Threat Is to Our Planet and You Likely Won’t Like the Answer

He thinks about Robert Oppenheimer and the Manhattan Project that led to the atomic bomb, Hiroshima, and Nagasaki, and the current state of mutually assured destruction (MAD). It started with a science experiment to split the atom and soon the genie was released from the bottle.

I think of the arrival of generalized AI like ChatGPT as being equivalent to the revolution brought on by the invention of movable type and the printing press. Would the Reformation in Europe have happened without it? Would Europe’s rise to world dominance in the 18th and 19th centuries have resulted? The printing press genie uncorked led to a generalized knowledge revolution with both good and bad consequences.

The future uncorked AI genie with no guidance from us could, in answering the question I asked at the beginning of this posting, see humanity as the greatest threat to life on the planet and act accordingly if we don’t gain control over it.

Man Gives All His Financial Info to AI and Lets It Make Decisions

Joshua Browder, the CEO of robo-lawyer startup DoNotPay, says that he handed over his entire financial life to OpenAI’s GPT-4 large language model in an attempt to save money.

“I decided to outsource my entire personal financial life to GPT-4 (via the DoNotPay chat we are building),” Browder tweeted. “I gave AutoGPT access to my bank, financial statements, credit report, and email.”

According to the CEO, the AI was able to save him $217.85 by automating tasks that would’ve cost him precious time.

How to extract quantitative data from your image using the AI pixel classifier

See how you can extract quantitative data from your image using the AI pixel classifier.

More about Mica: https://fcld.ly/mica-yt-tut.

▬ Transcript ▬▬▬▬▬▬▬▬▬▬▬▬
Mica radically simplifies your workflows, but a workflow is not complete until you have extracted quantitative information from your image.
Let me show you how easy it is to extract quantitative information with Mica.
First record your multicolor image.
In this case, we want to count the nuclei that we see in that image.
Go to Learn and load the image of interest.
Now you have two classes, the background and the nuclei that we want to quantify.
First of all, draw a background region.
Secondly, draw the object of interest.
Once you are done with that, let Mica first create a preview of the annotation that you have created.
If you are happy with that, then do the full training.
Now you have trained an AI model that uses pixel classification in order to segment your nuclei.
Save that model and you can use that model also for all the experiments that you are doing in the future.
Simply go to Results, select the image to quantify, switch to Analysis and you will have access to all the different models that you have trained.
Select the one that you are interested in and Start.
As an output, you can display the data as histograms, boxplots or even scatterplots.

▬ Mica ▬▬▬▬▬▬▬▬▬▬▬▬
Mica, the world’s first wholly integrated imaging Microhub, brings widefield imaging, confocal imaging, and AI-supported analysis seamlessly together, all united in one sample-protecting incubator environment.

All researchers, regardless of expertise, can now work in one easy-to-use digital imaging platform, moving confidently from setup to beautifully visualized results, allowing true access for all.

At last, users can capture 4× more data with 100% correlation in only one exposure, whether using widefield or confocal imaging. They can freely switch from widefield to confocal mode with a simple click, without ever moving the sample, and explore unexpected paths with no constraints.