
How to extract quantitative data from your image using the AI pixel classifier

See how you can extract quantitative data from your image using the AI pixel classifier.

More about Mica: https://fcld.ly/mica-yt-tut.

▬ Transcript ▬▬▬▬▬▬▬▬▬▬▬▬
Mica radically simplifies your workflows, but your workflow is not complete until you have extracted quantitative information from that image.
Let me show you how easy it is to extract quantitative information with Mica.
First record your multicolor image.
In this case, we want to count the nuclei that we see in that image.
Go to Learn and load the image of interest.
Now you have two classes, the background and the nuclei that we want to quantify.
First of all, draw a background region.
Secondly, draw the object of interest.
Once you are done with that, let Mica first create a preview based on the annotations you have created.
If you are happy with that, then do the full training.
Now you have trained an AI model that uses pixel classification in order to segment your nuclei.
Save that model and you can also use it for all the experiments you do in the future.
Simply go to Results, select the image to quantify, switch to Analysis and you will have access to all the different models that you have trained.
Select the one that you are interested in and press Start.
As an output, you can display the data as histograms, boxplots or even scatterplots.
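Mica runs the training and analysis entirely through its own interface, but the underlying technique, a pixel classifier trained on sparse background and nucleus annotations, can be sketched with open-source Python tools. The snippet below is a minimal illustration using scikit-image and scikit-learn, not Mica's actual API; the file name, annotation coordinates and feature choices are assumptions for the sake of the example.

```python
# Minimal pixel-classification sketch (not Mica's API): train a random forest on
# sparsely annotated pixels, apply it to the whole image, and count nuclei.
import numpy as np
from skimage import filters, io, measure
from sklearn.ensemble import RandomForestClassifier

# Assumed single-channel nuclear-stain image; the file name is a placeholder.
image = io.imread("nuclei_dapi.tif").astype(float)

# Per-pixel features: raw intensity plus Gaussian smoothing at two scales.
features = np.stack(
    [image, filters.gaussian(image, sigma=2), filters.gaussian(image, sigma=8)],
    axis=-1,
)

# Sparse annotations: 0 = unlabeled, 1 = background, 2 = nucleus.
# In Mica these correspond to the regions drawn in Learn; the coordinates
# below are made-up scribbles for illustration only.
labels = np.zeros(image.shape, dtype=np.uint8)
labels[0:40, 0:40] = 1        # background scribble
labels[100:115, 100:115] = 2  # nucleus scribble

annotated = labels > 0
model = RandomForestClassifier(n_estimators=50, n_jobs=-1)
model.fit(features[annotated], labels[annotated])  # the "full training" step

# Segment every pixel, then label connected nucleus regions and count them.
prediction = model.predict(features.reshape(-1, features.shape[-1])).reshape(image.shape)
nuclei = measure.label(prediction == 2)
areas = [region.area for region in measure.regionprops(nuclei)]
print(f"Nuclei counted: {nuclei.max()}, mean area: {np.mean(areas):.1f} px")
```

The per-object measurements at the end are the kind of values that Mica's Results view turns into histograms, boxplots or scatterplots.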

▬ Mica ▬▬▬▬▬▬▬▬▬▬▬▬
Mica, the world’s first wholly integrated imaging Microhub, brings widefield imaging, confocal imaging and AI-supported analysis seamlessly together, all united in one sample-protecting incubator environment.

All researchers, regardless of expertise, can now work in one easy-to-use digital imaging platform, moving confidently from setup to beautifully visualized results, allowing true access for all.

At last, users can capture 4x more data with 100% correlation in only one exposure, whether using widefield or confocal imaging. They can freely switch from widefield to confocal mode with a simple click, without ever moving the sample, and explore unexpected paths with no constraints.

MRI scans and AI to decode what we think? This study has answers

Is mind reading possible? It is an age-old question with multiple unproven answers. Those who study psychology often claim they can understand what another person is thinking, since they study mental processes, brain functions, and behaviour, but even they cannot be 100 per cent accurate.

A study published in the journal Nature Neuroscience attempts to address this question: scientists say they have come up with a way to decode a stream of words in the brain using MRI scans and artificial intelligence.

The study, titled “Semantic reconstruction of continuous language from non-invasive brain recordings”, noted that the system won’t replicate each word; instead, it reconstructs the gist of what a person hears or imagines.

Hacking with ChatGPT: Five A.I. Based Attacks for Offensive Security

ChatGPT may represent one of the biggest disruptions in modern history with its powerful A.I.-based chatbot. Within weeks of ChatGPT’s release, security researchers discovered several cases of people using ChatGPT for everything from malware development to exploit coding. In this video, take a look at the five ways attackers are utilizing ChatGPT for wrongdoing.

0:14 Intro to ChatGPT / Natural Language Processing (NLP) & GPT
1:28 Using ChatGPT for Vulnerability Discovery
1:56 Vulnerability Prompts to Utilize
3:10 Writing Exploits
3:35 Exploit Prompts to Utilize
4:33 Malware Development
5:00 Malware Examples (Stealers, Command & Control)
5:42 Polymorphic Malware Development Using ChatGPT
6:21 A.I.-Based Phishing Using NLP (Natural Language Processing)
7:20 ChatGPT Advantages over Traditional Phishing Messages
7:41 Custom Messages Using GPT-3
8:04 Using Macros and LOLBINs
9:33 GPT-3 vs GPT-4 (Coming Soon)
9:56 Cybersecurity Considerations and Predictions
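The vulnerability-discovery chapters above (1:28 and 1:56) amount to pasting source code into a prompt and asking the model to flag weaknesses. Below is a rough sketch of that workflow using the OpenAI Python client; the model name, prompt wording and example snippet are assumptions, and anything the model reports needs manual verification.

```python
# Rough sketch of prompt-driven vulnerability review via the OpenAI API.
# Model name and prompt wording are assumptions; output must be verified by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = """
char buf[16];
strcpy(buf, user_input);  /* classic unchecked copy */
"""

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a code security reviewer."},
        {
            "role": "user",
            "content": "List any vulnerabilities in this C snippet and how to fix them:\n"
            + snippet,
        },
    ],
)

print(response.choices[0].message.content)
```

The same pattern with a different prompt underlies the exploit and phishing chapters, which is exactly why the output needs careful human review.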

Deep learning pioneer Geoffrey Hinton warns against rapid AI development as he quits Google

Geoffrey Hinton, who won the ‘Nobel Prize of computing’ for his trailblazing work on neural networks, is now free to speak about the risks of AI.

The man often dubbed the ‘Godfather of AI’ says part of him now regrets his life’s work.

One of the pioneers in the development of the deep learning models that have become the basis for tools like ChatGPT and Bard has quit Google to warn against the dangers of scaling AI technology too fast.


In an interview with the New York Times on Monday, Geoffrey Hinton — a 2018 recipient of the Turing Award — said he had quit his job at Google to speak freely about the risks of AI.

Drake’s AI clone is here — and Drake might not be able to stop him

Major record labels are going after AI-generated songs, arguing copyright infringement. Legal experts say the approach is far from straightforward.

A certain type of music has been inescapable on TikTok in recent weeks: clips of famous musicians covering other artists’ songs, with combinations that read like someone hit the randomizer button. There’s Drake covering singer-songwriter Colbie Caillat, Michael Jackson covering The Weeknd, and Pop Smoke covering Ice Spice’s “In Ha Mood.” The artists don’t actually perform the songs — they’re all generated using artificial intelligence tools. And the resulting videos have racked up tens of millions of views.


Can human Drake stop his AI voice clone?

OpenAI wants to shut a student’s repository over GPT-4 access

Xtekky, a European computer science student, finds out what happens when you write a program that runs queries through these freely accessible sites and returns the answers to you.

In other news, a big company with a non-profit parent and a for-profit subsidiary goes after an independent creator who is trying to make things easier for everyday folks. Even though the project technically doesn’t violate its terms and conditions? Perhaps we’ll leave that for the experts to decide.

OpenAI, the creator of ChatGPT (a scourge on teachers and a succor to everyone else), opted against free access when it released its newer GPT-4 model.

New AI decoder can translate brainwaves into text

This is an important step toward developing brain–computer interfaces that can decode continuous language through non-invasive recordings of thoughts.

Results were published in a recent study in the peer-reviewed journal Nature Neuroscience, led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

Tang and Huth’s semantic decoder isn’t implanted in the brain directly; instead, it uses fMRI scans to measure brain activity. For the study, participants listened to podcasts while the AI attempted to transcribe their thoughts into text.