
How to extract quantitative data from your image using the AI pixel classifier

See how you can extract quantitative data from your image using the AI pixel classifier.

More about Mica: https://fcld.ly/mica-yt-tut.

▬ Transcript ▬▬▬▬▬▬▬▬▬▬▬▬
Mica radically simplifies your workflows, but a workflow is not complete until you have extracted quantitative information from your image.
Let me show you how easy it is to extract quantitative information with Mica.
First record your multicolor image.
In this case, we want to count the nuclei that we see in that image.
Go to Learn and load the image of interest.
Now you have two classes, the background and the nuclei that we want to quantify.
First, draw a background region.
Second, draw the object of interest.
Once you are done, let Mica first create a preview based on the annotations you have made.
If you are happy with the preview, run the full training.
Now you have trained an AI model that uses pixel classification to segment your nuclei.
Save that model and you can reuse it in all your future experiments.
Simply go to Results, select the image to quantify, switch to Analysis and you will have access to all the different models that you have trained.
Select the one you are interested in and press Start.
As an output, you can display the data as histograms, boxplots or even scatterplots.
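Mica handles all of this through its interface, but the underlying idea of a trained pixel classifier can be sketched in a few lines. The example below is a minimal, hypothetical illustration (not Mica's actual implementation): a random forest is trained on sparse background/nucleus scribbles over per-pixel intensity features, applied to every pixel, and the resulting segmentation is used to count objects, mirroring the annotate → train → quantify steps above.

```python
# Illustrative sketch of AI pixel classification for nucleus counting.
# This is NOT Mica's internal code; it only demonstrates the concept
# using scikit-learn and SciPy on a synthetic image.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic multicolor-style channel: four bright "nuclei" on a dark background.
image = rng.normal(0.1, 0.02, (64, 64))
yy, xx = np.ogrid[:64, :64]
for cy, cx in [(16, 16), (16, 48), (48, 16), (48, 48)]:
    image += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20.0)

# Per-pixel features: raw intensity plus a Gaussian-smoothed copy.
features = np.stack([image, ndimage.gaussian_filter(image, sigma=2)], axis=-1)
X = features.reshape(-1, 2)

# Sparse annotations, analogous to the regions drawn in the tutorial:
# class 0 = background, class 1 = nucleus, -1 = unlabeled.
labels = np.full((64, 64), -1)
labels[0:4, 0:4] = 0        # a background scribble
labels[14:18, 14:18] = 1    # a scribble on one nucleus
annotated = labels.ravel() >= 0

# "Full training" on the annotated pixels only.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[annotated], labels.ravel()[annotated])

# Apply the trained model to every pixel, then count connected objects.
segmentation = clf.predict(X).reshape(64, 64)
_, count = ndimage.label(segmentation)
print(count)  # number of detected nuclei
```

Even though only one nucleus was annotated, the classifier generalizes to all four blobs because they share the same pixel statistics, which is exactly what makes sparse-annotation pixel classifiers so quick to train.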

▬ Mica ▬▬▬▬▬▬▬▬▬▬▬▬
Mica, the world’s first wholly integrated imaging Microhub, seamlessly brings together widefield imaging, confocal imaging and AI-supported analysis, all united in one sample-protecting incubator environment.

All researchers, regardless of expertise, can now work in one easy-to-use digital imaging platform, moving confidently from setup to beautifully visualized results, allowing true access for all.

At last, users can capture 4 x more data with 100% correlation in only one exposure, whether using widefield or confocal imaging. They can freely switch from widefield to confocal mode with just a simple click without ever moving the sample and explore unexpected paths with no constraints.
