
Get ready for a lot of math…!

Most of us have an intuitive understanding of some big needs in artificial intelligence and machine learning: making sure that systems converge well, that data is represented the right way, and that we understand what these tools are doing, that we can look under the hood.

A lot of us have already heard the term “curse of dimensionality,” but Tomaso Armando Poggio invokes this frightening trope with a good bit of mathematics attached… (Poggio is the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, a researcher at the McGovern Institute for Brain Research, and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).)
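
The mathematics in Poggio's work is well beyond a blog post, but a standard illustration of the curse of dimensionality (sketched here with assumed parameters, not anything taken from his paper) is distance concentration: in high dimensions, the nearest and farthest of a batch of random points become nearly equidistant, which quietly breaks a lot of intuitive geometric reasoning.

```python
# Illustrative sketch (not from Poggio's paper): the "curse of dimensionality"
# via distance concentration -- in high dimensions, the nearest and farthest
# points in a random sample become nearly equidistant from the origin.
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(dim, n_points=1000):
    """Ratio (max - min) / min of Euclidean distances from the origin
    to points drawn uniformly from the unit hypercube [0, 1]^dim."""
    points = rng.random((n_points, dim))
    dists = np.linalg.norm(points, axis=1)
    return (dists.max() - dists.min()) / dists.min()

for dim in (2, 10, 100, 1000):
    print(f"dim={dim:5d}  relative contrast={distance_contrast(dim):.3f}")
# The contrast shrinks as the dimension grows, which is one way the curse of
# dimensionality undermines naive nearest-neighbor reasoning.
```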

O.o!!!!!


In the last 28-day period (July 17 to August 13), over 1.4 million new COVID-19 cases and over 2,300 deaths were reported from the World Health Organization’s (WHO) six regions, an increase of 63% and a decrease of 56%, respectively, compared to the previous 28 days, noted the latest WHO report.

As of August 13, over 769 million confirmed cases and over 6.9 million deaths have been reported globally. While four WHO regions have reported decreases in the number of both cases and deaths, the Western Pacific Region has reported an increase in cases and a decrease in deaths.
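
For the arithmetic-curious, the period-over-period comparisons quoted above reduce to a simple percent-change calculation. A minimal sketch follows; the counts below are placeholders chosen only to land near the quoted percentages, not WHO data.

```python
# Minimal sketch of the period-over-period arithmetic behind figures like
# "an increase of 63% ... compared to the previous 28 days".
# The example counts below are placeholders, not WHO data.
def percent_change(current, previous):
    """Signed percent change from `previous` to `current`."""
    return 100.0 * (current - previous) / previous

prev_cases, curr_cases = 877_000, 1_430_000   # hypothetical 28-day case counts
prev_deaths, curr_deaths = 5_230, 2_300       # hypothetical 28-day death counts

print(f"cases:  {percent_change(curr_cases, prev_cases):+.0f}%")   # roughly +63%
print(f"deaths: {percent_change(curr_deaths, prev_deaths):+.0f}%") # roughly -56%
```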

WHO also stressed the rise in cases of the new COVID variant Eris, or EG.5, noting that as of August 17, Eris had been detected in 50 countries.

Google DeepMind researchers have finally found a way to make life coaching even worse: infuse it with generative AI.

According to internal documents obtained by The New York Times, Google and the Google-owned DeepMind AI lab are working with “generative AI to perform at least 21 different types of personal and professional tasks.” And among those tasks, apparently, is an effort to use generative AI to build a “life advice” tool. You know, because an inhuman AI model knows everything there is to know about navigating the complexities of mortal human existence.

As the NYT points out, news of the effort notably comes just months after AI safety experts at Google warned, back in December, that users of AI systems could suffer “diminished health and well-being” and a “loss of agency” as a result of taking AI-spun life advice. The Google chatbot Bard, meanwhile, is barred from providing legal, financial, or medical advice to its users.

Warren Buffett missed a trick when he passed on Tesla early on, Elon Musk said.

“He could’ve invested in Tesla when we were worth basically nothing and didn’t,” the SpaceX and Tesla CEO posted on X, the website formerly called Twitter, on Thursday.

He was responding to a post highlighting Buffett’s vast wealth and the enormous value of his Berkshire Hathaway conglomerate.

The price of bitcoin plunged about 10% hours after it was revealed that Elon Musk’s SpaceX sold the cryptocurrency.

The Wall Street Journal reported on Thursday that SpaceX, which first purchased bitcoin in 2021, wrote down the value of its bitcoin holdings by a total of $373 million in 2021 and 2022 and has sold the crypto.

The write-down coincides with a steep drop in bitcoin’s price, which crashed in late 2021, setting off a “crypto winter” that extended through most of 2022.

Scientists working in connectomics, a research field occupied with the reconstruction of neuronal networks in the brain, aim to completely map the millions or billions of neurons found in mammalian brains. In spite of impressive advances in electron microscopy, the key bottleneck for connectomics is the amount of human labor required for the data analysis. Researchers at the Max Planck Institute for Brain Research in Frankfurt, Germany, have now developed reconstruction software that allows researchers to fly through the brain tissue at unprecedented speed. Together with the startup company scalable minds, they created webKnossos, which turns researchers into brain pilots and yields roughly a 10-fold speedup for data analysis in connectomics.

Billions of nerve cells work in parallel inside our brains to achieve behaviours as impressive as hypothesizing, predicting, detecting, and thinking. These neurons form a highly complex network in which each nerve cell communicates with about one thousand others. Signals travel along ultrathin cables, called axons, which run from each neuron to its roughly one thousand “followers.”

Only thanks to recent developments in electron microscopy can researchers aim at mapping these networks in detail. The analysis of such image data, however, is still the key bottleneck in connectomics. Interestingly, human annotators still outperform even the best computer-based analysis methods today, so scientists have to combine human and machine analysis to make sense of the huge image datasets obtained from the electron microscopes.
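
As an illustration only (this is not webKnossos's actual data model), a manually traced neuron "skeleton" can be thought of as a graph of 3D node positions joined by edges, from which quantities such as the annotated path length follow directly:

```python
# Illustrative sketch only -- not webKnossos's actual data model. A manually
# traced neuron "skeleton" can be stored as a graph of 3D node positions plus
# edges; the annotated path length is then the sum of edge lengths.
import math

# Hypothetical skeleton: node id -> (x, y, z) position in nanometers
nodes = {
    0: (100.0, 200.0, 50.0),
    1: (140.0, 210.0, 52.0),
    2: (180.0, 230.0, 55.0),
    3: (220.0, 260.0, 60.0),
}
edges = [(0, 1), (1, 2), (2, 3)]   # consecutive tracing steps

def path_length(nodes, edges):
    """Total length of the traced skeleton, in the same units as the nodes."""
    return sum(math.dist(nodes[a], nodes[b]) for a, b in edges)

print(f"traced path length: {path_length(nodes, edges):.1f} nm")
```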

Seemingly countless self-help books and seminars tell you to tap into the right side of your brain to stimulate creativity. But forget the “right-brain” myth—a new study suggests it’s how well the two brain hemispheres communicate that sets highly creative people apart.

For the study, statisticians David Dunson of Duke University and Daniele Durante of the University of Padova analyzed the network of white matter connections among 68 separate brain regions in healthy college-age volunteers.

The brain’s white matter lies underneath the outer grey matter. It is composed of bundles of wires, or axons, which connect billions of neurons and carry electrical signals between them.
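
The study's own statistics are far more sophisticated, but a hedged sketch of the kind of quantity at stake, under our own simplifying assumptions (a binary 68 × 68 adjacency matrix, 34 regions per hemisphere, random data standing in for a real connectome), might look like this:

```python
# Hedged illustration, not Dunson & Durante's actual statistic: given a binary
# 68 x 68 adjacency matrix of white-matter connections (assumed here to be
# 34 regions per hemisphere), measure how much of the network crosses between
# the two hemispheres.
import numpy as np

rng = np.random.default_rng(1)
n_regions = 68
left = np.arange(34)            # indices of left-hemisphere regions
right = np.arange(34, 68)       # indices of right-hemisphere regions

# Hypothetical symmetric connectome for demonstration
A = rng.random((n_regions, n_regions)) < 0.1
A = np.triu(A, 1)
A = A | A.T

inter = A[np.ix_(left, right)].sum()   # connections crossing hemispheres
total = np.triu(A, 1).sum()            # all connections, each counted once
print(f"inter-hemispheric fraction: {inter / total:.2f}")
```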

Another concern was the dissipation of electrical power on the Enchilada Trap, which could generate significant heat, leading to increased outgassing from surfaces, a higher risk of electrical breakdown and elevated levels of electrical field noise. To address this issue, production specialists designed new microscopic features to reduce the capacitance of certain electrodes.
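
To see why capacitance matters for heating, here is a back-of-the-envelope model under our own assumptions (an RF-driven electrode treated as a capacitance in series with a small resistance, with purely illustrative numbers), not Sandia's published analysis:

```python
# Back-of-the-envelope model (our assumption, not Sandia's published analysis):
# an RF-driven trap electrode acts like a capacitance C in series with a small
# resistance R. The drive current amplitude is I = Omega * C * V, so the
# dissipated power P = 0.5 * I^2 * R scales with C^2 -- halving the capacitance
# cuts the heating by a factor of four.
import math

def rf_dissipation(C_farads, V_volts, f_hz, R_ohms):
    """Average power (watts) dissipated in the series resistance."""
    omega = 2 * math.pi * f_hz
    current_amplitude = omega * C_farads * V_volts
    return 0.5 * current_amplitude**2 * R_ohms

# Hypothetical numbers purely for illustration
before = rf_dissipation(C_farads=20e-12, V_volts=100.0, f_hz=50e6, R_ohms=1.0)
after  = rf_dissipation(C_farads=10e-12, V_volts=100.0, f_hz=50e6, R_ohms=1.0)
print(f"dissipation: {before*1e3:.1f} mW -> {after*1e3:.1f} mW")
```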

“Our team is always looking ahead,” said Sandia’s Zach Meinelt, the lead integrator on the project. “We collaborate with scientists and engineers to learn about the kind of technology, features and performance improvements they will need in the coming years. We then design and fabricate traps to meet those requirements and constantly seek ways to further improve.”

Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Recent advancements in deep learning have significantly impacted computational imaging, microscopy, and holography-related fields. These technologies have applications in diverse areas, such as biomedical imaging, sensing, diagnostics, and 3D displays. Deep learning models have demonstrated remarkable flexibility and effectiveness in tasks like image translation, enhancement, super-resolution, denoising, and virtual staining, and they have been successfully applied across various imaging modalities, including bright-field and fluorescence microscopy. Deep learning's integration is reshaping our understanding and capabilities in visualizing the intricate world at microscopic scales.

In computational imaging, prevailing techniques predominantly employ supervised learning models, necessitating substantial datasets with annotations or ground-truth experimental images. These models often rely on labeled training data acquired through various methods, such as classical algorithms or registered image pairs from different imaging modalities. However, these approaches have limitations, including the laborious acquisition, alignment, and preprocessing of training images and the potential introduction of inference bias. Despite efforts to address these challenges through unsupervised and self-supervised learning, the dependence on experimental measurements or sample labels persists. While some attempts have used labeled simulated data for training, accurately representing experimental sample distributions remains complex and requires prior knowledge of sample features and imaging setups.

To address these inherent issues, researchers from the UCLA Samueli School of Engineering introduced GedankenNet, a self-supervised learning framework that eliminates the need for labeled or experimental training data and any resemblance to real-world samples. By training on physics consistency and artificial random images, GedankenNet overcomes the challenges posed by existing methods. It establishes a new paradigm in hologram reconstruction, offering a promising solution to the limitations of the supervised learning approaches commonly utilized in various microscopy, holography, and computational imaging tasks.
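
As a rough sketch of the general idea described above, and emphatically not the UCLA team's implementation, the following assumes an angular-spectrum free-space propagator as the physics forward model, a toy CNN standing in for the reconstruction network, and made-up optical parameters; the loss only asks that the network's reconstruction, when re-propagated, reproduce the synthetic hologram generated from an artificial random image.

```python
# Minimal sketch of a physics-consistency training loop on synthetic random
# images -- not UCLA's GedankenNet implementation. The propagator, network
# architecture, and optical parameters below are all illustrative assumptions.
import torch
import torch.nn as nn

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a complex field by distance dz (free-space forward model)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pixel_size)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * dz)                       # transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

net = nn.Sequential(                                  # toy reconstruction network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

wavelength, pixel, dz = 0.5e-6, 1.0e-6, 300e-6        # illustrative optics parameters
for step in range(10):                                # tiny training loop
    obj = torch.rand(1, 1, 64, 64)                    # artificial random "object"
    holo = angular_spectrum_propagate(obj.squeeze(1).to(torch.complex64),
                                      dz, wavelength, pixel).abs()
    recon = net(holo.unsqueeze(1))                    # network's object estimate
    resim = angular_spectrum_propagate(recon.squeeze(1).to(torch.complex64),
                                       dz, wavelength, pixel).abs()
    loss = ((resim - holo) ** 2).mean()               # physics-consistency loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```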