
German scientists present a method by which AI could be trained much more efficiently.

Over the last couple of years, research institutions have been working on new concepts for how computers can process data in the future. One of these concepts is known as neuromorphic computing. Neuromorphic computing models may sound similar to artificial neural networks, but they have little to do with them.

Compared to traditional artificial intelligence algorithms, which require significant amounts of data to be trained on before they can be effective, neuromorphic computing systems can learn and adapt on the fly.

A machine-learning algorithm demonstrated the capability to process data that exceeds a computer’s available memory by identifying a massive data set’s key features and dividing them into manageable batches that don’t choke computer hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory’s Summit, the world’s fifth-fastest supercomputer.

Equally efficient on laptops and supercomputers, the highly scalable algorithm solves hardware bottlenecks that prevent processing information from data-rich applications in social media networks, national security science and earthquake research, to name just a few.

“We developed an ‘out-of-memory’ implementation of the non-negative matrix factorization method that allows you to factorize data sets larger than previously possible on given hardware,” said Ismael Boureima, a computational physicist at Los Alamos National Laboratory. Boureima is first author of the paper in The Journal of Supercomputing on the record-breaking algorithm.
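To make the idea concrete, here is a minimal sketch, assuming a dense, tall matrix streamed in row blocks, of how an out-of-memory non-negative matrix factorization can run the standard multiplicative updates while holding only one data block and two small rank-sized accumulators in memory at a time. This is an illustration of the general technique, not the Los Alamos implementation; the `load_batch` callback and every parameter below are hypothetical.

```python
import numpy as np

def out_of_core_nmf(load_batch, n_batches, n_cols, rank, n_iter=50, eps=1e-9, seed=0):
    """Factorize a tall non-negative matrix X (streamed in row blocks) as X ~ W @ H."""
    rng = np.random.default_rng(seed)
    H = rng.random((rank, n_cols)) + eps
    W = [None] * n_batches  # row blocks of W, partitioned the same way as X

    for _ in range(n_iter):
        HHt = H @ H.T                        # small (rank x rank) matrix shared by every block
        WtX = np.zeros((rank, n_cols))       # accumulators for the H update
        WtW = np.zeros((rank, rank))
        for i in range(n_batches):
            Xb = load_batch(i)               # only one row block of X is in memory at a time
            if W[i] is None:
                W[i] = rng.random((Xb.shape[0], rank)) + eps
            # multiplicative update for the rows of W that belong to this block
            W[i] *= (Xb @ H.T) / (W[i] @ HHt + eps)
            # accumulate the rank-sized statistics needed for the H update
            WtX += W[i].T @ Xb
            WtW += W[i].T @ W[i]
        # multiplicative update for H, built from the small accumulators only
        H *= WtX / (WtW @ H + eps)
    return W, H

# Hypothetical usage: factorize a matrix stored as 100 row-block files on disk.
# W, H = out_of_core_nmf(lambda i: np.load(f"block_{i}.npy"),
#                        n_batches=100, n_cols=5000, rank=10)
```

Because the large factor W is partitioned exactly like the data, each block can be updated (and, in a true out-of-memory setting, written back to disk) independently, which is what lets this style of factorization scale past the physical memory of a laptop or a single supercomputer node.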

This talk is about how you can use wireless signals and fuse them with vision and other sensing modalities through AI algorithms to give humans and robots X-ray vision to see objects hidden inside boxes or behind other objects.

Tara Boroushaki is a Ph.D. student at MIT. Her research focuses on fusing radio frequency (RF) sensing with vision through artificial intelligence. She designs algorithms and builds systems that leverage such fusion to enable capabilities that were not feasible before in applications spanning augmented reality, virtual reality, robotics, smart homes, and smart manufacturing. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

The discourse around Artificial Intelligence (AI) often hinges on the paradoxical duality of its nature. While it mirrors human cognition to an extraordinary extent, its capacity to transcend our limitations is both awe-inspiring and unsettling. At the heart of this growing field are the algorithms themselves and the people who control these powerful computational tools.

This brings us to TIME’s recent endeavor—the TIME100 Most Influential People in AI. This meticulously curated list casts light on the people pushing AI’s boundaries and shaping its ethical framework. So when TIME magazine drops a list…


Source: TIME

Generative AI and other easy-to-use software tools can help employees with no coding background become adept programmers, or what the authors call citizen developers. By simply describing what they want in a prompt, citizen developers can collaborate with these tools to build entire applications—a process that until recently would have required advanced programming fluency.

Information technology has historically involved builders (IT professionals) and users (all other employees), with users being relatively powerless operators of the technology. That way of working often means IT professionals struggle to meet demand in a timely fashion, and communication problems arise among technical experts, business leaders, and application users.

Citizen development raises a critical question about the ultimate fate of IT organizations. How will they facilitate and safeguard the process without placing too many obstacles in its path? To reject its benefits is impractical, but to manage it carelessly may be worse. In this article the authors share a road map for successfully introducing citizen development to your employees.

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already…


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove, California, for a small, expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.

A team of scientists from Ames National Laboratory has developed a new machine learning model for discovering critical-element-free permanent magnet materials. The model predicts the Curie temperature of new material combinations. It is an important first step in using artificial intelligence to predict new permanent magnet materials. This model adds to the team’s recently developed capability for discovering thermodynamically stable rare earth materials. The work is published in Chemistry of Materials.

High-performance magnets are essential for technologies such as electric vehicles and magnetic refrigeration. These magnets contain critical materials such as cobalt and rare earth elements like neodymium and dysprosium. These materials are in high demand but have limited availability. This situation is motivating researchers to find ways to design new magnetic materials with reduced amounts of critical materials.

Machine learning (ML) is a form of artificial intelligence. It is driven by computer algorithms that use data and trial and error to continually improve their predictions. The team used experimental data on Curie temperatures and theoretical modeling to train the ML algorithm. The Curie temperature is the maximum temperature at which a material maintains its magnetism.
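For intuition only, the sketch below shows the general shape of such a workflow rather than the Ames team's model: a regression model is trained on composition-derived features against known Curie temperatures and then used to rank unseen candidate compositions. The features, data, and model choice here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder training set: each row describes a candidate magnet composition with
# numeric features (element fractions, atomic radii, valence electron counts, ...);
# the target is the Curie temperature in kelvin. All values are synthetic.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = 300.0 + 600.0 * X[:, 0] + 50.0 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out MAE (K):", mean_absolute_error(y_test, model.predict(X_test)))

# Screening step: rank new, unmeasured candidate compositions by predicted
# Curie temperature and pass the most promising ones on to stability checks.
candidates = rng.random((10, 8))
best_first = np.argsort(model.predict(candidates))[::-1]
```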

Using a standardized assessment, researchers in the UK compared the performance of a commercially available artificial intelligence (AI) algorithm with human readers of screening mammograms. Results of their findings were published in Radiology.

Mammographic screening does not detect every cancer. False-positive interpretations can result in women without cancer undergoing unnecessary imaging and biopsy. To improve the sensitivity and specificity of screening mammography, one solution is to have two readers interpret every mammogram.

According to the researchers, double reading increases cancer detection rates by 6 to 15% and keeps recall rates low. However, this strategy is labor-intensive and difficult to achieve during reader shortages.
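As background on the metrics being compared, the toy snippet below shows how sensitivity, specificity, and recall rate are computed from a reader's decisions (human or AI) against ground truth. The arrays are invented values, not data from the study.

```python
import numpy as np

# 1 = recall / cancer present, 0 = no recall / no cancer (made-up toy values)
truth = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
reads = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 0])

tp = np.sum((reads == 1) & (truth == 1))   # cancers correctly recalled
fp = np.sum((reads == 1) & (truth == 0))   # healthy women recalled unnecessarily
fn = np.sum((reads == 0) & (truth == 1))   # cancers missed
tn = np.sum((reads == 0) & (truth == 0))   # healthy women correctly not recalled

sensitivity = tp / (tp + fn)               # share of cancers detected
specificity = tn / (tn + fp)               # share of healthy cases not recalled
recall_rate = (tp + fp) / truth.size       # overall fraction of women called back

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  "
      f"recall rate={recall_rate:.2f}")
```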

Please see my new FORBES article:

Thanks, and please follow me on LinkedIn for more tech and cybersecurity insights.


More remarkably, the advent of artificial intelligence (AI) and machine learning-based computers in the next century may alter how we relate to ourselves.

The digital ecosystem’s networked computer components, which are made possible by machine learning and artificial intelligence, will have a significant impact on practically every sector of the economy. These integrated AI and computing capabilities could pave the way for new frontiers in fields as diverse as genetic engineering, augmented reality, robotics, renewable energy, big data, and more.

Three important verticals in this digital transformation are already being impacted by AI: 1) Healthcare, 2) Cybersecurity, and 3) Communications.