Deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are designed to partly emulate the structure and functioning of biological neural networks. As a result, in addition to tackling various real-world computational problems, they could help neuroscientists and psychologists better understand the underpinnings of specific sensory or cognitive processes.
Researchers at Osnabrück University, Freie Universität Berlin and other institutes recently developed a new class of artificial neural networks (ANNs) that could mimic the human visual system better than CNNs and other existing deep learning algorithms. Their newly proposed, visual system-inspired computational techniques, dubbed all-topographic neural networks (All-TNNs), are introduced in a paper published in Nature Human Behaviour.
“Previously, the most powerful models for understanding how the brain processes visual information were derived from AI vision models,” Dr. Tim Kietzmann, senior author of the paper, told Tech Xplore.
A new algorithm opens the door for using artificial intelligence and machine learning to study the interactions that happen on the surface of materials.
Scientists and engineers study the atomic interactions that happen on the surface of materials to develop more energy efficient batteries, capacitors, and other devices. But accurately simulating these fundamental interactions requires immense computing power to fully capture the geometrical and chemical intricacies involved, and current methods are just scratching the surface.
“Currently it’s prohibitive and there’s no supercomputer in the world that can do an analysis like that,” says Siddharth Deshpande, an assistant professor in the University of Rochester’s Department of Chemical Engineering. “We need clever ways to manage that large data set, use intuition to understand the most important interactions on the surface, and apply data-driven methods to reduce the sample space.”
Text-based image generation models can now automatically create high-resolution, high-quality images from natural language descriptions alone. However, when a typical model such as Stable Diffusion is given the prompt “creative,” its ability to generate truly creative images remains limited.
KAIST researchers have developed a technology that can enhance the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary.
Professor Jaesik Choi’s research team at KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training. The work is published on the arXiv preprint server, and the code is available on GitHub.
Cybersecurity researchers have exposed the inner workings of an Android malware called AntiDot that has compromised over 3,775 devices as part of 273 unique campaigns.
“Operated by the financially motivated threat actor LARVA-398, AntiDot is actively sold as a Malware-as-a-Service (MaaS) on underground forums and has been linked to a wide range of mobile campaigns,” PRODAFT said in a report shared with The Hacker News.
AntiDot is advertised as a “three-in-one” solution with capabilities to record the device screen by abusing Android’s accessibility services, intercept SMS messages, and extract sensitive data from third-party applications.
Microsoft is investigating a known OneDrive issue that is causing searches to appear blank or return no results for some users, even for files they know they have already uploaded.
In a support document updated this week, the company shared that this bug impacts Windows, Android, iOS, and web users.
“Some OneDrive personal account users may notice that search results appear blank or don’t return files they know exist. While the files are still present and accessible, they don’t appear in search results,” the company explains.
🔥 Witness the AI Revolution in Biotechnology & Biology! In this video, learn how Artificial Intelligence is transforming the world of biotech — from AI-powered drug discovery and CRISPR gene editing to precision medicine and bioinformatics!
💡 Get a glimpse of future labs, real-world breakthroughs, and career trends that are reshaping life sciences as we know it.
🚀 Whether you’re a student, researcher, or biotech enthusiast — this is your gateway to the future!
👉 Subscribe to Biotecnika for powerful insights, expert guidance, and the latest in biotech innovation:
AI ML in Biology, Bioinformatics & Computational Biology Summer Training Program With Project Work + Paper Publication Assistance.
A team of neurologists and AI specialists at MIT’s Media Lab has led a study looking into the brain impacts of large language model (LLM) use among people who engage with them for study or work. They report evidence that the use of LLMs may lead to an erosion of critical thinking skills. In their study, posted on the arXiv preprint server, the researchers asked groups of volunteers to write essays while connected to EEG monitors.
Over the past few years, the use of LLMs such as ChatGPT has become commonplace. Some use them for fun, while others use them to help with school or work responsibilities, and the team at MIT wondered what sort of impact LLM use might have on the brain.
To find out, they recruited 54 volunteers and split them into three groups, each asked to write a 20-minute essay on the topic of philanthropy: one group was asked to use ChatGPT for help, the second was asked to use Google Search, and the third, “brain-only” group was given no tools or resources at all. The participants remained in these same groups for three writing sessions.
Does the use of computer models in physics change the way we see the universe? How far-reaching are the implications of computational irreducibility? Are observer limitations key to the way we conceive the laws of physics? In this episode we take on the difficult yet beautiful topic of modelling complex systems like nature and the universe computationally, and of how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on Complexity Theory in biology and nature, but today we take a more physics and computational slant. Another key element of this episode is Observer Theory, because we have to take into account the perceptual limitations of our species’ context and perspective if we want to understand why the laws of physics we have worked out from our environment are not, and cannot be, fixed and universal, but will always be perspective-bound, within a multitude of alternative branches of possible reality with alternative possible computational rules. We then connect this multi-computational approach to a reinterpretation of entropy and the second law of thermodynamics.

The fact that my guest has been building on these ideas for over 40 years, creating computer languages and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist, and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company, Wolfram Research. He has published many blog articles about his ideas and written many influential books, including “A New Kind of Science,” “A Project to Find the Fundamental Theory of Physics,” “Computer Modelling and Simulation of Dynamic Systems,” and, just out in 2023, “The Second Law,” about the mystery of entropy.
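As a concrete illustration of the computational irreducibility discussed here, the sketch below implements Wolfram's Rule 30 cellular automaton in Python (the function names are my own, chosen for illustration). The update rule fits in one line, yet the pattern it produces appears to admit no shortcut: to know the state at step n, you seemingly have to run all n steps.

```python
def rule30_step(cells):
    """Apply Wolfram's Rule 30 to one row of binary cells.

    New cell = left XOR (center OR right); cells outside the row
    are treated as 0 (fixed boundary).
    """
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30(steps, width=None):
    """Evolve a single black cell for `steps` generations; return all rows."""
    width = width or (2 * steps + 1)  # wide enough that the pattern never hits the edge
    row = [0] * width
    row[width // 2] = 1  # start from one black cell in the centre
    rows = [row]
    for _ in range(steps):
        row = rule30_step(row)
        rows.append(row)
    return rows

# Print the first few generations as a triangle of # characters.
for r in rule30(8):
    print("".join("#" if c else " " for c in r))
```

Despite the rule's simplicity, the centre column of this triangle passes standard randomness tests, which is why Wolfram uses Rule 30 as his canonical example of irreducible computation.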
One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he loves to be ‘in the moment’ with his thinking: engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated when something contradicts them. Given how quickly the fields of physics and computer science are evolving, his humility and conceptual flexibility give us a fine example of how we should update how we do science as we go.
What we discuss:
00:00 Intro.
07:45 The history of scientific models of reality: structural, mathematical, and computational.
14:40 Late 2010s: a shift to computational models of systems.
20:20 The Principle of Computational Equivalence (PCE).
24:45 Computational irreducibility: the process that means you can’t predict the outcome in advance.
27:50 The importance of the passage of time to consciousness.
28:45 Irreducibility and the limits of science.
33:30 Gödel’s incompleteness theorem meets computational irreducibility.
42:20 Observer Theory and the Wolfram Physics Project.
45:30 Modelling the relations between discrete units of space: hypergraphs.
47:30 The progress of time is the computational process that is updating the network of relations.
50:30 We ‘make’ space.
51:30 Branchial space: different quantum histories of the world, branching and merging.
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements.
56:30 Branchial space vs. the Many Worlds interpretation.
58:50 Rulial space: all possible rules of all possible interconnected branches.
01:07:30 Wolfram Language bridges human thinking about their perspective with what is computationally possible.
01:11:00 Computational intelligence is everywhere in the universe, e.g. the weather.
01:19:30 The measurement problem of QM meets computational irreducibility and observer theory.
01:20:30 Entanglement explained: common ancestors in branchial space.
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions, and applications for science, as we didn’t have time.
01:37:30 At the molecular level the laws of physics are reversible.
01:40:30 What looks random to us in entropy is actually full of the data.
01:45:30 Entropy defined in computational terms.
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence.
01:51:30 Parallels between modern physics and ancient Eastern mysticism and cosmology.
01:55:30 Reductionism in an irreducible world: saying a lot from very little input.
References: “The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram.