Toggle light / dark theme

Heat is the enemy of quantum uncertainty. By arranging light-absorbing molecules in an ordered fashion, physicists in Japan have maintained the critical, yet-to-be-determined state of electron spins for 100 nanoseconds near room temperature.

The innovation could have a profound impact on progress in developing quantum technology that doesn’t rely on the bulky and expensive cooling equipment currently needed to keep particles in a so-called ‘coherent’ form.

Unlike the way we describe objects in our day-to-day living, which have qualities like color, position, speed, and rotation, quantum descriptions of objects involve something less settled. Until their characteristics are locked in place with a quick look, we have to treat objects as if they are smeared over a wide space, spinning in different directions, yet to adopt a simple measurement.

How hard would it be to train an AI model to be secretly evil? As it turns out, according to AI researchers, not very — and attempting to reroute a bad apple AI’s more sinister proclivities might backfire in the long run.

In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases. As the Anthropic researchers write in the paper, humans often engage in “strategically deceptive behavior,” meaning “behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity.” If an AI system were trained to do the same, the scientists wondered, could they “detect it and remove it using current state-of-the-art safety training techniques?”

Unfortunately, as it stands, the answer to that latter question appears to be a resounding “no.” The Anthropic scientists found that once a model is trained with exploitable code, it’s exceedingly difficult — if not impossible — to train a machine out of its duplicitous tendencies. And what’s worse, according to the paper, attempts to reign in and reconfigure a deceptive model may well reinforce its bad behavior, as a model might just learn how to better hide its transgressions.

One brain to rule them

Two researchers have revealed how they are creating a single super-brain that can pilot any robot, no matter how different they are.

Sergey Levine and Karol Hausman wrote in IEEE Spectrum that generative AI, which can create text and images, is not enough for robotics because the Internet does not have enough data on how robots interact with the world.

according to a retrospective cohort study.


Poor vision is associated with risks for falls and fractures, but details about risks associated with specific eye diseases are less clear. In this retrospective U.K. cohort study, researchers identified nearly 600,000 patients (mean age, 74) with cataracts, glaucoma, or age-related macular degeneration (AMD) and compared them with age-and sex-matched control patients who did not have eye diseases. Falls and fractures were tracked for a median of about 4 years. Analyses were adjusted for a wide range of chronic diseases and medications that increase risk for falls.

Compared with controls, patients with eye diseases had significantly higher hazard ratios for falls and fractures: HRs ranged from 1.18 to 1.38 for the three eye-disease groups. The incidence rates for falls per 100,000 person-years were about 1,800 to 2,500 for the three eye-disease groups, compared with 620 to 850 for control groups. For fractures, the corresponding incidence rates for the three eye-disease groups were 970 to 1,290, compared with 380 to 500 for control groups.

The absolute and relative risks for falls and fractures were similar for all 3 eye-disease groups. An unexplained fall, particularly an injurious one, should prompt primary care clinicians to explore visual impairment as a potential cause or contributing factor.

In Neuromorphic Computing Part 2, we dive deeper into mapping neuromorphic concepts into chips built from silicon. With the state of modern neuroscience and chip design, the tools the industry is working with we’re working with are simply too different from biology. Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains the process and challenge of creating a chip that can replicate some of the form and functions in biological neural networks.

Mike’s leadership in this specialized field allows him to share the latest insights from the promising future in neuromorphic computing here at Intel. Let’s explore nature’s circuit design of over a billion years of evolution and today’s CMOS semiconductor manufacturing technology supporting incredible computing efficiency, speed and intelligence.

Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.

Jump to Chapters:

Computer design has always been inspired by biology, especially the brain. In this episode of Architecture All Access — Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab — explains the relationship of Neuromorphic Computing and understanding the principals of brain computations at the circuit level that are enabling next-generation intelligent devices and autonomous systems.

Mike’s leadership in this specialized field allows him to share the latest insights from the promising future in neuromorphic computing here at Intel. Discover the history and influence of the secrets that nature has evolved over a billion years supporting incredible computing efficiency, speed and intelligence.

Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.

Chapters:

Computer simulations of complex systems provide an opportunity to study their time evolution under user control. Simulations of neural circuits are an established tool in computational neuroscience. Through systematic simplification on spatial and temporal scales they provide important insights in the time evolution of networks which in turn leads to an improved understanding of brain functions like learning, memory or behavior. Simulations of large networks are exploiting the concept of weak scaling where the massively parallel biological network structure is naturally mapped on computers with very large numbers of compute nodes. However, this approach is suffering from fundamental limitations. The power consumption is approaching prohibitive levels and, more seriously, the bridging of time-scales from millisecond to years, present in the neurobiology of plasticity, learning and development is inaccessible to classical computers. In the keynote I will argue that these limitations can be overcome by extreme approaches to weak and strong scaling based on brain-inspired computing architectures.

Bio: Karlheinz Meier received his PhD in physics in 1984 from Hamburg University in Germany. He has more than 25years of experience in experimental particle physics with contributions to 4 major experiments at particle colliders at DESY in Hamburg and CERN in Geneva. For the ATLAS experiment at the Large Hadron Collider (LHC) he led a 15 year effort to design, build and operate an electronics data processing system providing on-the-fly data reduction by 3 orders of magnitude enabling among other achievements the discovery of the Higgs Boson. Following scientific staff positions at DESY and CERN he was appointed full professor of physics at Heidelberg university in 1992. In Heidelberg he co-founded the Kirchhoff-Institute for Physics and a laboratory for the development of microelectronic circuits for science experiments. In particle physics he took a leading international role in shaping the future of the field as president of the European Committee for Future Accelerators (ECFA). Around 2005 he gradually shifted his scientific interests towards large-scale electronic implementations of brain-inspired computer architectures. His group pioneered several innovations in the field like the conception of a description language for neural circuits (PyNN), time-compressed mixed-signal neuromorphic computing systems and wafer-scale integration for their implementation. He led 2 major European initiatives, FACETS and BrainScaleS, that both demonstrated the rewarding interdisciplinary collaboration of neuroscience and information science. In 2009 he was one of the initiators of the European Human Brain Project (HBP) that was approved in 2013. In the HBP he leads the subproject on neuromorphic computing with the goal of establishing brain-inspired computing paradigms as tools for neuroscience and generic methods for inference from large data volumes.