North Korea claims it has again tested a hydrogen bomb underground and that it “successfully” loaded it onto the tip of an intercontinental ballistic missile, a claim that if true, crosses a “red line” drawn by South Korea’s president last month.
In a state media announcement, North Korea confirmed the afternoon tremors in its northeast were indeed caused by the test of a nuclear device, and that leader Kim Jong Un personally signed off on the test.
“North Korea has conducted a major Nuclear Test. Their words and actions continue to be very hostile and dangerous to the United States,” President Trump tweeted Sunday morning in response. “North Korea is a rogue nation which has become a great threat and embarrassment to China, which is trying to help but with little success.”
Heat is the enemy of quantum uncertainty. By arranging light-absorbing molecules in an ordered fashion, physicists in Japan have maintained the critical, yet-to-be-determined state of electron spins for 100 nanoseconds near room temperature.
The innovation could have a profound impact on progress in developing quantum technology that doesn’t rely on the bulky and expensive cooling equipment currently needed to keep particles in a so-called ‘coherent’ form.
Unlike the way we describe objects in our day-to-day living, which have qualities like color, position, speed, and rotation, quantum descriptions of objects involve something less settled. Until their characteristics are locked in place with a quick look, we have to treat objects as if they are smeared over a wide space, spinning in different directions, yet to adopt a simple measurement.
How hard would it be to train an AI model to be secretly evil? As it turns out, according to AI researchers, not very — and attempting to reroute a bad apple AI’s more sinister proclivities might backfire in the long run.
In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases. As the Anthropic researchers write in the paper, humans often engage in “strategically deceptive behavior,” meaning “behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity.” If an AI system were trained to do the same, the scientists wondered, could they “detect it and remove it using current state-of-the-art safety training techniques?”
Unfortunately, as it stands, the answer to that latter question appears to be a resounding “no.” The Anthropic scientists found that once a model is trained with exploitable code, it’s exceedingly difficult — if not impossible — to train a machine out of its duplicitous tendencies. And what’s worse, according to the paper, attempts to reign in and reconfigure a deceptive model may well reinforce its bad behavior, as a model might just learn how to better hide its transgressions.
What is happening at the frontier of research and application, and how are novel techniques and approaches changing the risks and opportunities linked to frontier, generative AI models?
Two researchers have revealed how they are creating a single super-brain that can pilot any robot, no matter how different they are.
Sergey Levine and Karol Hausman wrote in IEEE Spectrum that generative AI, which can create text and images, is not enough for robotics because the Internet does not have enough data on how robots interact with the world.
Poor vision is associated with risks for falls and fractures, but details about risks associated with specific eye diseases are less clear. In this retrospective U.K. cohort study, researchers identified nearly 600,000 patients (mean age, 74) with cataracts, glaucoma, or age-related macular degeneration (AMD) and compared them with age-and sex-matched control patients who did not have eye diseases. Falls and fractures were tracked for a median of about 4 years. Analyses were adjusted for a wide range of chronic diseases and medications that increase risk for falls.
Compared with controls, patients with eye diseases had significantly higher hazard ratios for falls and fractures: HRs ranged from 1.18 to 1.38 for the three eye-disease groups. The incidence rates for falls per 100,000 person-years were about 1,800 to 2,500 for the three eye-disease groups, compared with 620 to 850 for control groups. For fractures, the corresponding incidence rates for the three eye-disease groups were 970 to 1,290, compared with 380 to 500 for control groups.
The absolute and relative risks for falls and fractures were similar for all 3 eye-disease groups. An unexplained fall, particularly an injurious one, should prompt primary care clinicians to explore visual impairment as a potential cause or contributing factor.
As the Moore’s law approaching the end, computer technology is changing direction towards artificial neurons. But this time neurons are physical, and they evolve overtime. Artificial intelligence approaching a rapid growth age.
In Neuromorphic Computing Part 2, we dive deeper into mapping neuromorphic concepts into chips built from silicon. With the state of modern neuroscience and chip design, the tools the industry is working with we’re working with are simply too different from biology. Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains the process and challenge of creating a chip that can replicate some of the form and functions in biological neural networks.
Mike’s leadership in this specialized field allows him to share the latest insights from the promising future in neuromorphic computing here at Intel. Let’s explore nature’s circuit design of over a billion years of evolution and today’s CMOS semiconductor manufacturing technology supporting incredible computing efficiency, speed and intelligence.
Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.
Jump to Chapters: 0:00 Welcome to Neuromorphic Computing. 0:30 How to architect a chip that behaves like a brain. 1:29 Advantages of CMOS semiconductor manufacturing technology. 2:18 Objectives in our design toolbox. 2:36 Sparse distributed asynchronous communication. 4:51 Reaching the level of efficiency and density of the brain. 6:34 Loihi 2 a fully digital chip implemented in a standard CMOS process. 6:57 Asynchronous vs Synchronous. 7:54 Function of the core’s memory. 8:13 Spikes and Table Lookups. 9:24 Loihi learning process. 9:45 Learning rules, input and the network. 10:12 The challenge of architecture and programming today. 10:45 Recent publications to read.
Architecture all access season 2 playlist — • architecture all access season 2
Computer design has always been inspired by biology, especially the brain. In this episode of Architecture All Access — Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab — explains the relationship of Neuromorphic Computing and understanding the principals of brain computations at the circuit level that are enabling next-generation intelligent devices and autonomous systems.
Mike’s leadership in this specialized field allows him to share the latest insights from the promising future in neuromorphic computing here at Intel. Discover the history and influence of the secrets that nature has evolved over a billion years supporting incredible computing efficiency, speed and intelligence.
Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.
Chapters: 0:00 Welcome to Neuromorphic Computing. 1:16 Introduction to Mike Davies. 1:34 The pioneers of modern computing. 1:48 A 2 GR. brain running on 50 mW of power. 2:19 The vision of Neuromorphic Computing. 2:31 Biological Neural Networks. 4:03 Patterns of Connectivity explained. 4:36 How neural networks achieve great energy efficiency and low latency. 6:20 Inhibitory Networks of Neurons. 7:42 Conventional Architecture. 8:01 Neuromorphic Architecture. 9:51 Conventional processors vs Neuromorphic chips.
Connect with Intel Technology: Visit Intel Technologies WEBSITE: https://intel.ly/IntelTechnologies. Follow Intel Technology on TWITTER: / inteltech.