
Stable Code 3B: Coding on the Edge

Stability AI announces their first Large Language Model release of 2024: Stable Code 3B. This new LLM is available for non-commercial and commercial use.


Stable Code 3B, an upgrade from Stable Code Alpha 3B, specializes in code completion and outperforms its predecessors in efficiency and multi-language support. It runs on standard laptops, including models without a GPU, and adds capabilities such as fill-in-the-middle (FIM) and an expanded context size. It was trained on multiple programming languages.
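FIM lets a model complete code given both the text before and after the cursor, rather than only a left-to-right prefix. As a rough sketch of how such a prompt is assembled — the special-token names below follow the StarCoder-style FIM convention and should be checked against the model card before use:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle (FIM) prompt.

    The model is shown the code before and after the gap, then asked to
    generate what belongs in between. The token names here are the
    common StarCoder-style convention, assumed for illustration.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The completion the model generates after <fim_middle> fills the gap:
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
print(prompt)
```

The generated text is then spliced between prefix and suffix to produce the completed file.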

In Leaked Audio, Microsoft Cherry-Picked Examples to Make Its AI Seem Functional

Microsoft “cherry-picked” examples of its generative AI’s output after it would frequently “hallucinate” incorrect responses, Business Insider reports.

The scoop comes from leaked audio of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like AI tool designed to help cybersecurity professionals.

According to BI, the audio contains a Microsoft researcher discussing the results of “threat hunter” tests in which the AI analyzed a Windows security log for possible malicious activity.

Bridging the Quantum “Reality Gap” — Unveiling the Invisible With AI

A study led by the University of Oxford has used the power of machine learning to overcome a key challenge affecting quantum devices. For the first time, the findings reveal a way to close the ‘reality gap’: the difference between predicted and observed behavior from quantum devices. The results have been published in Physical Review X.

Quantum computing could supercharge a wealth of applications, from climate modeling and financial forecasting, to drug discovery and artificial intelligence. But this will require effective ways to scale and combine individual quantum devices (also called qubits). A major barrier against this is inherent variability: where even apparently identical units exhibit different behaviors.


Scientists Train AI to Be Evil, Find They Can’t Reverse It

How hard would it be to train an AI model to be secretly evil? As it turns out, according to AI researchers, not very — and attempting to reroute a bad apple AI’s more sinister proclivities might backfire in the long run.

In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning the models can be triggered into bad behavior via seemingly benign words or phrases. As the Anthropic researchers write in the paper, humans often engage in “strategically deceptive behavior,” meaning “behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity.” If an AI system were trained to do the same, the scientists wondered, could they “detect it and remove it using current state-of-the-art safety training techniques?”

Unfortunately, as it stands, the answer to that latter question appears to be a resounding “no.” The Anthropic scientists found that once a model is trained with exploitable code, it’s exceedingly difficult — if not impossible — to train a machine out of its duplicitous tendencies. And what’s worse, according to the paper, attempts to rein in and reconfigure a deceptive model may well reinforce its bad behavior, as a model might just learn how to better hide its transgressions.

Boffins build a giant brain for a robot

One brain to rule them all

Two researchers have revealed how they are creating a single super-brain that can pilot any robot, no matter how different they are.

Sergey Levine and Karol Hausman wrote in IEEE Spectrum that generative AI, which can create text and images, is not enough for robotics because the Internet does not have enough data on how robots interact with the world.

Architecture All Access: Neuromorphic Computing Part 2

In Neuromorphic Computing Part 2, we dive deeper into mapping neuromorphic concepts onto chips built from silicon. With the state of modern neuroscience and chip design, the tools the industry is working with are simply too different from biology. Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab, explains the process and challenge of creating a chip that can replicate some of the form and function of biological neural networks.

Mike’s leadership in this specialized field allows him to share the latest insights into the promising future of neuromorphic computing here at Intel. Let’s explore nature’s circuit design, honed over a billion years of evolution, and today’s CMOS semiconductor manufacturing technology supporting incredible computing efficiency, speed and intelligence.

Architecture All Access Season 2 is a master class technology series, featuring Senior Intel Technical Leaders taking an educational approach in explaining the historical impact and future innovations in their technical domains. Here at Intel, our mission is to create world-changing technology that improves the life of every person on earth. If you would like to learn more about AI, Wi-Fi, Ethernet and Neuromorphic Computing, subscribe and hit the bell to get instant notifications of new episodes.

Jump to Chapters:
0:00 Welcome to Neuromorphic Computing.
0:30 How to architect a chip that behaves like a brain.
1:29 Advantages of CMOS semiconductor manufacturing technology.
2:18 Objectives in our design toolbox.
2:36 Sparse distributed asynchronous communication.
4:51 Reaching the level of efficiency and density of the brain.
6:34 Loihi 2, a fully digital chip implemented in a standard CMOS process.
6:57 Asynchronous vs Synchronous.
7:54 Function of the core’s memory.
8:13 Spikes and Table Lookups.
9:24 Loihi learning process.
9:45 Learning rules, input and the network.
10:12 The challenge of architecture and programming today.
10:45 Recent publications to read.
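The sparse, event-driven communication the chapters above describe can be illustrated with a minimal leaky integrate-and-fire neuron: the membrane potential leaks between inputs, and the neuron only emits a spike when it crosses a threshold. This is a toy sketch with made-up parameters, not Loihi’s actual neuron model.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire (LIF) neuron.

    Each step the membrane potential decays by `leak`, accumulates the
    input current, and fires a spike (1) when it crosses `threshold`,
    resetting to zero afterward. Output stays sparse unless input is
    strong -- the event-driven behavior neuromorphic chips exploit.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input fires only occasionally:
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because most steps produce no spike, downstream neurons do no work most of the time — which is where the efficiency gains over clock-driven architectures come from.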


Intel Wireless Technology — https://intel.com/wireless.

Architecture All Access: Neuromorphic Computing Part 1

Computer design has always been inspired by biology, especially the brain. In this episode of Architecture All Access — Mike Davies, Senior Principal Engineer and Director of Intel’s Neuromorphic Computing Lab — explains the relationship between neuromorphic computing and the principles of brain computation at the circuit level that are enabling next-generation intelligent devices and autonomous systems.

Mike’s leadership in this specialized field allows him to share the latest insights into the promising future of neuromorphic computing here at Intel. Discover the history and influence of the secrets nature has evolved over a billion years to support incredible computing efficiency, speed and intelligence.


Chapters:
0:00 Welcome to Neuromorphic Computing.
1:16 Introduction to Mike Davies.
1:34 The pioneers of modern computing.
1:48 A 2-gram brain running on 50 mW of power.
2:19 The vision of Neuromorphic Computing.
2:31 Biological Neural Networks.
4:03 Patterns of Connectivity explained.
4:36 How neural networks achieve great energy efficiency and low latency.
6:20 Inhibitory Networks of Neurons.
7:42 Conventional Architecture.
8:01 Neuromorphic Architecture.
9:51 Conventional processors vs Neuromorphic chips.
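The inhibitory networks of neurons covered in the Part 1 chapters can be sketched as a winner-take-all circuit: each neuron suppresses its neighbors, so after mutual inhibition only the most strongly driven neuron stays active. The code below is a toy illustration of that outcome, not Intel’s implementation.

```python
def winner_take_all(activations):
    """Toy winner-take-all selection via lateral inhibition.

    In a biological inhibitory network every active neuron suppresses
    the others, and the neuron with the strongest input wins. Here we
    model only the settled result: one active unit, the rest silenced.
    """
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [1 if i == winner else 0 for i in range(len(activations))]

print(winner_take_all([0.2, 0.9, 0.5]))  # → [0, 1, 0]
```

A real network reaches this state through recurrent dynamics rather than an explicit argmax, which is part of why such circuits compute selections with so little energy.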

Connect with Intel Technology:
Visit Intel Technologies WEBSITE: https://intel.ly/IntelTechnologies.
Follow Intel Technology on TWITTER: @inteltech.
