Cerebras Systems, maker of the world’s largest processor, has broken the record for the most complex AI model trained using a single device.
Using one CS-2 system, powered by the company’s wafer-scale chip, the Wafer Scale Engine 2 (WSE-2), Cerebras is now able to train AI models with up to 20 billion parameters thanks to new optimizations at the software level.
The firm says the breakthrough will resolve one of the most frustrating problems for AI engineers: the need to partition large-scale models across thousands of GPUs. The result is an opportunity to drastically cut the time it takes to develop and train new models.
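For context, a rough back-of-the-envelope sketch (using common mixed-precision training assumptions, not figures published by Cerebras) shows why a model of this size ordinarily has to be split across many GPUs:

```python
# Rough memory estimate for training a 20-billion-parameter model.
# Assumes FP16 weights and gradients plus FP32 master weights and
# Adam optimizer state (~16 bytes per parameter); frameworks vary.
PARAMS = 20e9
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # weights, grads, master copy, Adam m, Adam v

total_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"~{total_gb:.0f} GB of training state")  # ~320 GB
# A single large GPU offers on the order of 80 GB of memory,
# which is why such models are normally partitioned across devices.
```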
Our Growing, Digitally Connected World — Made For Botnets
Having devices and networks so digitally interconnected carries dire implications when it comes to botnets, especially when networks contain unpatched vulnerabilities. The past decade has seen many botnet cyberattacks. Many people in cybersecurity will recall the massive, high-profile Mirai botnet DDoS attack of 2016. Mirai was an IoT botnet made up of hundreds of thousands of compromised IoT devices. It targeted Dyn, a domain name system (DNS) provider for many well-known internet platforms, in a distributed denial-of-service (DDoS) attack that flooded Dyn’s servers with overwhelming volumes of traffic to force them offline. The Dyn attacks leveraged Internet of Things devices, and some of the traffic was launched from common devices like home routers, webcams, and video recorders infected with malware.
Summary: Human cortical networks have evolved a novel type of neural network, one that relies on abundant connections between inhibitory interneurons.
Source: Max Planck Institute.
The analysis of the human brain is a central goal of neuroscience. However, for methodological reasons, research has largely focused on model organisms, in particular the mouse.
AZoRobotics speaks with Dr. Erik Engeberg of Florida Atlantic University about his research into a wearable soft robotic armband. This could be a life-changing device for prosthetic hand users, who have long desired advances in dexterity.
Typing on a keyboard, pressing buttons on a remote control, or braiding a child’s hair has remained elusive for prosthetic hand users. How does the loss of tactile sensations impact limb-absent people’s lives?
Losing the sensation of touch has a profound impact on people’s lives. Things that may seem simple and part of everyday life, such as stroking the fur of a pet or the skin of a loved one, are a meaningful and fundamental way to connect with those around us. For example, a patient with a bilateral amputation once expressed concern that he might hurt his granddaughter by accidentally squeezing her hand too tightly, having lost tactile sensation.
Scientists in China say they have been able to run an artificial intelligence model as sophisticated as a human brain on their most powerful supercomputer, a report from the South China Morning Post reveals.
According to the report, this puts China’s new-generation Sunway supercomputer on the same level as the U.S. Department of Energy’s Frontier, which was named the world’s most powerful supercomputer earlier this month.
As a point of reference, Frontier is the first machine to have demonstrated that it can perform more than one quintillion (10^18) calculations per second.
Microsoft-owned GitHub is launching its Copilot AI tool today, which suggests lines of code to developers inside their code editor. GitHub originally teamed up with OpenAI last year to launch a preview of Copilot, and it is now generally available to all developers. Priced at US$10 per month or US$100 a year, GitHub Copilot can suggest the next line of code as developers type in an integrated development environment (IDE) such as Visual Studio Code, Neovim, or a JetBrains IDE. Copilot can suggest complete methods and complex algorithms, alongside boilerplate code and assistance with unit testing. More than 1.2 million developers signed up to use the GitHub Copilot preview over the past 12 months, and the tool will remain free for verified students and maintainers of popular open-source projects. GitHub says that in files where it is enabled, nearly 40 percent of code is now being written by Copilot.
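To give a sense of that workflow (a hypothetical illustration in Python, not actual Copilot output), a developer might type only the comment and function signature below, and an assistant like Copilot would propose the body inline:

```python
# Developer writes the comment and signature; the body is the kind
# of completion a tool like Copilot might suggest as you type.

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


# It can also draft unit-test boilerplate of the sort mentioned above.
def test_is_palindrome():
    assert is_palindrome("A man, a plan, a canal: Panama")
    assert not is_palindrome("hello")
```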
“Over the past year, we’ve continued to iterate and test workflows to help drive the ‘magic’ of Copilot,” Ryan J. Salva, VP of product at GitHub, told TechCrunch via email. “We not only used the preview to learn how people use GitHub Copilot but also to scale the service safely.”
“We specifically designed GitHub Copilot as an editor extension to make sure nothing gets in the way of what you’re doing,” GitHub CEO Thomas Dohmke says in a blog post. “GitHub Copilot distills the collective knowledge of the world’s developers into an editor extension that suggests code in real-time, to help you stay focused on what matters most: building great software.”
Unfortunately my internet link went down during the second Q&A session at the end, and the recording cut off. A shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++, and a whole load of other riveting techie stuff. But thankfully the main part of the talk was recorded.
TALK OVERVIEW

This talk is about the realization of the ideas behind the Fractal Brain theory, and the unifying theory of life and intelligence discussed in the last Zoom talk, in the form of useful technology. The Startup at the End of Time will be the vehicle for the development and commercialization of a new generation of artificial intelligence (AI) and machine learning (ML) algorithms.
We will show in detail how the theoretical fractal brain/genome ideas lead to a whole new way of doing AI and ML, one that overcomes most of the central limitations and problems of existing approaches. A compelling feature of this approach is that it is based on how neurons and brains actually work, unlike existing artificial neural networks, which, despite making sensational headlines, are impeded by severe limitations and rest on an understanding of neurons from about 70 years ago. We hope to convince you that this new approach really is the path to true AI.
In the last Zoom talk, we discussed a broad unification of scientific ideas relating to life and brain/mind science through the application of the mathematical idea of symmetry. The same symmetry approach in turn unifies a mass of ideas in computer and information science. There has been talk in recent years of a ‘master algorithm’ of machine learning and AI. We’ll explain that the idea goes far deeper than that, and show how the most important fundamental algorithms in use in the world today, those behind data compression, databases, search engines, and existing AI/ML, can be unified into a single algorithm. Furthermore, and importantly, this algorithm is completely fractal, or scale invariant: the same algorithm that performs all of these functions can run on a microcontroller unit (MCU), a mobile phone, a laptop, or a workstation, right up to a supercomputer.
The potential applications of this new technology are endless. We will discuss the road map by which the theoretical ideas I have been presenting in the Zoom, academic, and public talks over the past few years, and which I have written about in the Fractal Brain Theory book, will become practical technology, and how the Java/C/C++ code running on my workstation and mobile phones will become products and services.
In his keynote at Amazon re:MARS, Alexa AI senior vice president and head scientist Rohit Prasad argued that the emerging paradigm of ambient intelligence offers a practical path toward generalizable intelligence.
Rohit Prasad on the pathway to generalizable intelligence and what excites him most about his re:MARS keynote.
From there, they ran flight tests using a specially designed motion-tracking system. Each electroluminescent actuator served as an active marker that could be tracked using iPhone cameras. The cameras detect each light color, and a computer program they developed tracks the position and attitude of the robots to within 2 millimeters of state-of-the-art infrared motion capture systems.
“We are very proud of how good the tracking result is, compared to the state-of-the-art. We were using cheap hardware, compared to the tens of thousands of dollars these large motion-tracking systems cost, and the tracking results were very close,” Kevin Chen says.
In the future, they plan to enhance the motion-tracking system so it can track the robots in real time. The team is working to incorporate control signals so the robots can turn their lights on and off during flight and communicate more like real fireflies. They are also studying how electroluminescence could even improve some properties of these soft artificial muscles, Kevin Chen says.
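As a rough illustration of the color-based tracking approach described above (a minimal Python/OpenCV sketch with made-up HSV thresholds; the team’s actual pipeline and camera calibration are not detailed here), the centroid of one marker color could be extracted from a camera frame like this:

```python
import cv2
import numpy as np

# Hypothetical HSV range for one electroluminescent marker color;
# real thresholds would come from calibrating the cameras.
LOWER_BLUE = np.array([100, 150, 50])
UPPER_BLUE = np.array([130, 255, 255])

def find_marker(frame_bgr: np.ndarray) -> tuple[int, int] | None:
    """Return the pixel centroid of the blue marker, or None if unseen."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None  # marker not visible in this frame
    cx = int(moments["m10"] / moments["m00"])
    cy = int(moments["m01"] / moments["m00"])
    return cx, cy

# Centroids from two or more calibrated cameras could then be
# triangulated to recover each robot's 3D position and attitude.
```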