
Amazon Web Services recently had to defend against a DDoS attack with a peak traffic volume of 2.3 Tbps, the largest ever recorded, ZDNet reports. Detailing the attack in its Q1 2020 threat report, Amazon said that the attack occurred back in February, and was mitigated by AWS Shield, a service designed to protect customers of Amazon’s on-demand cloud computing platform from DDoS attacks, as well as from bad bots and application vulnerabilities. The company did not disclose the target or the origin of the attack.

To put that number into perspective, ZDNet notes that the largest DDoS attack on record before this February came in March 2018, when NetScout Arbor mitigated a 1.7 Tbps attack. The month before that, GitHub disclosed that it had been hit by an attack that peaked at 1.35 Tbps.

The discovery that led Nir Shavit to start a company came about the way most discoveries do: by accident. The MIT professor was working on a project to reconstruct a map of a mouse’s brain and needed some help from deep learning. Not knowing how to program graphics cards, or GPUs, the most common hardware choice for deep-learning models, he opted instead for a central processing unit, or CPU, the most generic computer chip found in any average laptop.

“Lo and behold,” Shavit recalls, “I realized that a CPU can do what a GPU does—if programmed in the right way.”

This insight is now the basis for his startup, Neural Magic, which launched its first suite of products today. The idea is to allow any company to deploy a deep-learning model without the need for specialized hardware. It would not only lower the costs of deep learning but also make AI more widely accessible.

Qualcomm today announced its RB5 reference design platform for the robotics and intelligent drone ecosystem. As the field of robotics continues to evolve toward more advanced capabilities, Qualcomm’s latest platform should help drive the next step in that evolution with built-in intelligence and connectivity. The company has combined its 5G connectivity and AI-focused processing with a flexible peripherals architecture based on what it calls “mezzanine” modules. The new Qualcomm RB5 platform promises to accelerate the robotics design and development process with a full suite of hardware, software and development tools. The company is making big promises for the RB5 platform, and if current levels of ecosystem engagement are any indicator, the platform will have ample opportunities to prove itself.

At the heart of the platform, which targets robot and drone designs for enterprise, industrial and professional service applications, is Qualcomm’s QRB5165 system-on-chip (SoC). The QRB5165 is derived from the Snapdragon 865 processor used in mobile devices but customized for robotic applications, with expanded camera and image signal processor (ISP) capabilities to support additional camera sensors, higher industrial-grade temperature and security ratings, and a non-Package-on-Package (PoP) configuration option.

To help bring highly capable artificial intelligence and machine learning to bear in these applications, the chip is rated for 15 tera operations per second (TOPS) of AI performance. Additionally, because it is critical that robots and drones can “see” their surroundings, the architecture supports up to seven concurrent cameras and includes a dedicated computer vision engine for enhanced video analytics. Given the sheer amount of information the platform can generate, process and analyze, it also supports a communications module offering 4G and 5G connectivity. The addition of 5G in particular gives robots and drones high-speed, low-latency data connections.

The data came from Common Crawl, a non-profit that scans the open web every month, downloads content from billions of HTML pages, and then makes it available in a special format for large-scale data mining. In 2017 the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011 and has accumulated petabytes of data in over 40 languages. The OpenAI team applied filtering techniques to improve the overall quality of the data and supplemented it with curated datasets such as Wikipedia.
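To give a flavor of what that kind of quality filtering can look like, here is a minimal Python sketch. The length and stopword heuristics are illustrative stand-ins, not the filters OpenAI actually used, which are not detailed here.

```python
# Illustrative sketch of web-text quality filtering (assumed heuristics,
# not OpenAI's actual pipeline). Very short pages and pages with almost
# no common English function words tend to be menus, boilerplate, or junk.

STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "it"}

def looks_like_quality_text(document: str) -> bool:
    words = document.split()
    if len(words) < 100:  # drop very short pages
        return False
    stopword_share = sum(w.lower() in STOPWORDS for w in words) / len(words)
    return stopword_share > 0.05  # natural prose uses many function words

pages = [
    "Home | Login | Cart",  # navigation boilerplate: filtered out
    " ".join(["the quick brown fox jumps over the lazy dog"] * 20),  # kept
]
kept = [p for p in pages if looks_like_quality_text(p)]
print(len(kept))  # 1
```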

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
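As a rough illustration of that attention idea, here is a minimal Python sketch over toy word vectors. Real transformers learn separate query, key and value projections and run many attention heads in parallel, so this shows only the core arithmetic, not GPT’s actual implementation.

```python
import numpy as np

def attention_scores(embeddings):
    """Scaled dot-product attention over a sentence's word vectors
    (simplified sketch: no learned projections or multiple heads)."""
    d = embeddings.shape[-1]
    # Similarity of every word to every other word, all at once
    scores = embeddings @ embeddings.T / np.sqrt(d)
    # Softmax turns raw similarities into per-word attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each word's new representation is a weighted mix of all the words
    return weights @ embeddings

# Three made-up 4-dimensional word vectors standing in for a sentence
sentence = np.random.rand(3, 4)
print(attention_scores(sentence).shape)  # (3, 4)
```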

Through finding the relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words—GPT-3 can also figure out how concepts relate to each other, and discern context.

Artificial intelligence networks have learned a new trick: creating photo-realistic faces from just a few pixelated dots, adding in features such as eyelashes and wrinkles that can’t even be found in the original.

Before you freak out, it’s good to note this is not some kind of creepy reverse pixelation that can undo blurring, because the faces the AI comes up with are artificial – they don’t belong to real people. But it’s a cool technological step forward from what such networks have been able to do before.

The PULSE (Photo Upsampling via Latent Space Exploration) system can produce photos with up to 64 times greater resolution than the source images, which is 8 times more detailed than earlier methods.
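As the expanded name suggests, the approach searches a face generator’s latent space for a synthetic high-resolution image that, when scaled back down, matches the low-res input. Below is a hedged Python sketch of that general idea; `generator` and its `latent_dim` attribute stand in for a pretrained face model such as StyleGAN, and the downsampler, loss and optimizer settings are illustrative rather than the paper’s actual choices.

```python
import torch

def upsample_via_latent_search(lr_image, generator, scale=64, steps=200):
    """Search a pretrained generator's latent space for a high-res face
    whose downscaled version matches the low-res input (PULSE-style idea;
    `generator` is an assumed stand-in, not a real library object)."""
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.1)
    downscale = torch.nn.AvgPool2d(kernel_size=scale)  # crude downsampler
    for _ in range(steps):
        opt.zero_grad()
        hr_candidate = generator(z)  # synthetic high-res face
        # How closely does the candidate, scaled down, match the input?
        loss = ((downscale(hr_candidate) - lr_image) ** 2).mean()
        loss.backward()
        opt.step()
    # A plausible face consistent with the pixels -- not the real person
    return generator(z).detach()
```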

In recent years, many research teams worldwide have been developing and evaluating techniques to enable different locomotion styles in legged robots. One way of training robots to walk like humans or animals is by having them analyze and emulate real-world demonstrations. This approach is known as imitation learning.

Researchers at the University of Edinburgh in Scotland have recently devised a framework for training humanoid robots to walk like humans using human demonstrations. This new framework, presented in a paper pre-published on arXiv, combines imitation learning and deep reinforcement learning techniques with theories of robotic control in order to achieve natural and dynamic locomotion in humanoid robots.

“The key question we set out to investigate was how to incorporate useful human knowledge in locomotion and human motion capture data for imitation into [a] deep reinforcement learning paradigm to advance the autonomous capabilities of legged robots more efficiently,” Chuanyu Yang, one of the researchers who carried out the study, told TechXplore. “We proposed two methods of introducing human prior knowledge into a DRL framework.”
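To make the idea of mixing demonstrations into DRL concrete, here is a hedged Python sketch of one common recipe: a reward that blends a task objective with closeness to a motion-capture reference (in the spirit of DeepMimic-style imitation terms). The paper’s actual formulation may differ, and every name and weight below is illustrative.

```python
import numpy as np

def imitation_reward(robot_pose, reference_pose, task_reward, w_imitate=0.7):
    """Blend a task objective with tracking of a human reference motion.
    (Illustrative formulation, not necessarily the paper's own.)"""
    # Penalize deviation from the motion-capture reference at this timestep
    tracking_error = np.sum((robot_pose - reference_pose) ** 2)
    r_imitate = np.exp(-2.0 * tracking_error)  # 1.0 when exactly on-reference
    # Weighted sum: follow the demonstration while still pursuing the task
    return w_imitate * r_imitate + (1.0 - w_imitate) * task_reward

# Toy usage with made-up joint angles
pose = np.array([0.1, -0.3, 0.5])
ref = np.array([0.0, -0.25, 0.45])
print(imitation_reward(pose, ref, task_reward=1.0))
```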

Back in the Sixties, one of the hottest toys in history swept America. It was called Etch-A-Sketch, and its popularity was based on a now-laughably simple feature. It was a handheld device, about the size of a small laptop, that let users create crude images by turning two control knobs to draw horizontal, vertical and diagonal lines in aluminum powder sealed inside a plastic case. It allowed experienced artists to compose simple and sometimes recognizable portraits, and it let inexperienced wannabe artists who could barely draw stick figures feel like masters of the genre by generating what, frankly, still looked pretty much like mush. But Etch-A-Sketch was fun, and it has sold some 100 million units to date.

Six decades later, researchers at the Chinese Academy of Sciences and City University of Hong Kong have come up with an invention that actually does what so many wishful enthusiasts imagined Etch-A-Sketch did all those years ago.

DeepFaceDrawing allows users to create stunningly lifelike portraits by inputting loose, non-professional, roughly drawn sketches. It requires no artistic skills and no programming experience.

The Spot Explorer is likely not coming to a workplace near you. Its high price puts it out of reach for all but a few institutions, such as high-end construction firms, energy companies, and government agencies. But just seeing it out in the world, doing more than dancing to “Uptown Funk,” is surely a sign of progress.


These viral robots have racked up big YouTube numbers for years. Now, they’re about to start their day jobs.