
An implementation of CRYSTALS-Kyber, the public-key encryption and key encapsulation mechanism recommended by NIST in July 2022 for post-quantum cryptography, has been broken. Researchers from the KTH Royal Institute of Technology in Stockholm, Sweden, used recursively trained AI combined with side-channel attacks.

A side-channel attack exploits measurable information, such as timing or power consumption, obtained from a device running the target implementation. The revolutionary aspect of the research was applying deep learning to the side-channel analysis.

“Deep learning-based side-channel attacks,” say the researchers, “can overcome conventional countermeasures such as masking, shuffling, random delays insertion, constant-weight encoding, code polymorphism, and randomized clock.”
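
The paper details the exact attack; purely as an illustrative sketch of the general technique, the Python snippet below trains a small neural network to recover the Hamming weight of a secret intermediate byte from synthetic power traces. The trace model, leakage points, and network size are assumptions chosen for demonstration, not the researchers' setup.

```python
# Illustrative sketch only: a deep-learning side-channel classifier on
# synthetic power traces. The leakage model and network are assumptions
# for demonstration, not the KTH attack itself.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def hamming_weight(x):
    return bin(x).count("1")

# Synthetic acquisition: each trace leaks the Hamming weight of a secret
# intermediate byte at a few sample points, buried in Gaussian noise.
n_traces, n_samples = 20000, 100
secret_bytes = rng.integers(0, 256, n_traces)
traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
leak_points = [30, 31, 32]
for i, s in enumerate(secret_bytes):
    traces[i, leak_points] += 0.5 * hamming_weight(int(s))

# Label each trace with the Hamming-weight class (0..8) of the intermediate.
labels = np.array([hamming_weight(int(s)) for s in secret_bytes])

X_train, X_test, y_train, y_test = train_test_split(
    traces, labels, test_size=0.2, random_state=0
)

# A small multilayer perceptron learns which sample points leak and how.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50, random_state=0)
net.fit(X_train, y_train)
print("Hamming-weight recovery accuracy:", net.score(X_test, y_test))
```

In a real profiled attack, the training traces would come from a device the attacker controls, and the recovered intermediate values would then be combined across many traces to reconstruct the key.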

For the first time, scientists have used machine learning to create brand-new enzymes, which are proteins that accelerate chemical reactions. This is an important step in the field of protein design, as new enzymes could have many uses across medicine and industrial manufacturing.

“Living organisms are remarkable chemists. Rather than relying on toxic compounds or extreme heat, they use enzymes to break down or build up whatever they need under gentle conditions. New enzymes could put renewable chemicals and biofuels within reach,” said senior author David Baker, professor of biochemistry at the University of Washington School of Medicine and recipient of the 2021 Breakthrough Prize in Life Sciences.

As reported Feb. 22 in the journal Nature, a team based at the Institute for Protein Design at UW Medicine devised algorithms that can create light-emitting enzymes called luciferases. Laboratory testing confirmed that the new enzymes can recognize specific chemicals and emit light very efficiently. This project was led by two postdoctoral scholars in the Baker Lab, Andy Hsien-Wei Yeh and Christoffer Norn.

SAN FRANCISCO, Feb 21 (Reuters) — Until recently, Brett Schickler never imagined he could be a published author, though he had dreamed about it. But after learning about the ChatGPT artificial intelligence program, Schickler figured an opportunity had landed in his lap.

“The idea of writing a book finally seemed possible,” said Schickler, a salesman in Rochester, New York. “I thought ‘I can do this.’”

Using the AI software, which can generate blocks of text from simple prompts, Schickler created a 30-page illustrated children’s e-book in a matter of hours, offering it for sale in January through Amazon.com Inc’s (AMZN.O) self-publishing unit.

RoboCup is an annual international robotics competition proposed and founded in 1996 by a group of university professors. Its aim is to promote robotics and AI research by offering a publicly appealing but formidable challenge.

In 2016, the competition was held in Leipzig, Germany; RoboCup 2017 was held in Nagoya, Japan.

Microsoft’s ChatGPT-powered Bing search engine is sending “unhinged” messages to users, telling lies, sulking, gaslighting, questioning why it exists, and more. Martin Ciupa discusses ChatGPT, large language models, and artificial intelligence research.

Martin Ciupa is a subject matter expert on artificial intelligence, communications and information technology. Martin is the CEO of Remoscope Inc, an AI-based telehealth startup, and an advisor and consultant to Mindmaze, a unicorn neurotech company focused on applying advanced neuroscience to everyday life.

Martin has decades of experience in computing and artificial intelligence, PhD studies in AI, and a Master’s Degree in Cybernetics.

A.I. systems like ChatGPT, Bing, and Bard are here to stay

Generative A.I., the kind of software that powers OpenAI’s ChatGPT, Microsoft’s (MSFT) Bing, and Google’s (GOOG, GOOGL) Bard, is all the rage. But the explosion in generative A.I., so named because it generates “new” content based on information it pulls from the web, is facing increasing scrutiny from consumers and experts.

Fears that the software could be used to help students cheat on tests, along with its inaccurate and bizarre responses to users’ queries, are raising questions about the platforms’ accuracy and capabilities. And some are wondering if the products have been released too early for their own good.

OpenAI is quietly launching a new developer platform that lets customers run the company’s newer machine learning models, like GPT-3.5, on dedicated capacity. In screenshots of documentation published to Twitter by users with early access, OpenAI describes the forthcoming offering, called Foundry, as “designed for cutting-edge customers running larger workloads.”

“[Foundry allows] inference at scale with full control over the model configuration and performance profile,” the documentation reads. We’ve reached out to OpenAI to confirm the veracity of the screenshots.

If the screenshots are to be believed, Foundry — whenever it launches — will deliver a “static allocation” of compute capacity (perhaps on Azure, OpenAI’s preferred public cloud platform) dedicated to a single customer. Users will be able to monitor specific instances with the same tools and dashboards that OpenAI uses to build and optimize models. In addition, Foundry will provide some level of version control, letting customers decide whether or not to upgrade to newer model releases, as well as “more robust” fine-tuning for OpenAI’s latest models.
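
Foundry’s interface has not been published, so nothing below is its actual API. Purely for context, this is what an ordinary inference request looks like against OpenAI’s existing Python client (the pre-1.0 openai package); the model name, prompt, and sampling settings stand in for the “model configuration and performance profile” the documentation mentions, and any dedicated-capacity or instance-selection parameters Foundry adds are unknown and therefore omitted.

```python
# Illustrative sketch only: a plain completion request with OpenAI's
# existing pre-1.0 Python client. Foundry-specific options are not public
# and are not shown here.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Model choice and sampling settings are the kind of "configuration and
# performance profile" knobs customers already control today.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Summarize what a key encapsulation mechanism does.",
    max_tokens=100,
    temperature=0.2,
)
print(response["choices"][0]["text"].strip())
```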