
BRUSSELS (AP) — NATO member countries that signed a key Cold War-era security treaty froze their participation in the pact on Tuesday just hours after Russia pulled out, raising fresh questions about the future of arms control agreements in Europe. Many of NATO’s 31 allies are parties to the Treaty of Conventional Armed Forces in Europe, which was aimed at preventing Cold War rivals from massing forces at or near their mutual borders. The CFE…


Machine learning technology has the potential to transform nuclear reactor operations, according to a team of experts from the US Department of Energy’s Argonne National Laboratory, who demonstrated how it may improve security and efficiency.

They showcased the application of machine learning in the sodium-cooled fast reactor (SFR), a specialized cutting-edge nuclear reactor.

In recent years, the field of artificial intelligence has witnessed remarkable advancements, with researchers exploring innovative ways to utilize existing technology in groundbreaking applications. One such intriguing concept is the use of WiFi routers as virtual cameras to map a home and detect the presence and locations of individuals, akin to an MRI machine. This revolutionary technology harnesses the power of AI algorithms and WiFi signals to create a unique, non-intrusive way of monitoring human presence within indoor spaces. In this article, we will delve into the workings of this technology, its potential capabilities, and the implications it may have on the future of smart homes and security.

The Foundation of WiFi Imaging: WiFi imaging, also known as radio frequency (RF) sensing, revolves around leveraging the signals emitted by WiFi routers. These signals interact with the surrounding environment, reflecting off objects and people within their range. AI algorithms then process the alterations in these signals to form an image of the indoor space, thus providing a representation of the occupants and their movements. Unlike traditional cameras, WiFi imaging is capable of penetrating walls and obstructions, making it particularly valuable for monitoring people without compromising their privacy.

AI Algorithms in WiFi Imaging: The heart of this technology lies in the powerful AI algorithms that interpret the fluctuations in WiFi signals and translate them into meaningful data. Machine learning techniques, such as neural networks, play a pivotal role in recognizing patterns, identifying individuals, and discerning between static objects and moving entities. As the AI model continuously learns from the WiFi data, it enhances its accuracy and adaptability, making it more proficient in detecting and tracking people over time.
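To make the detection idea concrete, here is a deliberately minimal sketch of the underlying pattern: a person moving through a room perturbs the multipath reflections a receiver sees, widening the spread of the measured signal. Real systems feed rich channel-state data to neural networks; the variance test below is only a toy stand-in for that pattern recognition, and all names and thresholds are illustrative assumptions, not part of any real product.

```python
import random
import statistics

def detect_presence(rssi_samples, threshold=2.0):
    """Flag presence when signal variability exceeds a calibrated threshold.

    A moving body disturbs WiFi multipath reflections, so received signal
    strength fluctuates more than in an empty room. Real RF-sensing systems
    replace this simple test with trained neural networks."""
    return statistics.stdev(rssi_samples) > threshold

random.seed(0)
# Empty room: RSSI hovers near -50 dBm with only small receiver noise.
empty_room = [-50 + random.gauss(0, 0.5) for _ in range(100)]
# A person walking perturbs the reflections, widening the spread.
occupied = [-50 + random.gauss(0, 5.0) for _ in range(100)]

print(detect_presence(empty_room))  # False
print(detect_presence(occupied))    # True
```

A trained model would additionally localize and track the occupant rather than emit a single yes/no flag, but the input it learns from is this same kind of signal fluctuation.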

A scientist claims to have developed an inexpensive system for using quantum computing to crack RSA, which is the world’s most commonly used public key algorithm.


The response from multiple cryptographers and security experts is: Sounds great if true, but can you prove it? “I would be very surprised if RSA-2048 had been broken,” Alan Woodward, a professor of computer science at England’s University of Surrey, told me.
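For readers unfamiliar with what "breaking RSA" would mean, the toy example below uses deliberately tiny primes to show the structure of the problem: recovering the private key reduces to factoring the public modulus. Trial division is instant at this scale but infeasible for the 2048-bit moduli used in practice; that factoring step is what Shor's algorithm on a large quantum computer would in principle accelerate. The numbers here are textbook-style illustrations, not real key material.

```python
# Toy RSA with deliberately tiny primes; real keys use ~2048-bit moduli.
p, q = 61, 53
n, e = p * q, 17                 # public key (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the private key

# "Breaking RSA" means factoring n. Trivial here, infeasible classically
# for 2048-bit moduli -- the step a quantum computer would speed up.
factor = next(f for f in range(2, n) if n % f == 0)
recovered_d = pow(e, -1, (factor - 1) * (n // factor - 1))
assert pow(cipher, recovered_d, n) == msg  # attacker can now decrypt
```

The skepticism quoted above is about scale: no published quantum hardware comes close to factoring a 2048-bit modulus, which is why experts are demanding proof.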

Atlassian has discovered yet another critical vulnerability in its Confluence Data Center and Server collaboration and project management platform, and it is urging customers to patch the problem immediately. The latest advisory from Atlassian describes CVE-2023-22518 as an improper authorization vulnerability that affects all on-premises versions of Confluence.

It is the second critical vulnerability Atlassian has reported in a month for its widely used Confluence Data Center and Server platform, and one of numerous security issues the company has disclosed over the past year. The previous bulletin (CVE-2023-22515) revealed a vulnerability that could allow an attacker to create unauthorized Confluence administrator accounts and thereby gain access to instances. That vulnerability carried the maximum severity score of 10 and was initially discovered by customers who reported they may have been breached through it.

To date, Atlassian is not aware of any active exploitation of the newest vulnerability, which has a severity score of 9.1, though the company issued a statement encouraging customers to apply the patch. “We have discovered that Confluence Data Center and Server customers are vulnerable to significant data loss if exploited by an unauthenticated attacker,” Atlassian CISO Bala Sathiamurthy warned in a statement. “Customers must take immediate action to protect their instances.”

An international team of scientists has proposed a new method of remotely monitoring nuclear stockpiles using mirrors and radio waves.

An international team of scientists has devised an innovative method of using radio waves to monitor a nation’s nuclear stockpile remotely. The research, conducted by IT security experts from Germany and the United States, could help build trust between nuclear powers by letting rivals verify that agreed nuclear disarmament treaties are being honored. It could also provide a “heads up” if a nuclear power removes stored warheads, which could be an indication of intended use.


Image: Remote nuke monitoring (Johannes Tobisch et al., 2023).
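The verification idea can be sketched as a challenge-response check: radio reflections inside a storage room form a fingerprint that depends both on an adjustable mirror configuration (the verifier's unpredictable challenge) and on where objects in the room sit, so moving a warhead changes the measured pattern. The toy simulation below assumes this setup; the function names, seeds, and tolerance are illustrative inventions, not the researchers' actual protocol.

```python
import random

def room_fingerprint(mirror_setting, env_state=0):
    """Simulated radio fingerprint: reflections depend on the verifier's
    mirror configuration *and* on the arrangement of objects (env_state)."""
    rng = random.Random(mirror_setting * 1000 + env_state)
    return [rng.uniform(-1, 1) for _ in range(8)]

def fingerprints_match(a, b, tolerance=0.05):
    # A fresh measurement must reproduce the baseline within tolerance.
    return all(abs(x - y) <= tolerance for x, y in zip(a, b))

challenge = 1234  # verifier picks an unpredictable mirror setting
baseline = room_fingerprint(challenge, env_state=0)

# Nothing moved: the same challenge reproduces the same fingerprint.
print(fingerprints_match(baseline, room_fingerprint(challenge, env_state=0)))  # True

# A warhead was moved (environment changed): the fingerprint no longer matches.
print(fingerprints_match(baseline, room_fingerprint(challenge, env_state=1)))  # False
```

Because the monitored party cannot predict which mirror setting will be requested, it cannot pre-record fingerprints for every challenge, which is what makes the scheme hard to spoof.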

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely to materialize in the short term.

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.

A team of scientists at UC San Francisco has reported a way to leverage cancers’ unique metabolic profile so that drugs target only cancer cells, Freethink reports.


To make matters worse, cancer cells sometimes die only when patients take relatively high doses of a drug. This is because metabolic activity is often greater in cancer cells than in normal cells. For instance, some cancer cells have more of the MEK enzyme, meaning more cobimetinib is required to stop those cells from replicating. Unfortunately, the doses cancer patients receive often approach or even exceed the levels at which the drug causes toxicities in healthy tissues.

Cancer cells hoard iron at a far greater rate than healthy cells, according to previous studies. Although the reason for this remains unclear, the UCSF team realized this could be leveraged to increase the specificity of cancer drugs. If a cancer drug, such as cobimetinib, were only activated in the iron-rich environment of a cancer cell, the drug would be inert when it interacts with healthy cells. It’s something like a two-factor authentication system for cancer drugs.
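The "two-factor" analogy above can be written out as a toy decision rule: the prodrug only becomes active in an iron-rich cell (factor one), and only an activated drug can inhibit its target (factor two). The function name and threshold below are illustrative inventions for the analogy, not measured pharmacological values.

```python
# Toy model of the "two-factor" gate: activation requires iron, and
# inhibition requires both activation and the presence of the target.
IRON_ACTIVATION_THRESHOLD = 5.0  # arbitrary illustrative units

def drug_inhibits_mek(iron_level, has_mek_target):
    activated = iron_level >= IRON_ACTIVATION_THRESHOLD  # factor 1: iron-rich cell
    return activated and has_mek_target                  # factor 2: target present

print(drug_inhibits_mek(iron_level=9.0, has_mek_target=True))  # cancer cell: True
print(drug_inhibits_mek(iron_level=1.5, has_mek_target=True))  # healthy cell: False
```

Healthy cells may carry the same MEK target, but because they hoard far less iron, the first factor fails and the drug stays inert in them.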

To test this, the scientists synthesized an iron-activated (IA) cobimetinib that only blocks MEK in an iron-rich environment. The experimental drug inhibited tumor growth as efficiently as standard cobimetinib, but it spared healthy cells. In a mouse lung cancer model, mice receiving either IA-cobimetinib or standard cobimetinib developed fewer lung lesions and showed prolonged overall survival compared with vehicle-treated mice. When the scientists evaluated IA-cobimetinib’s effect on healthy human retinal and skin cells, they found the healthy tissue was about 10-fold less sensitive to IA-cobimetinib than the cancer cells.

BRUSSELS, Oct 29 (Reuters) — The Group of Seven industrial countries will on Monday agree a code of conduct for companies developing advanced artificial intelligence systems, a G7 document showed, as governments seek to mitigate the risks and potential misuse of the technology.

The voluntary code of conduct will set a landmark for how major countries govern AI, amid privacy concerns and security risks, the document seen by Reuters showed.

Leaders of the Group of Seven (G7) economies made up of Canada, France, Germany, Italy, Japan, Britain and the United States, as well as the European Union, kicked off the process in May at a ministerial forum dubbed the “Hiroshima AI process”.