
Eight countries have signed the Artemis Accords, a set of guidelines surrounding the Artemis Program for crewed exploration of the moon. The United Kingdom, Italy, Australia, Canada, Japan, Luxembourg, the United Arab Emirates and the US are now all participants in the project, which aims to return humans to the moon by 2024 and establish a crewed lunar base by 2030.

This may sound like progress. Nations have struggled for years with how to govern a human settlement on the moon and how to manage any resources extracted there. But several key countries have serious concerns about the accords and have so far refused to sign them.

Previous attempts to govern space have been through painstakingly negotiated international treaties. The Outer Space Treaty of 1967 laid down the foundational principles for human space exploration: it should be peaceful and benefit all mankind, not just one country. But the treaty offers little in the way of detail. The Moon Agreement of 1979 attempted to prevent the commercial exploitation of outer-space resources, but only a small number of states have ratified it – the US, China and Russia haven’t.

A gene therapy that could restore the fading sight of the elderly is being tested on humans for the first time after positive results in blind mice.

It could be used to treat age-related macular degeneration, a common condition that usually first affects people in their 50s and 60s, scientists said.

The treatment involves a one-time injection of a modified virus into the eye. This viral vector is engineered to carry a synthetic gene encoding a protein that plays a critical role in the perception of light.

(Reuters) — India’s richest state, Maharashtra, has extended an invitation to U.S. electric-car maker Tesla Inc, weeks after its Chief Executive Officer Elon Musk suggested entering the country next year.

In a tweet on Thursday, state tourism and environment minister Aaditya Thackeray said he and industries minister Subhash Desai had held a video call with Tesla executives earlier in the day to invite the company to the state.

Earlier this month, Musk said “Next year for sure” on Twitter in reply to a post showing a photograph of a T-shirt bearing the message: “India wants Tesla”.

A new Tesla software leak revealed that the automaker is planning to bring a HEPA filter, enabling Tesla’s Bioweapon Defense Mode, to Model Y.

With the Model X, and later the Model S, Tesla began installing massive HEPA-rated air filters in its vehicles.

The idea is that by developing a more powerful air filtration system, Tesla can not only help reduce local air pollution through its electric vehicles but also shield the occupants of those vehicles from air pollution’s direct effects.

Machine learning (ML) is making incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Yet many businesses, eager to capitalize on advancements in ML, have not scrutinized the security of their ML systems. Today, together with MITRE and with contributions from 11 organizations including IBM, NVIDIA, and Bosch, Microsoft is releasing the Adversarial ML Threat Matrix, an industry-focused open framework that empowers security analysts to detect, respond to, and remediate threats against ML systems.

Over the last four years, Microsoft has seen a notable increase in attacks on commercial ML systems. Market reports are also drawing attention to the problem: Gartner’s Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that “Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.” Despite these compelling reasons to secure ML systems, Microsoft’s survey of 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five of the 28 indicated that they don’t have the right tools in place to secure their ML systems, and they are explicitly looking for guidance. This lack of preparation is not limited to smaller organizations: we spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations alike.

Our survey also pointed to a marked cognitive dissonance, especially among security analysts, who generally believe that risk to ML systems is a futuristic concern. This is a problem, because cyberattacks on ML systems are now on the uptick. In 2020, for instance, we saw the first CVE filed for an ML component in a commercial system, and SEI/CERT issued its first vulnerability note drawing attention to how many current ML systems can be subjected to arbitrary misclassification attacks that assault the confidentiality, integrity, and availability of those systems. The academic community has been sounding the alarm since 2004 and has routinely shown that ML systems, if not mindfully secured, can be compromised.
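To make the risk concrete, here is a minimal sketch of the adversarial-sample attack class mentioned above, using the widely known fast gradient sign method (FGSM). The PyTorch classifier, input tensors, and perturbation budget below are illustrative assumptions, not part of the Adversarial ML Threat Matrix itself.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Hypothetical example: `model` is any differentiable image classifier,
    # `image` a batch of pixel tensors in [0, 1], `label` the true classes.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take one step in the direction that increases the loss, then keep
    # pixel values in the valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

Against an undefended model, a perturbation this small is typically imperceptible to a human yet enough to flip the predicted class – precisely the kind of misclassification risk the SEI/CERT note describes.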