
In 2022, leaders in the U.S. military technology and cybersecurity community called 2023 the “reset year” for quantum computing. They estimated that the time required to make systems quantum-safe roughly equals the time until the first quantum computers capable of threatening that security become available: both around four to six years. It is vital that industry leaders quickly come to grips with the security issues around quantum computing and take action to resolve the problems that will arise when this powerful technology surfaces.

Quantum computing is a cutting-edge technology that presents a unique set of challenges and promises unprecedented computational power. Unlike traditional computing, which operates using binary logic (0s and 1s) and sequential calculations, quantum computing works with quantum bits, or qubits, which can exist in a superposition of 0 and 1. This allows quantum computers to explore an enormous number of possibilities in parallel, exploiting the probabilistic nature of quantum mechanics.
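The superposition idea can be illustrated with a toy classical simulation. The sketch below (a minimal NumPy illustration, not a depiction of real quantum hardware or any production quantum SDK) represents a single qubit as a two-component complex state vector and applies a Hadamard gate to place it in an equal superposition of 0 and 1:

```python
import numpy as np

# A qubit is a 2-component complex state vector; |0> = [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate maps a basis state into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0  # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(state) ** 2
print(probs)  # each outcome has probability 0.5
```

Measuring the qubit collapses it to 0 or 1 with equal probability; the computational power comes from manipulating many such amplitudes at once before measurement.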

What do you think? China is doing it. The West is going to have to keep up. Have you seen the Netflix series Altered Carbon? It’s like that.


A U.S. Army video shows its concept of the soldier of the future. At first glance, the concept looks like nothing more than a better-equipped soldier.

But the video mentions “neural enhancement.” That can mean a brain implant that connects a human to computers. The defense agency DARPA has been working on an advanced implant that would essentially put the human brain “online.” There could also be eye and ear implants and other circuitry under the skin to make the optimal fighting machine.

Americans will have to decide whether this is ethical because some in our military clearly want it.

FULL REPORT: https://www1.cbn.com/cbnnews/national-security/2021/april/ne…-soldiers.

New details of Musk’s involvement in the Ukraine-Russia war revealed in his biography.

Elon Musk holds many titles. He is the CEO of Tesla and SpaceX and owns the social media company X, which was recently rebranded from Twitter. New details about his role come from an excerpt of his biography published in the Washington Post.

According to the excerpt from Walter Isaacson’s book, Musk secretly disabled his company Starlink’s satellite communications network, which the Ukrainian military was using to attack the Russian naval fleet in Sevastopol, Crimea. The Ukrainian army was using Starlink as a guide to target Russian ships and attack them with six small… More.


Musk’s biographer alleges he prevented nuclear war between Ukraine and Russia by turning off Starlink satellite network near Crimea, but Musk says, ‘SpaceX did not deactivate anything’.

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already… More.


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove, California, for a small expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.

Other questions in this canvassing invited experts’ views on hopeful developments that could occur in the next decade and examples of specific applications that might emerge. What will human-technology co-evolution look like by 2030? Participants expect the rate of change to fall anywhere in a range from incremental to extremely impactful. Generally, they expect AI to continue to be targeted toward efficiencies in workplaces and other activities, and they say it is likely to be embedded in most human endeavors.

The greatest share of participants in this canvassing said automated systems driven by artificial intelligence are already improving many dimensions of their work, play and home lives, and they expect this to continue over the next decade. While they worry about the accompanying negatives of human-AI advances, they hope for broad changes for the better as networked, intelligent systems revolutionize everything from the most pressing professional work to hundreds of the little “everyday” aspects of existence.

One respondent’s answer covered many of the improvements experts expect as machines sit alongside humans as their assistants and enhancers. An associate professor at a major university in Israel wrote, “In the coming 12 years AI will enable all sorts of professions to do their work more efficiently, especially those involving ‘saving life’: individualized medicine, policing, even warfare (where attacks will focus on disabling infrastructure and less in killing enemy combatants and civilians). In other professions, AI will enable greater individualization, e.g., education based on the needs and intellectual abilities of each pupil/student. Of course, there will be some downsides: greater unemployment in certain ‘rote’ jobs (e.g., transportation drivers, food service, robots and automation, etc.).”

North Korea-linked hackers have stolen hundreds of millions of dollars’ worth of crypto to fund the regime’s nuclear weapons programs, research shows.

So far this year, from January to Aug. 18, North Korea-affiliated hackers stole $200 million worth of crypto — accounting for over 20% of all stolen crypto this year, according to blockchain intelligence firm TRM Labs.

“In recent years, there has been a marked rise in the size and scale of cyber attacks against cryptocurrency-related businesses by North Korea. This has coincided with an apparent acceleration in the country’s nuclear and ballistic missile programs,” said TRM Labs in a June discussion with North Korea experts.

SEOUL, Sept 3 (Reuters) — North Korea conducted a simulated tactical nuclear attack drill that included two long-range cruise missiles in an exercise to “warn enemies” the country would be prepared in case of nuclear war, the KCNA state news agency said on Sunday.

KCNA said the drill was successfully carried out on Saturday and two cruise missiles carrying mock nuclear warheads were fired towards the West Sea of the Korean peninsula and flew 1,500 km (930 miles) at a preset altitude of 150 meters.

Pyongyang also said it would bolster its military deterrence against the United States and South Korea.


Ukraine’s security agency reports that the Russian military intelligence service GRU can access compromised Android devices using new malware called Infamous Chisel. The malware is associated with the threat actor Sandworm, previously attributed to the GRU’s Main Centre for Special Technologies (GTsST).

Sandworm uses this new malware to target Android devices used by the Ukrainian military. It enables unauthorized access to compromised devices and is designed to scan files, monitor network traffic, and steal information.

The United States Air Force has completed a critical AI-controlled autonomous flight of its modified Osprey Mark III unmanned aerial system.

The flight, conducted on July 20, 2023, was the UAS’s first fully autonomous test flight and formed part of the USAF’s larger Autonomy, Data, and AI Experimentation (ADAx) Proving Ground effort, specifically the Autonomy Prime Environment for Experimentation (APEX), a subset of ADAx. The trial was conducted to evaluate and operationalize artificial intelligence and autonomy concepts to support warfighters on the evolving… More.


