
Well before Washington banned Nvidia’s exports of high-performance graphics processing units to China, the country’s tech giants had been hoarding them in anticipation of an escalating tech war between the two nations.

Baidu, one of the tech firms building China’s counterparts to OpenAI, has secured enough AI chips to keep training its ChatGPT equivalent Ernie Bot for the “next year or two,” the firm’s CEO Robin Li said on an earnings call this week.

“Also, inference requires less powerful chips, and we believe our chip reserves, as well as other alternatives, will be sufficient to support lots of AI-native apps for the end users,” he said. “And in the long run, having difficulties in acquiring the most advanced chips inevitably impacts the pace of AI development in China. So, we are proactively seeking alternatives.”

They are faster than ambulances in situations where timing is key.


Karolinska Institutet researchers have been investigating the idea of sending drones equipped with automated external defibrillators (AEDs) to patients in cardiac arrest instead of ambulances and have now found that, in more than half of the cases, the drones arrived three minutes ahead of the vehicles. In addition, in the majority of confirmed cardiac arrest cases, the drone-delivered defibrillator was used to keep the condition from worsening or turning fatal.

The single most important factor

“The use of an AED is the single most important factor in saving lives. We have been deploying drones equipped with AED since the summer of 2020 and show in this follow-up study that drones can arrive at the scene before an ambulance by several minutes. This lead time has meant that the AED could be used by people at the scene in several cases,” said Andreas Claesson, Associate Professor at the Center for Cardiac Arrest Research at the Department of Clinical Research and Education, Södersjukhuset, Karolinska Institutet, and principal investigator of the study.

The robot can help the construction industry overcome its challenges and reduce its environmental impact.



A team of researchers has developed a 12-ton (approximately 26,000-pound) autonomous robot that can construct stone walls from natural and recycled materials using advanced technologies. This could help the construction industry overcome its challenges of low productivity, high waste, and labor shortages while reducing its environmental impact and improving its sustainability.

Upon completion of Mission 1, Astrolab’s FLEX will become the largest and most capable rover to ever travel the Moon, claims the company.



One such firm is Monaco-based lunar technology startup Venturi Astrolab, which initiated its electric lunar rover programme in 2019. The Venturi group has leveraged its experience in designing and manufacturing high-performance electric vehicles since 2000 to develop its maiden rover, the Flexible Logistics and Exploration (FLEX) rover, which is scheduled to land on the Moon in 2026.

Volocopter plans to focus on cities that are expediting infrastructure, routes, regulations, and digital networks, citing global economic uncertainties and the crucial role of local partners.



The eagerly anticipated initiative has been paused due to challenges in securing local partners willing to share the financial responsibility for the cutting-edge technology involved.

The engineers at Fourier Intelligence have successfully combined functionality with a touch of creativity, making the GR-1 more than just a caregiver. Its 300-newton-meter (Nm) hip actuators, equivalent to 221 pound-feet (lb-ft), empower the GR-1 to lift a remarkable 110 pounds (50 kilograms), an impressive feat for a robot of its stature. This capability makes the GR-1 valuable in assisting patients with various activities, from getting up from a bed or toilet to transferring into a wheelchair.

“The world isn’t doing terribly well in averting global ecological collapse,” says Dr. Florian Rabitz, a chief researcher at Kaunas University of Technology (KTU), Lithuania, the author of a new monograph, “Transformative Novel Technologies and Global Environmental Governance,” recently published by Cambridge University Press.

Greenhouse gas emissions, species extinction, ecosystem degradation, chemical pollution, and more are threatening the Earth’s future. Despite decades of international agreements and countless high-level summits, success in forestalling this existential crisis has remained elusive, says Dr. Rabitz.

In his new monograph, the KTU researcher delves into the intersection of cutting-edge technological solutions and the global environmental crisis. The author explores how international institutions respond (or fail to respond) to high-impact technologies that have been the subject of extensive debate and controversy.

IVO chief executive Richard Mansell said his company performed 100 hours of vacuum chamber testing before the launch, during which the quantum drive produced a small amount of thrust.

“Deploying Quantum Drive into orbit in a Rogue satellite on SpaceX Transporter 9 is a milestone for the future of space propulsion,” Mansell said.

“Quantum Drive’s capability allows Rogue to produce new satellite vehicles with unlimited Delta V.”

What role should text-generating large language models (LLMs) have in the scientific research process? According to a team of Oxford scientists, the answer — at least for now — is: pretty much none.

In a new essay, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-powered tools like chatbots to assist in scientific research on the grounds that AI’s penchant for hallucinating and fabricating facts, combined with the human tendency to anthropomorphize the human-mimicking word engines, could lead to larger information breakdowns — a fate that could ultimately threaten the fabric of science itself.

“Our tendency to anthropomorphize machines and trust models as human-like truth-tellers, consuming and spreading the bad information that they produce in the process,” the researchers write in the essay, which was published this week in the journal Nature Human Behaviour, “is uniquely worrying for the future of science.”

In a letter to the company’s board of directors, OpenAI researchers are said to have warned of an AI discovery that could pose a threat to humanity.

This was reported by Reuters, citing two sources familiar with the matter. According to Reuters, the letter was also a factor in Altman’s firing, though not the only reason.

According to a source from The Verge, the board never received such a letter, which is why it played no role in Altman’s firing. Reuters says it has not seen the letter. The Information reports not on the letter itself, but on the “Q*” breakthrough described in it.