
Soft artificial muscles developed for robot motion

Researchers at ETH Zurich have developed artificial muscles for robot motion. Their solution offers several advantages over previous technologies: it can be used wherever robots need to be soft rather than rigid, or where they need more sensitivity when interacting with their environment.

Many roboticists dream of building robots that are not just a combination of metal or other hard materials and motors but also softer and more adaptable.

Soft robots could interact with their environment in a completely different way; for example, they could cushion impacts the way human limbs do, or grasp an object delicately. This would also offer benefits regarding energy efficiency: robot motion today usually requires a lot of energy to maintain a position, whereas soft systems could store energy well, too. So, what could be more obvious than to take the human muscle as a model and attempt to recreate it?

Quantum material-based spintronic devices operate at ultra-low power

As artificial intelligence technologies such as ChatGPT are utilized in various industries, the role of high-performance semiconductor devices for processing large amounts of information is becoming increasingly important. Among them, spin memory is attracting attention as a next-generation electronics technology because it is suitable for processing large amounts of information with lower power than the silicon semiconductors that are currently mass-produced.

Utilizing recently discovered quantum materials in spin memory is expected to dramatically improve performance by improving the signal ratio and reducing power consumption. To achieve this, however, it is necessary to develop technologies that control the properties of quantum materials through electrical means such as current and voltage.

Dr. Jun Woo Choi of the Center for Spintronics Research at the Korea Institute of Science and Technology (KIST) and Professor Se-Young Park of the Department of Physics at Soongsil University have announced the results of a collaborative study showing that ultra-low-power memory can be fabricated from quantum materials. The findings are published in the journal Nature Communications.

New AI model designs proteins to deliver gene therapy

Researchers at the University of Toronto have used an artificial intelligence framework to redesign a crucial protein involved in the delivery of gene therapy.

The study, published in Nature Machine Intelligence, describes new work optimizing proteins to mitigate immune responses, thereby improving the efficacy of gene therapy and reducing side effects.

“Gene therapy holds immense promise, but the body’s pre-existing immunity to viral vectors greatly hampers its success. Our research zeroes in on hexons, a fundamental protein in adenovirus vectors, which—but for the immune problem—hold huge potential for gene therapy,” says Michael Garton, an assistant professor at the Institute of Biomedical Engineering in the Faculty of Applied Science & Engineering.

The Professions of the Future (1)

We are witnessing a professional revolution where the boundaries between man and machine slowly fade away, giving rise to innovative collaboration.


As Artificial Intelligence (AI) continues to advance by leaps and bounds, it’s impossible to overlook the profound transformations that this technological revolution is imprinting on the professions of the future. A paradigm shift is underway, redefining not only the nature of work but also how we conceptualize collaboration between humans and machines.

As creator of the ETER9 Project (2), I perceive AI not only as a disruptive force but also as a powerful tool to shape a more efficient, innovative, and inclusive future. As we move forward in this new world, it’s crucial for each of us to contribute to building a professional environment that celebrates the interplay between humanity and technology, where the potential of AI is realized for the benefit of all.

Licensing NASA Tech: Bridging Government to Commerce

NASA is well-known for advancing technologies for space exploration, whether sending spacecraft to other worlds or equipping the International Space Station (ISS). A little-known fact, however, is that these same technologies can be licensed for commercial use to benefit humankind here on Earth through NASA’s Spinoff program, part of the agency’s Space Technology Mission Directorate and its Technology Transfer program. The licensed technologies span fields such as communications, medicine, weather forecasting, and even the mattresses we sleep on, and are all featured in NASA’s annual Spinoff book; the 2024 edition is the latest to share these technologies with the private sector.

“As NASA’s longest continuously running program, we continue to increase the number of technologies we license year-over-year while streamlining the development path from the government to the commercial sector,” Daniel Lockney, Technology Transfer Program Executive at NASA Headquarters, said in a statement. “These commercialization success stories continually prove the benefits of transitioning agency technologies into private hands, where the real impacts are made.”

One example is a medical-grade smartwatch called EmbracePlus developed by Empatica Inc., which uses machine learning algorithms to monitor a person’s vitals, including sleep patterns, heart rate, and oxygen flow. EmbracePlus reached mass production status in 2021 and has been approved by the U.S. Food and Drug Administration (FDA) with the goal of using the smartwatch for astronauts on future spaceflights, including the upcoming Artemis missions, along with medical patients back on Earth.

Google releases new Bard Gemini model that is on par with GPT-4 in human evaluation

Google’s Bard chatbot is powered by a new Gemini model. Early users rate it as similar to GPT-4.

Google’s head of AI, Jeff Dean, announced the new Gemini model on X. It is a model from the Gemini Pro family with the suffix “scale.”

Thanks to the Gemini updates, Bard is “much better” and has “many more capabilities” compared to the launch in March, according to Dean.

Large language models improve annotation of prokaryotic viral proteins

The latest in the intersection of large language models and life science: virus sequences, virus proteins, and their function.

Ocean viral proteome annotations are expanded by a machine learning approach that is not reliant on sequence homology and can annotate sequences not homologous to those seen in training.
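The homology-free idea can be sketched in miniature: represent each protein by a fixed-length embedding and assign function by nearest-neighbor lookup in embedding space, with no sequence alignment involved. This is a minimal illustration, not the published method; the `embed` function below uses a toy amino-acid-composition vector as a stand-in for a learned protein language-model embedding, and the labeled sequences and function labels are hypothetical.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def embed(seq):
    # Toy stand-in for a protein language-model embedding:
    # a normalized 20-dim amino-acid-composition vector.
    counts = Counter(seq)
    total = len(seq)
    return [counts.get(aa, 0) / total for aa in AMINO_ACIDS]

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def annotate(query, labeled):
    # Transfer the function label of the nearest labeled protein
    # in embedding space -- no homology search is performed, so
    # the query need not align to anything seen before.
    q = embed(query)
    return max(labeled, key=lambda item: cosine(q, embed(item[0])))[1]

# Hypothetical reference set of (sequence, function label) pairs.
labeled = [
    ("MKKLLPTAAAGLLLLAAQPAMA", "tail fiber"),
    ("MDEDEDEDSSSEEENNN", "capsid"),
]
```

In the actual approach, the hand-made composition vector would be replaced by embeddings from a trained language model, which is what lets the annotation generalize to sequences with no detectable homology to the training data.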

Elon Musk’s Neuralink implants first brain chip in human

The ambition is to supercharge human capabilities, treat neurological disorders like ALS or Parkinson’s, and maybe one day achieve a symbiotic relationship between humans and artificial intelligence.

“The first human received an implant from Neuralink yesterday and is recovering well,” Musk said in a post on X, formerly Twitter.

“Initial results show promising neuron spike detection,” he added.

As AI Destroys Search Results, Google Fires Workers in Charge of Improving It

Amid a massive wave of tech company layoffs in favor of AI, Google is firing thousands of contractors tasked with making its namesake search engine work better.

As Vice reports, news of the company ending its contract with Appen — a data training firm that employs thousands of poorly paid gig workers in developing countries to maintain, among other things, Google’s search algorithm — coincidentally comes a week after a new study found that the quality of its search engine’s results has indeed gotten much worse in recent years.

Back in late 2022, journalist Cory Doctorow coined the term “enshittification” to refer to the demonstrable worsening of all manner of online tools, which he said was by design as tech giants seek to extract more and more money out of their user bases. Google Search was chief among the writer’s examples of the enshittification effect in a Wired article published last January, and as the new study out of Germany found, that effect can be measured.

The most powerful AI processing supercomputer in the world is set to be built in Germany, and is planned to become operational within a mere year. Crikey.

AI processing can take a huge amount of computing power, but by the looks of this latest joint project from the Jülich Supercomputing Center and French computing provider Eviden, power will not be in short supply.


“But can it run Crysis” is an old gag, but I’m still going to see if I get away with it.