
Apptronik, a NASA-backed robotics company, has unveiled Apollo, a humanoid robot that could revolutionize the workforce — because there’s virtually no limit to the number of jobs it can do.

“The focus for Apptronik is to build one robot that can do thousands of different things,” Jeff Cardenas, the company’s co-founder and CEO, told Freethink. “The best way to think of it is kind of like the iPhone of robots.”

The challenge: Robots have been automating repetitive tasks for decades — instead of having a person weld the same two car parts together 100 times a day, for example, an automaker might just add a welding robot to that segment of the assembly line.

Noise-canceling headphones are a godsend for living and working in loud environments. They automatically identify background sounds and cancel them out for much-needed peace and quiet. However, typical noise-canceling fails to distinguish between unwanted background sounds and crucial information, leaving headphone users unaware of their surroundings.
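
To make the basic mechanism concrete, here is a minimal Python sketch of active noise cancellation in its simplest form: sample the ambient sound with an outward-facing microphone, phase-invert it, and mix the resulting anti-noise into the playback so the two roughly cancel at the ear. This illustrates the principle only, not any headphone maker’s actual algorithm, which would add adaptive filtering for the acoustic path between microphone and eardrum.

    import numpy as np

    def cancel_noise(ambient: np.ndarray, playback: np.ndarray) -> np.ndarray:
        # Naive active noise cancellation: phase-invert the signal from an
        # outward-facing reference mic and mix it with the desired audio.
        # Real ANC also models the mic-to-eardrum acoustic path with an
        # adaptive filter (e.g. LMS); that is deliberately omitted here.
        anti_noise = -ambient
        return playback + anti_noise

    # Toy example: a 100 Hz hum cancelled against silence.
    t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
    hum = 0.5 * np.sin(2 * np.pi * 100 * t)
    driver = cancel_noise(hum, np.zeros_like(hum))
    residual = hum + driver          # what would reach the ear
    print(np.max(np.abs(residual)))  # ~0.0: the hum is gone

The limitation described above follows directly from this design: the inverted signal cancels everything the reference microphone hears, including announcements or alarms the listener may need to notice.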

ChatGPT, now powered by GPT-4o, can speak and sing in real time. It can even view the real world through your phone’s camera and describe what’s happening as it unfolds.


The AI race has just shifted into high gear, with US artificial intelligence pioneer OpenAI rolling out a new interface that works with audio and vision as well as text. The new model, called GPT-4o, goes beyond familiar chatbot features and is capable of real-time, near-natural voice conversations. OpenAI will also make it available to free users.

ChatGPT was already able to talk to users, but with long pauses while it processed the data, and it often seemed sluggish. The reason, the company explained, was that the voice feature chained three internal applications: one to transcribe the spoken input into text, one to process it and generate a reply, and one to convert that reply back into speech. Each hand-off added delay.
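
As an illustration of why that cascade was slow, the sketch below chains three stand-in stages; the function names and timings are placeholders invented for this example, not OpenAI’s actual API or measured latencies. The point is simply that the delays of sequential stages add up.

    import time

    # Illustrative stubs only; not OpenAI's real interfaces or latencies.
    def transcribe(audio: bytes) -> str:
        time.sleep(0.8)                      # stage 1: speech-to-text
        return "what's the weather like?"

    def generate_reply(prompt: str) -> str:
        time.sleep(1.5)                      # stage 2: process and generate
        return "Sunny with a light breeze."

    def synthesize(text: str) -> bytes:
        time.sleep(0.7)                      # stage 3: text-to-speech
        return text.encode()

    def cascaded_voice_mode(audio: bytes) -> bytes:
        # Each stage must finish before the next begins, so latencies stack.
        start = time.time()
        reply = synthesize(generate_reply(transcribe(audio)))
        print(f"round trip: {time.time() - start:.1f} s")  # ~3.0 s here
        return reply

    cascaded_voice_mode(b"\x00")

GPT-4o’s pitch is that a single model handles audio, vision and text end to end, removing the hand-offs between stages rather than merely speeding each one up.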

We talk to computer scientist Mike Cook of the renowned King’s College London about the new GPT-4o development.


What if your earbuds could do everything your smartphone can do already, except better? What sounds a bit like science fiction may actually not be so far off. A new class of synthetic materials could herald the next revolution of wireless technologies, enabling devices to be smaller, require less signal strength and use less power.

The key to these advances lies in what experts call phononics, which is similar to photonics. Both exploit similar physical laws and offer new ways to advance technology. While photonics takes advantage of photons – or light – phononics does the same with phonons, the quasiparticles that carry mechanical vibrations through a material, akin to sound but at frequencies far too high to hear.
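
For a rough sense of scale, taking 1 GHz as a representative operating frequency for phononic devices of this kind (an assumption for illustration; the article does not quote the paper’s exact frequencies) and 20 kHz as the upper limit of human hearing:

    \[
    \frac{f_{\text{phonon}}}{f_{\text{audible}}} \approx \frac{1\ \text{GHz}}{20\ \text{kHz}} = 5 \times 10^{4},
    \]

so these vibrations oscillate tens of thousands of times faster than anything the ear can register.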

In a paper published in Nature Materials (“Giant electron-mediated phononic nonlinearity in semiconductor–piezoelectric heterostructures”), researchers at the University of Arizona Wyant College of Optical Sciences and Sandia National Laboratories report clearing a major milestone toward real-world applications based on phononics. By combining highly specialized semiconductor materials and piezoelectric materials not typically used together, the researchers were able to generate giant nonlinear interactions between phonons. Together with previous innovations demonstrating amplifiers for phonons using the same materials, this opens up the possibility of making wireless devices such as smartphones or other data transmitters smaller, more efficient and more powerful.

Superradiant atoms offer a groundbreaking method for measuring time with an unprecedented level of precision. In a recent study published in the journal Nature Communications, researchers from the University of Copenhagen present a new method for measuring the duration of a second that overcomes some of the limitations even today’s most advanced atomic clocks encounter. This advancement could have broad implications in areas such as space exploration, volcanic monitoring, and GPS systems.

The second, which is the most precisely defined unit of measurement, is currently measured by atomic clocks in different places around the world that together tell us what time it is. Using radio waves, atomic clocks continuously send signals that synchronize our computers, phones, and watches.

Oscillations are the key to keeping time. In a grandfather clock, the oscillation comes from a pendulum swinging from side to side once every second, while in an atomic clock it is a laser beam, tuned to an energy transition in strontium, that oscillates about a million billion times per second.
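
A back-of-the-envelope comparison, using the strontium optical clock transition near 429 THz (a figure from the clock literature, not quoted in the article), shows why the faster oscillator gives a much finer ruler for time:

    \[
    T_{\text{pendulum}} = \frac{1}{1\ \text{Hz}} = 1\ \text{s},
    \qquad
    T_{\text{Sr}} = \frac{1}{4.29 \times 10^{14}\ \text{Hz}} \approx 2.3\ \text{fs},
    \]

so a single second is subdivided into roughly 4 × 10^14 identical oscillations rather than one pendulum swing, and any miscount is correspondingly tiny.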

At the end of the day it just got too expensive to make games, and too risky to release bad ones. Not to mention the political nonsense. AI is now waiting in the wings, poised to take over game development. It will have mostly taken over by around 2030. And then it will quickly be back to the old days.


There’s one topic that’s stayed on my mind since the Game Developers Conference in March: generative AI. This year’s GDC wasn’t flooded with announcements that AI is being added to every game — unlike how the technology’s been touted in connection with phones and computers. But artificial intelligence definitely made a splash.

Enthusiasm for generative AI was uneven. Some developers were excited about its possibilities, while others were concerned over its potential for abuse in an industry with shattered morale about jobs and careers.

AI has been a common theme at GDC presentations in years past, but in 2024 it was clear that generative AI is coming for gaming, and some of the biggest companies are exploring ways to use it. As with any new technology, there’s no guarantee it will stick. Will generative AI flame out like blockchain and NFTs, or will it change the future of gaming?