
A study, published in PNAS Nexus, describes a fabric that can be modulated between two different states to stabilize radiative heat loss and keep the wearer comfortable across a range of temperatures.

Po-Chun Hsu, Jie Yin, and colleagues designed a fabric made of a layered semi-solid electrochemical cell deployed on nylon cut in a kirigami pattern to allow it to stretch and move with the wearer’s body. Modern clothes are made with a variety of insulating or breathable fabrics, but each fabric offers only one thermal mode, determined by the fabric’s emissivity: the rate at which it emits thermal radiation.

The electrochemical cell in the fabric can be electrically switched between two states—a transmissive dielectric state and a lossy metallic state—each with different emissivity. The fabric can thus keep the wearer comfortable by adjusting how much body heat is retained and how much is radiated away. A user would feel the same skin temperature whether the external temperature was 22.0°C (71.6°F) or 17.1°C (62.8°F). The authors call this fabric a “wearable variable-emittance device,” or WeaVE, and have configured it to be controlled with a smartphone app.
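To make the emissivity claim concrete, here is a minimal back-of-the-envelope sketch in Python using the Stefan-Boltzmann law. It models radiative loss only (ignoring convection and conduction), and the skin temperature and the two emissivity values are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope model of radiative heat loss through a
# variable-emittance fabric (radiation only; convection and conduction
# are ignored). All numbers below are illustrative assumptions,
# not values from the PNAS Nexus paper.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SKIN = 307.15    # assumed skin temperature in kelvin (34 C)

def net_radiative_loss(emissivity: float, t_ambient: float) -> float:
    """Net power radiated per unit area (W/m^2) at ambient temperature t_ambient (K)."""
    return emissivity * SIGMA * (T_SKIN**4 - t_ambient**4)

def matching_ambient(emissivity: float, target_loss: float) -> float:
    """Ambient temperature (K) at which this state radiates target_loss."""
    return (T_SKIN**4 - target_loss / (emissivity * SIGMA)) ** 0.25

EPS_DIELECTRIC = 0.90   # assumed emissivity of the transmissive state
EPS_METALLIC = 0.65     # assumed emissivity of the lossy metallic state

loss_warm = net_radiative_loss(EPS_DIELECTRIC, 295.15)   # 22.0 C ambient
t_cold = matching_ambient(EPS_METALLIC, loss_warm)
print(f"High-emissivity loss at 22.0 C: {loss_warm:.1f} W/m^2")
print(f"Low-emissivity state matches it at {t_cold - 273.15:.1f} C")
# -> about 17 C, the same ballpark as the paper's 22.0 C vs 17.1 C figures
```

The point of the sketch is only the mechanism: lowering emissivity lets the same radiative loss occur at a colder ambient temperature, which is what allows one fabric to span a range of conditions.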

In one sense, ChatGPT is undeniably new. Interactions with it can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought—of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

The cable has 19 cores, each of which can carry a signal, and it can be adopted without any infrastructure changes.

An international collaboration of researchers has achieved a new speed record, transferring data at 1.7 petabits per second over 41 miles (67 km) of standard optical fiber cable. That’s equivalent to the combined speed of 17 million broadband internet connections.
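The comparison is easy to sanity-check. Assuming a typical broadband connection of 100 Mb/s (our illustrative figure, not the researchers’), the arithmetic works out:

```python
# Sanity-check the "17 million broadband connections" comparison.
total_bps = 1.7e15        # 1.7 petabits per second
broadband_bps = 100e6     # assumed 100 Mb/s per typical connection
print(f"{total_bps / broadband_bps:,.0f} connections")   # -> 17,000,000

# Spread across the fiber's 19 cores, that is roughly:
print(f"{total_bps / 19 / 1e12:.0f} Tb/s per core")      # -> ~89 Tb/s
```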

Optical fiber cables are a critical component of the modern internet, connecting data centers, satellite ground stations, and mobile phone towers, as well as continents, to one another.

One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say “AI”? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones.

IBM’s definition is simple: “a field which combines computer science and robust datasets to enable problem-solving.” Google, meanwhile, defines it as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.”

It could be that people’s fear and distrust of AI comes partly from a lack of understanding of it, and from a stronger focus on unsettling examples than on positive ones. The AI that can design complex proteins may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline.

Like something out of a spy movie, thermal cameras make it possible to “see” heat by converting infrared radiation into an image. They can detect infrared light given off by animals, vehicles, electrical equipment and even people—leading to specialized applications in a number of industries.
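As a toy illustration of that “heat to image” step (our sketch, not any particular camera’s pipeline): a thermal sensor produces a grid of temperature readings, and imaging software rescales them into pixel brightness.

```python
# Toy illustration of the "heat -> image" step: map a grid of temperature
# readings onto 8-bit grayscale pixels. Real thermal cameras calibrate raw
# sensor counts radiometrically; this sketch only shows the normalization idea.
import numpy as np

def temps_to_grayscale(temps_c: np.ndarray) -> np.ndarray:
    """Linearly rescale a 2D temperature map to 0-255 pixel values."""
    lo, hi = temps_c.min(), temps_c.max()
    scaled = (temps_c - lo) / (hi - lo + 1e-9)  # avoid divide-by-zero
    return (scaled * 255).astype(np.uint8)

# A fake 4x4 scene: a warm "person" (31 C) against a 20 C background.
scene = np.full((4, 4), 20.0)
scene[1:3, 1:3] = 31.0
print(temps_to_grayscale(scene))  # the warm block shows up as bright pixels
```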

Despite these applications, the technology remains too expensive to be used in many areas, such as self-driving cars or smartphones.

Our team at Flinders University has been working hard to turn this technology into something we can all use, and not just something we see in spy movies. We’ve developed a low-cost thermal imaging technology that could be scaled up and brought into the lives of everyday people. Our findings are published in the journal Advanced Optical Materials.

Have you ever made a great catch—like saving a phone from dropping into a toilet or grabbing an indoor cat before it bolts outside? Those skills—the ability to grab a moving object—take precise interactions within and between our visual and motor systems. Researchers at the Del Monte Institute for Neuroscience at the University of Rochester have found that the ability to visually predict movement may be an important part of the ability to make a great catch.
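One way to see what “visually predicting movement” buys you is a toy extrapolation; this linear sketch is our illustration, not the study’s model.

```python
# Toy illustration of visual motion prediction (not the study's model):
# extrapolate an object's future position from its two most recent
# observed positions, assuming constant velocity.
def predict_position(p_prev, p_now, dt_observed, dt_ahead):
    """Linearly extrapolate dt_ahead seconds past the latest observation."""
    vx = (p_now[0] - p_prev[0]) / dt_observed
    vy = (p_now[1] - p_prev[1]) / dt_observed
    return (p_now[0] + vx * dt_ahead, p_now[1] + vy * dt_ahead)

# A phone seen at (0.0, 1.2) m and, 0.1 s later, at (0.05, 1.0) m (falling):
print(predict_position((0.0, 1.2), (0.05, 1.0), 0.1, 0.2))
# -> (0.15, 0.6): where to put your hand 0.2 s from now
```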

“We were able to develop a method that allowed us to analyze behaviors in a natural environment with high precision, which is important because, as we showed, natural behaviors differ from those in a controlled setting,” said Kuan Hong Wang, Ph.D., a Dean’s Professor of Neuroscience at the University of Rochester Medical Center.

Wang led the study, out today in Current Biology, in collaboration with Jude Mitchell, Ph.D., assistant professor of Brain and Cognitive Sciences at the University of Rochester, and Luke Shaw, a graduate student in the Neuroscience Graduate Program at the School of Medicine & Dentistry at the University of Rochester. “Understanding how natural behaviors work will give us better insight into what is going awry in an array of neurological disorders.”

At Apple’s WWDC23, I think I saw the future. [Pausing to ponder.] Yeah, I’m pretty sure I saw the future–or at least Apple’s vision of the future of computing. On Tuesday morning, I got to try the Apple Vision Pro, the new $3,499 mixed-reality headset that was announced this week and ships next year.

I’m here to tell you the major details of my experience, but the overall impression I have is that the Vision Pro is the most impressive first-gen product I’ve seen from Apple–more impressive than the 1998 iMac or the 2007 iPhone. And I’m fully aware that other companies have made VR headsets, but Apple does that thing it does: it applies its understanding of what makes a satisfying user experience to create a new product that sets a higher bar of excellence in an existing market.

Yes, it’s expensive, and yes, this market hasn’t proven that it can move beyond being niche. Those are very important considerations to discuss in other articles. For now, I’ll convey my experiences and impressions here, from a one-hour demonstration at Apple Park. (I was not allowed to take photos or record video; the photos posted here were supplied by Apple.) The device I used is an early beta, so it’s possible—likely even—that the hardware or software could change before next year.

We’ve been waxing lyrical (and critical) about Apple’s Vision Pro here at TechCrunch this week – but, of course, there are other things happening in the world of wearable tech, as well. Sol Reader raised a $5 million seed round with a headset that doesn’t promise to do more. In fact, it is trying to do just the opposite: Focus your attention on just the book at hand. Or book on the face, as it were.

“I’m excited to see Apple’s demonstration of the future of general AR/VR for the masses. However, even if it’s eventually affordable and in a much smaller form factor, we’re still left with the haunting question: Do I really need more time with my smart devices?” said Ben Chelf, CEO at Sol. “At Sol, we’re less concerned with spatial computing or augmented and virtual realities and more interested in how our personal devices can encourage us to spend our time wisely. We are building the Sol Reader specifically for a single important use case — reading. And while Big Tech surely will improve specs and reduce cost over time, we can now provide a time-well-spent option at 10% of the cost of Apple’s Vision.”

The device is simple: It slips over your eyes like a pair of glasses and blocks all distractions while reading. Even as I’m typing that, I’m sensing some sadness: I have wanted this product to exist for many years – I was basically raised by books, and lost my ability to focus on reading over the past few years. Something broke in me during the pandemic – I was checking my phone every 10 seconds to see what Trump had done now and how close we were to a COVID-19-powered abyss. Suffice it to say, my mental health wasn’t at its finest – and I can’t praise the idea of Sol Reader enough. The idea of being able to set a timer and put a book on my face is extremely attractive to me.

On Thursday, Mark Zuckerberg chimed in with his thoughts about the Apple Vision Pro, and they’re oddly reminiscent of how Microsoft’s Steve Ballmer slammed the iPhone for being useless and of no value to customers.

On the one hand, it’s good for the head of a rival company not to seem all that worried about an incoming competitive product. On the other hand, over the last 20 years, dismissing an Apple product has historically ended very poorly for the executives who did so.

Just ask Microsoft’s former CEO, Steve Ballmer.