
Have you ever made a great catch—like saving a phone from dropping into a toilet, or stopping an indoor cat from darting outside? Those skills—the ability to grab a moving object—take precise interactions within and between our visual and motor systems. Researchers at the Del Monte Institute for Neuroscience at the University of Rochester have found that the ability to visually predict movement may be an important part of making a great catch—or grabbing any moving object.

“We were able to develop a method that allowed us to analyze behaviors in a natural environment with high precision, which is important because, as we showed, behaviors differ in a controlled setting,” said Kuan Hong Wang, Ph.D., a Dean’s Professor of Neuroscience at the University of Rochester Medical Center.

Wang led the study, out today in Current Biology, in collaboration with Jude Mitchell, Ph.D., assistant professor of Brain and Cognitive Sciences at the University of Rochester, and Luke Shaw, a graduate student in the Neuroscience Graduate Program at the University of Rochester School of Medicine & Dentistry. “Understanding how natural behaviors work will give us better insight into what is going awry in an array of neurological disorders,” Wang said.

At Apple’s WWDC23, I think I saw the future. [Pausing to ponder.] Yeah, I’m pretty sure I saw the future—or at least Apple’s vision of the future of computing. On Tuesday morning, I got to try the Apple Vision Pro, the new $3,499 mixed-reality headset that was announced this week and ships next year.

I’m here to tell you the major details of my experience, but my overall impression is that the Vision Pro is the most impressive first-gen product I’ve seen from Apple—more impressive than the 1998 iMac or the 2007 iPhone. And I’m fully aware that other companies have made VR headsets, but Apple does that thing that it does: it applies its understanding of what makes a satisfying user experience to create a new product in an existing market, one that sets a higher bar of excellence.

Yes, it’s expensive, and yes, this market hasn’t proven that it can move beyond being niche. Those are very important considerations to discuss in other articles. For now, I’ll convey my experiences and impressions here, from a one-hour demonstration at Apple Park. (I was not allowed to take photos or record video; the photos posted here were supplied by Apple.) The device I used is an early beta, so it’s possible—likely even—that the hardware or software could change before next year.

We’ve been waxing lyrical (and critical) about Apple’s Vision Pro here at TechCrunch this week – but, of course, there are other things happening in the world of wearable tech, as well. Sol Reader raised a $5 million seed round with a headset that doesn’t promise to do more. In fact, it is trying to do just the opposite: Focus your attention on just the book at hand. Or book on the face, as it were.

“I’m excited to see Apple’s demonstration of the future of general AR/VR for the masses. However, even if it’s eventually affordable and in a much smaller form factor, we’re still left with the haunting question: Do I really need more time with my smart devices?” said Ben Chelf, CEO at Sol. “At Sol, we’re less concerned with spatial computing or augmented and virtual realities and more interested in how our personal devices can encourage us to spend our time wisely. We are building the Sol Reader specifically for a single important use case — reading. And while Big Tech surely will improve specs and reduce cost over time, we can now provide a time-well-spent option at 10% of the cost of Apple’s Vision.”

The device is simple: It slips over your eyes like a pair of glasses and blocks all distractions while reading. Even as I’m typing that, I’m sensing some sadness: I have wanted this product to exist for many years – I was basically raised by books, and lost my ability to focus on reading over the past few years. Something broke in me during the pandemic – I was checking my phone every 10 seconds to see what Trump had done now and how close we were to a COVID-19-powered abyss. Suffice it to say, my mental health wasn’t at its finest – and I can’t praise the idea of Sol Reader enough. The idea of being able to set a timer and put a book on my face is extremely attractive to me.

On Thursday, Mark Zuckerberg chimed in with his thoughts about the Apple Vision Pro, and they’re oddly reminiscent of how Microsoft’s Steve Ballmer slammed the iPhone for being useless and of no value to customers.

On the one hand, it’s good for the head of a rival company not to seem all that worried about an incoming competitive product. On the other hand, executives have been dismissing Apple’s products for the last 20 years, and historically that has ended very poorly for them.

Just ask Microsoft’s former CEO, Steve Ballmer.

Google’s aiming to make it easier to use and secure passwords — at least, for users of the Password Manager tool built into its Chrome browser.

Today, the tech giant announced that Password Manager, which generates unique passwords and autofills them across platforms, will soon gain biometric authentication on PC. (Android and iOS have had biometric authentication for some time.) When enabled, it’ll require an additional layer of security, like fingerprint recognition or facial recognition, before Chrome autofills passwords.

Exactly which types of biometrics are available in Password Manager on desktop will depend on the hardware attached to the PC, of course (e.g. a fingerprint reader), as well as whether the PC’s operating system supports it. Beyond “soon,” Google didn’t say when to expect the feature to arrive.

September 2021: In an interview with tech YouTuber iJustine, Cook said that he was AR’s number one fan and reiterated his hopes for it as a collaboration tool.

I am so excited about AR. I think AR is one of these very few profound technologies that we will look back on one day and went, how did we live our lives without it? And so right now you can experience it in thousands of ways using your iPad or your iPhone, but of course, those will get better and better over time.

Already it’s a great way to shop, it’s a great way to learn. It enhances the learning process. I can’t wait for it to be even more important in collaboration and so forth.

A first-generation 2007 iPhone sold for more than $63,000 in an online auction Sunday, more than 100 times its original cost. Dubbed a “first-edition” device by auctioneer LCG Auctions, the box had never been opened.

The original iPhone cost $599 and offered early Apple adopters a 3.5-inch screen with a 2-megapixel camera, 4 GB and 8 GB storage options, internet capabilities and iTunes. It had no App Store, ran on a 2G network and was exclusive to AT&T.

Bidding on this phone began online earlier this month at $2,500. All told, there were 27 bids on it, according to LCG’s website. Mark Montero, founder of LCG Auctions, told CNN that 10 buyers vied for the iPhone and the winner was “an individual from the US.”

This study provides a new perspective on the relationship between the visual environment and cognitive performance, based on the results of path analysis (Supplementary Fig. 5). Regarding reading on a paper medium, moderate cognitive load may generate sighs (or deep breaths) and appears to restore respiratory variability and control of prefrontal brain activity. In contrast, reading on smartphones may require sustained task attention [34], and acute cognitive load may inhibit the generation of sighs, causing overactivity in the prefrontal cortex. Sighing has been found to be associated with various cognitive functions [13,27,28], and may reset respiratory variability [36,37]. This reset may also be associated with improved executive functions [14].

The current study has several limitations. First, our experiment did not include any measurement of subjective cognitive load. Based on the differences in the number of sighs and in brain activity between reading on smartphones and on paper, it is highly likely that there was a difference in cognitive load as well. Future work should assess cognitive-load indices and examine their relationship with breathing and brain activity. Second, we did not control for the page-turning or pointing movements used to maintain the focus of attention on the text. These bodily movements may have influenced the present index and should be taken into consideration in future studies.

The results of this study suggest that reduced reading comprehension on smartphone devices may be caused by reduced sighing and overactivity of the prefrontal cortex, although the effect of electronic devices other than smartphones has yet to be confirmed. Recent reports indicate that the use of smartphones and other electronic devices has increased due to pandemic-related lockdowns, with indications that this is negatively influencing sleep and physical activity [38,39]. The relationships among the visual environment, respiration/brain activities, and cognitive performance detected in this study may represent one of the negative effects of electronic device use on the human body. If these negative effects of smartphone reading are real, it may be beneficial to take deep breaths while reading, since sighs, whether voluntary or involuntary, regulate disordered breathing [36].

The Japanese electronics giant Sony has announced its first steps into quantum computing by joining other investment groups in a £42m funding round for the UK quantum computing firm Quantum Motion. The move by Sony’s investment arm aims to boost the company’s expertise in silicon quantum-chip development and to assist a potential roll-out of quantum computers onto the Japanese market.

Quantum Motion was founded in 2017 by scientists from University College London and the University of Oxford. It had already raised a total of £20m via seed investment in 2017 and a series A round in 2020. Quantum Motion builds qubits on standard silicon chip technology and can therefore exploit the same manufacturing processes that mass-produce chips such as those found in smartphones.

A full-scale quantum computer, when built, is likely to require a million logical qubits to perform quantum-based calculations, with each logical qubit needing thousands of physical qubits for robust error correction. Meeting such demands would require a huge amount of associated hardware. Quantum Motion claims its technology could tackle this problem because it develops scalable, high-density arrays of qubits based on CMOS silicon technology.
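Taken at face value, those figures imply an enormous device. A rough back-of-the-envelope check (assuming 1,000 physical qubits per logical qubit, the low end of the "thousands" quoted above; the exact overhead depends on the error-correction scheme) shows why scalable, high-density qubit arrays matter:

```python
# Back-of-the-envelope scale of a fault-tolerant quantum computer,
# using the figures quoted in the article (illustrative assumptions only).
logical_qubits = 1_000_000       # logical qubits for useful calculations
physical_per_logical = 1_000     # assumed error-correction overhead per logical qubit

total_physical = logical_qubits * physical_per_logical
print(f"{total_physical:,} physical qubits")  # prints "1,000,000,000 physical qubits"
```

At a billion physical qubits even under this conservative overhead, the appeal of piggybacking on CMOS processes that already mass-produce billions of transistors per chip is clear.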