
Is phone and digital media addiction real? If so, what steps can be taken to mitigate it? A recent study published in the journal Technology, Mind, and Behavior addresses these questions, with a team of researchers presenting a new instrument, the Digital Media Overuse Scale (dMOS), designed to gauge a person’s level of addiction to digital media such as their phone. The study arrives as smartphones and other digital media devices continue to improve, and it could help scientists and clinicians better connect technology use with psychological outcomes.

“We wanted to create a tool that was immediately useful in the clinic and lab, that reflects current understandings about how digital addiction works, that wouldn’t go obsolete once the next big tech change hits,” said Dr. Daniel Hipp, who is a research consultant at the Digital Media Treatment & Education Center in Boulder, Colorado, and lead author of the study.

For the study, the researchers developed dMOS to replace outdated approaches to bridging the gap between technology and psychology, such as the way we talk about technology and the dated questions that existing measures ask. The goal of dMOS is to let scientists and clinicians conduct a variety of analyses of digital media usage, ranging from a broad analysis of, say, social media as a whole to a more focused analysis of a specific platform such as Facebook.
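As a rough illustration of that broad-versus-focused idea, here is a minimal Python sketch of how a modular overuse questionnaire could reuse the same item stems at different levels of focus. The item wording, the 1–5 response scale, and the scoring are hypothetical placeholders, not the published dMOS items or scoring rules.

```python
# Hypothetical sketch (not the actual dMOS items or scoring): shows how a modular
# overuse scale could apply the same item stems to a broad domain ("social media")
# or a single platform ("Facebook").

from statistics import mean

# Hypothetical item stems; "{target}" is swapped for the domain being assessed.
ITEM_STEMS = [
    "I spend more time on {target} than I intend to.",
    "I feel restless when I cannot use {target}.",
    "My use of {target} interferes with my daily responsibilities.",
]

def build_items(target: str) -> list[str]:
    """Instantiate the generic stems for a broad or narrow target."""
    return [stem.format(target=target) for stem in ITEM_STEMS]

def score(responses: list[int]) -> float:
    """Average of 1-5 Likert responses; higher means more self-reported overuse."""
    if any(r < 1 or r > 5 for r in responses):
        raise ValueError("Responses must be on a 1-5 Likert scale")
    return mean(responses)

if __name__ == "__main__":
    broad_items = build_items("social media")   # broad analysis
    focused_items = build_items("Facebook")     # platform-specific analysis
    print(broad_items[0])
    print(f"Broad score:   {score([4, 3, 5]):.2f}")
    print(f"Focused score: {score([2, 2, 3]):.2f}")
```

The point of the sketch is only that the same instrument can be aimed at different targets without rewriting the questions, which is how a single scale can support both broad and platform-specific analyses.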

Working together as a consortium, FAIR or university partners captured these perspectives with the help of more than 800 skilled participants in the United States, Japan, Colombia, Singapore, India, and Canada. In December, the consortium will open source the data (including more than 1,400 hours of video) and annotations for novel benchmark tasks. Additional details about the datasets can be found in our technical paper. Next year, we plan to host a first public benchmark challenge and release baseline models for ego-exo understanding. Each university partner followed their own formal review processes to establish the standards for collection, management, informed consent, and a license agreement prescribing proper use. Each member also followed the Project Aria Community Research Guidelines. With this release, we aim to provide the tools the broader research community needs to explore ego-exo video, multimodal activity recognition, and beyond.

How Ego-Exo4D works

Ego-Exo4D focuses on skilled human activities, such as playing sports, music, cooking, dancing, and bike repair. Advances in AI understanding of human skill in video could facilitate many applications. For example, in future augmented reality (AR) systems, a person wearing smart glasses could quickly pick up new skills with a virtual AI coach that guides them through a how-to video; in robot learning, a robot watching people in its environment could acquire new dexterous manipulation skills with less physical experience; in social networks, new communities could form based on how people share their expertise and complementary skills in video.
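To make the ego-exo pairing concrete, here is a small Python sketch of matching time-aligned first-person (ego) and third-person (exo) clips from the same take before handing them to an activity-recognition model. The file layout, field names, and the `Clip`/`pair_views` helpers are illustrative assumptions, not the actual Ego-Exo4D release format or API.

```python
# Hypothetical sketch of pairing ego (first-person) and exo (third-person) clips
# from the same recorded take; the metadata fields and paths are assumptions.

from dataclasses import dataclass

@dataclass
class Clip:
    take_id: str     # one recorded "take" of a skilled activity
    view: str        # "ego" or "exo"
    start_s: float   # start time within the take, in seconds
    path: str        # path to the video segment on disk

def pair_views(clips: list[Clip]) -> list[tuple[Clip, Clip]]:
    """Match each ego clip with an exo clip from the same take and start time."""
    exo_index = {(c.take_id, c.start_s): c for c in clips if c.view == "exo"}
    pairs = []
    for clip in clips:
        if clip.view == "ego":
            exo = exo_index.get((clip.take_id, clip.start_s))
            if exo is not None:
                pairs.append((clip, exo))
    return pairs

if __name__ == "__main__":
    clips = [
        Clip("bike_repair_001", "ego", 0.0, "takes/bike_repair_001/ego_000.mp4"),
        Clip("bike_repair_001", "exo", 0.0, "takes/bike_repair_001/exo_cam1_000.mp4"),
    ]
    for ego, exo in pair_views(clips):
        print(f"{ego.path} <-> {exo.path}")
```

Aligned ego-exo pairs like these are what multimodal activity-recognition benchmarks would consume, since the two views describe the same moment of the same skilled activity.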

Individual technologies necessary for in-orbit cryogenic refueling are at a stage of development where they are “ready now to go into flight systems,” Dankanich said, either with a demonstration in space or on an operational spacecraft.

First, small steps

Four years after those “Tipping Point” awards were made, only SpaceX appears to have a chance of completing the tasks outlined in its award, valued at $53 million.

Tesla CEO Elon Musk has shared some new details about the automaker’s plans for its next-generation, $25,000 electric vehicle (EV), including that it will be built first in Austin, Texas.

On Tuesday, longtime auto industry veteran Sandy Munro shared a new interview with Musk, held following the Cybertruck delivery event. During the interview, the two talked mostly about the Cybertruck, though they also discussed Tesla’s next-generation vehicle for a few minutes.

While Musk said he couldn’t share details about unit volumes or dates for the next-gen EV, since those would hint at Tesla’s financials, he did go on to share a few things. For one, Musk said that Tesla was “quite far advanced” in developing the low-cost, high-volume EV, adding that he reviews the production line plans for it on a weekly basis.

Google DeepMind has developed an AI agent system that can learn tasks from a human instructor. After enough time, the AI agent can not only imitate the actions of the human instructor but also recall the observed behavior.

In a paper published in Nature, researchers outline a process called cultural transmission to train the AI model without using any pre-collected human data.

This is a new form of imitative learning that the DeepMind researchers contend allows skills to be transmitted to an AI more efficiently. Think of it like watching a video tutorial: you watch, learn, and apply what is taught, and you can also recall the video’s lessons later on.
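As a toy illustration of that observe-then-recall loop, the following Python sketch has an agent follow a scripted expert in a one-dimensional gridworld, store the demonstrated actions, and replay them once the expert is gone. It is a simplified stand-in, not DeepMind’s actual agent, environment, or training procedure.

```python
# Toy sketch of "imitate, then recall": observe a demonstrator, remember its
# actions, and reproduce the behavior after the demonstrator has left.
# Hypothetical 1-D gridworld; not DeepMind's agent or training setup.

def expert_policy(position: int, goal: int) -> int:
    """Scripted demonstrator: step toward the goal."""
    return 1 if goal > position else -1

def observe(goal: int, start: int = 0, steps: int = 10) -> list[int]:
    """Phase 1: follow the expert and record its actions."""
    position, memory = start, []
    for _ in range(steps):
        action = expert_policy(position, goal)
        memory.append(action)
        position += action
        if position == goal:
            break
    return memory

def recall(memory: list[int], start: int = 0) -> int:
    """Phase 2: the expert is gone; replay the remembered actions."""
    position = start
    for action in memory:
        position += action
    return position

if __name__ == "__main__":
    goal = 5
    memory = observe(goal)
    print(f"Recalled trajectory reaches {recall(memory)} (goal was {goal})")
```

The two-phase structure is the essential idea: the agent first succeeds only by shadowing the expert, and later succeeds from memory alone, with no pre-collected demonstration dataset involved.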

Elon Musk commented on the upcoming low-cost Tesla electric car, saying that it is “advanced” in its development. The CEO also commented on the “revolutionary” manufacturing advancements that Tesla is making to make the vehicle a reality.

While the cost of Tesla vehicles has come down recently, they remain financially out of reach for most people, as is the case with most new cars.

The launch of the Cybertruck, which is about 50% more expensive than when it was originally announced in 2019, is not helping make Tesla’s lineup more affordable.