
Phase changes are central to the world around us. Probably the most familiar example is when ice melts into water or water boils into steam, but phase changes also underlie heating systems and even digital memory, such as that used in smartphones.

Triggered by heat or electricity, some materials can switch between two different phases that represent the 0s and 1s of binary code to store information. Understanding how a material transforms from one state or phase to another is key to tailoring materials with specific properties that could, for instance, increase switching speed or operate at a lower energy cost.
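For intuition, here is a minimal Python sketch of that idea, under simplified assumptions: each memory cell sits in one of two phases with very different electrical resistance, and a read operation recovers the stored bit by comparing the resistance to a threshold. The resistance values and threshold are illustrative placeholders, not properties of any real device.

```python
# Minimal sketch (illustrative only): how a phase-change memory cell could
# encode bits as two material phases with very different electrical resistance.
# The resistance values and threshold are made-up placeholders, not
# measurements of any real device.

AMORPHOUS_OHMS = 1_000_000   # high-resistance phase, read as binary 0
CRYSTALLINE_OHMS = 1_000     # low-resistance phase, read as binary 1
READ_THRESHOLD_OHMS = 100_000

def write_bits(bits):
    """'Program' each cell by putting it into one of the two phases."""
    return [CRYSTALLINE_OHMS if b else AMORPHOUS_OHMS for b in bits]

def read_bits(cells):
    """Read back by comparing each cell's resistance to a threshold."""
    return [1 if r < READ_THRESHOLD_OHMS else 0 for r in cells]

cells = write_bits([1, 0, 1, 1, 0, 0, 1, 0])
print(read_bits(cells))  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```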

Yet researchers have never been able to directly visualize how these transformations unfold in real time. We often assume materials are perfect and look the same everywhere, but “part of the challenge is that these processes are often heterogeneous, where different parts of the material change in different ways, and involve many different length scales and timescales,” said Aaron Lindenberg, co-author and SLAC and Stanford University professor.

A group of Carnegie Mellon University researchers recently devised a method allowing them to create large amounts of a material required to make two-dimensional (2D) semiconductors with record high performance. Their paper, published in ACS Applied Materials & Interfaces in late December 2024, could lead to more efficient and tunable photodetectors, paving the way for the next generation of light-sensing and multifunctional optoelectronic devices.

“Semiconductors are the key enabling technology for today’s electronics, from laptops to smartphones to AI applications,” said Xu Zhang, assistant professor of electrical and computer engineering. “They control the flow of electricity, acting as a bridge between conductors (which allow electricity to flow freely) and insulators (which block it).”

Zhang’s research group wanted to develop a certain kind of photodetector, a device that detects light and can be used in a variety of applications. To create this photodetector, the group needed materials just one atom thick, or as close to 2D as possible.

Basically, mushrooms can cure all major illnesses throughout the human body and brain. If the pharmaceutical companies got into business with Chinese medicine, which has used mushrooms of all types, we would essentially have a no-side-effect system of 100 percent healing. Even the basic food pyramid has essentially been shown to benefit humans more than medicines. Nanotransfection could also, in the future, let people who have lost limbs or any other body part regenerate them, similar to Wolverine in the Marvel comics but at a slower pace, healing anything while the mushrooms keep one well and fed. A lot of the American studies are a stop-gap measure, while mushrooms can cure things slowly but to 100 percent. Along with healthy eating and nanotransfection, one could have all they need for any regeneration in the far future. In the future this technology and food could essentially allow for minimal-downtime internal healing, with the foods fueling the body. It could be put on a smartphone, where even trillions of dollars would be saved by getting doctor treatments down to a dollar or less for entire body scans and healing. It would be a first step toward Iron Man, but using the human body to heal itself and the foods to fuel regeneration.


The WHO has published the first list of priority fungal pathogens, which affect more than 300 million people and kill at least 1.5 million people every year. However, funding to control this scourge is less than 1.5% of that devoted to infectious diseases.

The Twonky is a 1953 independently made American black-and-white science fiction/comedy film, produced by A.D. Nast, Jr., Arch Oboler, and Sidney Pink, written and directed by Arch Oboler, and starring Hans Conried, Gloria Blondell, Billy Lynn, and Edwin Max. The film was distributed by United Artists.

Directed by Arch Oboler.
Screenplay by Arch Oboler.

Hans Conried as Kerry West.
Janet Warren as Carolyn West.
Billy Lynn as Coach Trout.
Edwin Max as the Television Deliveryman.
Gloria Blondell as the Bill Collector.
Evelyn Beresford as Old Lady Motorist.
Bob Jellison as the TV Shop Owner.
Norman Field as the Doctor.
Stephen Roberts as Head Treasury Agent.
Connie Marshall as Susie.
William Phipps as Student.
Lenore Kingston as Offended Phone Operator #2.
Alice Backes as Offended Phone Operator #1.
Brick Sullivan as Cop.

Snap a photo of your meal, and artificial intelligence instantly tells you its calorie count, fat content, and nutritional value—no more food diaries or guesswork.

This futuristic scenario is now much closer to reality, thanks to an AI system developed by NYU Tandon School of Engineering researchers that promises a new tool for the millions of people who want to manage their weight, diabetes and other diet-related health conditions.

The technology, detailed in a paper presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, uses advanced deep-learning algorithms to recognize food items in images and calculate their nutritional content, including calories, protein, carbohydrates and fat.
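As a rough illustration of how such a pipeline fits together (a sketch under simplifying assumptions, not the NYU Tandon system itself), the snippet below stands in a placeholder classifier for the deep-learning model and converts its predicted food label and portion estimate into calories and macronutrients via a small lookup table. The food names and nutrition figures are approximate per-100 g values used only for illustration.

```python
# Hedged sketch of the overall pipeline described above, not the NYU system
# itself: (1) a vision model labels the food in a photo, (2) a nutrition table
# converts the label and an estimated portion size into calories and macros.
# classify_food is a stand-in for a trained deep-learning model, and the
# nutrition values are rough per-100 g figures for illustration only.

NUTRITION_PER_100G = {
    # label: (kcal, protein_g, carbs_g, fat_g)
    "pizza": (266, 11.0, 33.0, 10.0),
    "apple": (52, 0.3, 14.0, 0.2),
    "fried rice": (163, 4.0, 31.0, 3.0),
}

def classify_food(image_path):
    """Placeholder for the deep-learning recognizer; a real system would
    run a CNN or vision transformer on the photo."""
    return "pizza", 250.0   # predicted label and estimated portion in grams

def estimate_nutrition(image_path):
    label, grams = classify_food(image_path)
    kcal, protein, carbs, fat = NUTRITION_PER_100G[label]
    scale = grams / 100.0
    return {
        "food": label,
        "calories": round(kcal * scale),
        "protein_g": round(protein * scale, 1),
        "carbs_g": round(carbs * scale, 1),
        "fat_g": round(fat * scale, 1),
    }

print(estimate_nutrition("lunch.jpg"))
```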

Originally released December 2023. In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

They cover:
• How close we are to actual mind reading.
• How hacking neural interfaces could cure depression.
• How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
• How close we are to being able to unlock our phones by singing a song in our heads.
• How neurodata has been used for interrogations, and even criminal prosecutions.
• The possibility of linking brains to the point where you could experience exactly the same thing as another person.
• Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
• And plenty more.

In this episode:
• Luisa’s intro [00:00:00]
• Applications of new neurotechnology and security and surveillance [00:04:25]
• Controlling swarms of drones [00:12:34]
• Brain-to-brain communication [00:20:18]
• Identifying targets subconsciously [00:33:08]
• Neuroweapons [00:37:11]
• Neurodata and mental privacy [00:44:53]
• Neurodata in criminal cases [00:58:30]
• Effects in the workplace [01:05:45]
• Rapid advances [01:18:03]
• Regulation and cognitive rights [01:24:04]
• Brain-computer interfaces and cognitive enhancement [01:26:24]
• The risks of getting really deep into someone’s brain [01:41:52]
• Best-case and worst-case scenarios [01:49:00]
• Current work in this space [01:51:03]
• Watching kids grow up [01:57:03]

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.

Learn more, read the summary and find the full transcript on the 80,000 Hours website:

Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

A team of medical researchers and engineers at Google Research has developed a way to use the front-facing camera on a smartphone to monitor a patient’s heart rate. The team has published a paper on the technology on the arXiv preprint server.

Tracking a patient’s heart rate over time can reveal clues about their cardiovascular health. The most important measurement is resting heart rate (RHR)—people with an above-normal rate are at a higher risk of heart disease and/or stroke. Persistently high rates, the researchers note, can signal a serious problem.

Over the past several years, personal health device makers have developed wearable external heart monitors, such as necklaces or smartwatches. But these devices are expensive. The researchers have found a cheaper alternative—a deep-learning system that analyzes video from the front-facing camera of a smartphone. The system is called PHRM.
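The general principle behind camera-based pulse estimation can be sketched in a few lines. This is a toy remote-photoplethysmography example under simplifying assumptions, not PHRM itself: skin pixels brighten and darken slightly with each heartbeat, so the dominant frequency of the averaged pixel intensity, restricted to a plausible heart-rate band, approximates beats per minute. The "video" here is a synthetic 72-bpm signal standing in for real frames.

```python
# Illustrative sketch of camera-based heart-rate estimation (remote
# photoplethysmography), not Google's PHRM model. A real system would use
# face tracking and a deep network to extract a clean per-frame signal.
import numpy as np

FPS = 30.0
DURATION_S = 20.0
t = np.arange(0, DURATION_S, 1.0 / FPS)

# Synthetic per-frame mean green-channel intensity: 72 bpm pulse plus noise.
signal = 0.02 * np.sin(2 * np.pi * (72 / 60.0) * t) + 0.005 * np.random.randn(t.size)

def estimate_bpm(samples, fps, lo_bpm=40, hi_bpm=180):
    """Return the dominant frequency (in beats per minute) within a
    physiologically plausible band."""
    samples = samples - samples.mean()
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / fps) * 60.0  # in bpm
    power = np.abs(np.fft.rfft(samples)) ** 2
    band = (freqs >= lo_bpm) & (freqs <= hi_bpm)
    return freqs[band][np.argmax(power[band])]

print(f"Estimated resting heart rate: {estimate_bpm(signal, FPS):.0f} bpm")
```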

It could be very informative to observe the pixels on your phone under a microscope, but not if your goal is to understand what a whole video on the screen shows. Cognition is much the same kind of emergent property in the brain. It can only be understood by observing how millions of cells act in coordination, argues a trio of MIT neuroscientists. In a new article, they lay out a framework for understanding how thought arises from the coordination of neural activity driven by oscillating electric fields — also known as brain “waves” or “rhythms.”

Historically dismissed solely as byproducts of neural activity, brain rhythms are actually critical for organizing it, write Picower Professor Earl Miller and research scientists Scott Brincat and Jefferson Roy in Current Opinion in Behavioral Science. And while neuroscientists have gained tremendous knowledge from studying how individual brain cells connect and how and when they emit “spikes” to send impulses through specific circuits, there is also a need to appreciate and apply new concepts at the brain rhythm scale, which can span individual, or even multiple, brain regions.

“Spiking and anatomy are important, but there is more going on in the brain above and beyond that,” says senior author Miller, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “There’s a whole lot of functionality taking place at a higher level, especially cognition.”
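As a concrete, if simplified, illustration of what analysis at the brain-rhythm scale looks like (a toy sketch, not the authors’ methods), the snippet below takes a synthetic signal and measures how much power falls into the canonical theta, alpha, beta and gamma frequency bands.

```python
# Toy illustration of rhythm-scale analysis: estimate spectral power in the
# canonical frequency bands from a synthetic signal made of an 8 Hz
# (alpha-like) and a 40 Hz (gamma-like) oscillation plus noise.
import numpy as np

FS = 1000.0                      # sampling rate in Hz
t = np.arange(0, 5.0, 1.0 / FS)  # 5 seconds of "recording"
signal = (np.sin(2 * np.pi * 8 * t)           # alpha-band component
          + 0.5 * np.sin(2 * np.pi * 40 * t)  # gamma-band component
          + 0.3 * np.random.randn(t.size))    # noise

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (13, 30), "gamma": (30, 80)}

def band_power(x, fs, lo, hi):
    """Total spectral power between lo and hi Hz."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

for name, (lo, hi) in BANDS.items():
    print(f"{name:6s} {band_power(signal, FS, lo, hi):.2e}")
```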

A Cornell-led research team has developed an artificial intelligence-powered ring equipped with micro-sonar technology that can continuously—and in real time—track fingerspelling in American Sign Language (ASL).

In its current form, SpellRing could be used to enter text into computers or smartphones via fingerspelling, which is used in ASL to spell out words without corresponding signs, such as proper nouns, names and technical terms. With further development, the device—believed to be the first of its kind—could revolutionize ASL translation by continuously tracking entire signed words and sentences.

The research is published on the arXiv preprint server.
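One common way continuous per-frame predictions are turned into text, sketched below purely for illustration (the paper may decode differently), is to collapse repeated letter predictions and “blank” frames into a final string. The per-frame letters here are hard-coded stand-ins for what a model trained on the ring’s micro-sonar echoes might output.

```python
# Hedged sketch of one way continuous fingerspelling output could become text,
# not a description of SpellRing's actual pipeline: a model emits a letter
# prediction per time step, and repeated predictions and "blank" frames are
# collapsed into a word (CTC-style decoding).

BLANK = "_"

def collapse(frame_letters):
    """Collapse per-frame letter predictions into a word."""
    word = []
    prev = None
    for ch in frame_letters:
        if ch != prev and ch != BLANK:
            word.append(ch)
        prev = ch
    return "".join(word)

frames = list("_aaa__ssss__lll__")   # e.g. the user fingerspells "a", "s", "l"
print(collapse(frames))              # -> "asl"
```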

01:13 How Does Tesla Bot Gen 3 Handle Real-World Tasks?
06:12 How much does the Tesla Bot Gen 3 truly cost?
10:36 How is Tesla planning to sell the Bot Gen 3?
===
New UPDATE! Elon Musk LEAKED Tesla Bot Gen 3 10K Mass Production & All Real-Life Tasks Testing! Recently, Elon Musk confidently announced that the Tesla Bot Optimus can navigate independently in 95% of complex environments and react in just 20 milliseconds!
With a plan to produce 10,000 Tesla Optimus Gen 3 units in 2025, Tesla is leveraging its AI infrastructure, manufacturing capabilities, and real-world testing across more than 1,000 practical tasks to prepare for mass production this year.

In today’s episode, we have compiled evidence from official announcements and technical demonstrations to validate the feasibility of this plan and pinpoint the final timeline and pricing for the 2025 production model.
But before we dive into price analysis in Part 2 and exactly launching time in Part 3 of this episode, you should first understand what we expect from this Tesla humanoid robot—and more importantly, whether it’s truly worth the price.
How Does Tesla Bot Gen 3 Handle Real-World Tasks?

John Kennedy, nearly seventy, lay motionless on the floor, pain radiating from his hip and spine. His phone was just a few steps away—close, yet out of reach. Then, everything went dark.
A humanoid robot detected his fall. It gently lifted him up, scanned his injuries, and instantly sent an alert to his doctor.
Then came the doctor’s words: they wanted to send him to an assisted living facility.

===
#888999evs #teslacarworld #teslacar #888999 #teslabot #teslaoptimus #teslabotgen2 #teslabotgen3
Subscribe: https://bit.ly/3i7gILj