Large-scale 3T MRI data enables fast whole-brain scanning at 0.055T via deep learning super-resolution and image reconstruction.
It could soon be possible to measure changes in depression levels like we can measure blood pressure or heart rate.
In a new study, 10 patients with treatment-resistant depression were enrolled in a six-month course of deep brain stimulation (DBS) therapy. Previous results from DBS have been mixed, but help from artificial intelligence could soon change that.
Success with DBS relies on stimulating the right tissue, which means getting accurate feedback. Currently, this is based on patients reporting their mood, which can be affected by stressful life events as much as it can be the result of neurological wiring.
An artificial intelligence platform developed by an Israeli startup can reveal whether a patient is at risk of a heart attack by analyzing their routine chest CT scans.
Results from a new study testing Nanox.AI’s HealthCCSng algorithm on such scans found that 58 percent of patients unknowingly had moderate to severe levels of coronary artery calcium (CAC), or plaque.
CAC is the strongest predictor of future cardiac events, and measuring it typically subjects patients to an additional costly scan that is not normally covered by insurance companies.
From attending a meeting to enjoying a live performance or, perhaps, taking a class at the University of Tokyo’s Metaverse School of Engineering, the application of virtual reality is expanding in our daily lives. Earlier this year, virtual reality technologies garnered attention as tech giants, including Meta and Apple, unveiled new VR/AR (virtual reality/augmented reality) headsets. We spoke with VR and AR specialist Takuji Narumi, an associate professor at the Graduate School of Information Science and Technology, to learn about his latest research and what VR’s future has to offer.
At the Avatar Robot Café DAWN ver. β, employees serve customers via a digital screen and engage in conversation using avatars of their choice, such as an alpaca and a man with blue hair.
ChatGPT isn’t just a chatbot anymore.
OpenAI’s latest upgrade grants ChatGPT powerful new abilities that go beyond text. It can tell bedtime stories in its own AI voice, identify objects in photos, and respond to audio recordings. These capabilities represent the next big thing in AI: multimodal models.
“Multimodal is the next generation of these large models, where it can process not just text, but also images, audio, video, and even other modalities,” says Dr. Linxi “Jim” Fan, Senior AI Research Scientist at Nvidia.
OpenAI’s chatbot learns to carry a conversation—and expect competition.
Researchers at Cornell University have developed a tiny, proof of concept robot that moves its four limbs by rapidly igniting a combination of methane and oxygen inside flexible joints.
The device can’t do much more than blow each limb outward with a varying amount of force, but that’s enough to steer and move the little unit, and it has enough power to make some very impressive jumps. The ability to navigate with such limited actuators is reminiscent of hopped-up bristlebots.
Electronic control of combustion in the joints allows for up to 100 explosions per second, which produces enough force to do useful work. The prototype is only 29 millimeters long and weighs just 1.6 grams, but it can jump up to 56 centimeters and move at almost 17 centimeters per second.
OpenAI recently announced an upgrade to ChatGPT (Apple, Android) that adds two features: AI voice options to hear the chatbot responding to your prompts, and image analysis capabilities. The image function is similar to what’s already available for free with Google’s Bard chatbot.
Even after hours of testing the limits and capabilities of ChatGPT, OpenAI’s chatbot still manages to surprise and scare me at the same time. Yes, I was quite impressed with the web browsing beta offered through ChatGPT Plus, but I remained anxious about the tool’s ramifications for people who write for money online, among many other concerns. The new image feature arriving for OpenAI’s subscribers left me with similarly mixed feelings.
While I’ve not yet had the opportunity to experiment with the new audio capabilities (other great reporters on staff have), I was able to test the soon-to-arrive image features. Here’s how to use the new image search coming to ChatGPT and some tips to help you start out.
Who knows? If developed further, maybe this could become a way of giving commands to a computer or AI without the need for implants.
The streaming data from these biosensors can be used for health monitoring and the diagnosis of neurodegenerative conditions.
A pair of earbuds can be turned into a tool to record the electrical activity of the brain as well as levels of lactate in the body with the addition of two flexible sensors screen-printed onto a stamp-like flexible surface.
The sensors communicate with the earbuds, which then wirelessly transmit the gathered data for visualization and further analysis on a smartphone or laptop. The data can be used for long-term health monitoring and the detection of neurodegenerative conditions.
Patreon: https://www.patreon.com/daveshap
LinkedIn: https://www.linkedin.com/in/dave-shap-automator/
Consulting: https://www.daveshap.io/Consulting
GitHub: https://github.com/daveshap
Medium: https://medium.com/@dave-shap
00:00 — Introduction.
00:38 — Landauer Limit.
02:51 — Quantum Computing.
04:21 — Human Brain Power?
07:03 — Turing Complete Universal Computation?
10:07 — Diminishing Returns.
12:08 — Byzantine Generals Problem.
14:38 — Terminal Race Condition.
17:28 — Metastasis.
20:20 — Polymorphism.
21:45 — Optimal Intelligence.
23:45 — Darwinian Selection: “Survival of the Fastest”
26:55 — Speed Chess Metaphor.
29:42 — Conclusion & Recap.
Artificial intelligence and computing power are advancing at an incredible pace. How smart and fast can machines get? This video explores the theoretical limits and cutting-edge capabilities in AI, quantum computing, and more.
We start by looking at the Landauer Limit — the minimum energy required to perform computation. At room temperature, erasing just one bit of information takes 2.85 × 10^−21 joules. This sets a fundamental lower bound on the energy cost of any irreversible computation.
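As a sanity check on that figure, the Landauer bound is simply k_B · T · ln 2; the quoted value of 2.85 × 10^−21 J corresponds to taking “room temperature” as 25 °C:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 298.15           # room temperature, K (25 °C)

# Landauer limit: minimum energy dissipated to erase one bit
E_min = k_B * T * math.log(2)
print(f"{E_min:.3e} J per bit")  # → 2.853e-21 J per bit
```

At a colder temperature the bound drops proportionally, which is one reason cryogenic computing is sometimes proposed for energy-limited systems.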
Quantum computing offers radical improvements in processing power by utilizing superposition and entanglement. Through quantum parallelism, certain problems can be solved exponentially faster than with classical computing. However, the technology is still in early development.
The human brain is estimated to have the equivalent of 1 exaflop of processing power — a billion billion (10^18) calculations per second! Yet it uses just 20 watts, making it vastly more energy-efficient than today’s supercomputers. Some theorize the brain may use quantum effects, but this remains speculative.
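To put those numbers side by side (both figures are the rough estimates quoted above, not measurements), a quick back-of-the-envelope sketch:

```python
brain_ops = 1e18      # ~1 exaflop: estimated brain-equivalent throughput
brain_power = 20.0    # watts: the brain's approximate metabolic power draw

# Energy efficiency implied by these estimates
ops_per_joule = brain_ops / brain_power
print(f"{ops_per_joule:.1e} ops per joule")  # → 5.0e+16

# If each operation erased a single bit, the Landauer limit
# (~2.85e-21 J/bit at room temperature) would set a floor of
# only a few milliwatts for the same throughput:
landauer_j_per_bit = 2.85e-21
floor_watts = brain_ops * landauer_j_per_bit
print(f"{floor_watts * 1e3:.2f} mW theoretical minimum")  # → 2.85 mW
```

By this estimate the brain sits roughly four orders of magnitude above the Landauer floor — far from the theoretical limit, yet still dramatically more efficient per operation than current silicon.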
The transition to Artificial General Intelligence (AGI) signifies more than a change in terminology; it represents a major leap in capabilities. It will take many years for AGI to be fully realized, but the evolution is well underway. In the meantime, most AI applications developed today remain classified as narrow AI.
Simply put, AGI is AI that could accomplish any task a human can. In principle, it would have all the potential of a human brain: it could tackle any problem in any domain, from music composition to logistics, covering the full range of actions humans can perform.
This article discusses General AI and highlights how the AI industry is advancing its efforts to develop it.