
X-AR uses wireless signals and computer vision to enable users to perceive things that are invisible to the human eye (i.e., to deliver non-line-of-sight perception). It combines new antenna designs, wireless signal processing algorithms, and AI-based fusion of different sensors.

This design introduces three main innovations:

1) AR-conformal wide-band antenna that tightly matches the shape of the AR headset visor and provides the headset with Radio Frequency (RF) sensing capabilities. The antenna is flexible, lightweight, and fits on existing headsets without obstructing any of their cameras or the user’s field of view.

A new multimodal artificial intelligence model from Microsoft called Kosmos-1 can process both text and visual data, answering a visual IQ test with 26 percent accuracy, and researchers say this is a step towards AGI. Stable Diffusion AI can now read brain waves to reconstruct images that people are thinking about. Stanford, with the help of AI, has created a world-record brain-computer interface device that allows patients to type 62 words per minute with their thoughts.

AI News Timestamps:
0:00 Microsoft Kosmos-1 AI & AGI
3:34 AI Neuroscience Tech Reads Brain Waves
5:43 AI & BCI Breaks Record


Dr. Li Jiang is a director of the Stanford AIRE program. Many of you think ChatGPT started the era of AI, but Dr. Jiang says it had already begun. AI seems to perform much better than we do, and it appears able to solve many problems. So what can we do? How can we survive in the era of AI? What should we do? Dr. Jiang suggests a method for those of us facing the era of AI.

Stanford DLI Challenge is a unique program that empowers individuals to create cutting-edge digital learning solutions. With guidance from experienced educators and designers, gain hands-on experience with the latest technologies and teaching methods. Sign up now to join a community of educators and designers dedicated to transforming education for the better: https://acceleratelearning.stanford.edu/get-involved/digital…challenge/

00:00 Intro
00:47 Know AI Thinking
01:32 3 Things of AI Thinking
03:45 How Do We Invent New Things?
04:29 5 Steps of Design Thinking
07:05 I Let My Students Use ChatGPT

EO stands for Entrepreneurship & Opportunities.

AERFAI Summer School on Pattern Recognition in Multimodal Human Interaction — Deep Neural Networks for Speech and Image Processing.
This is the sixth edition in a series of AERFAI Summer Schools devoted to a wide range of topics in the fields of Pattern Recognition and Machine Learning. The focus of this year’s Summer School is to provide students with the most relevant techniques to analyze and understand the information conveyed in human audiovisual communication.

Video available at: http://tv.campusdomar.es/en/video/787.html

Scientists Believe ‘Organoid Intelligence’ Is the Future of Computing. CNN reports that as part of a new field called “organoid intelligence,” a computer powered by human brain cells could shape the future. Organoids are lab-grown tissues capable of brain-like functions, such as forming a network of connections. Brain organoids were first grown in 2012 by Dr. Thomas Hartung, a professor of environmental health and engineering, by altering human skin samples. Computing and artificial intelligence have been driving the technology revolution, but they are reaching a ceiling, according to Dr. Hartung.

A new program can streamline the process of creating, launching and analysing computational chemistry experiments. This piece of software, called AQME, is distributed for free under an open source licence, and could contribute to making calculations more efficient, as well as accelerating automated analyses.

‘We estimate time savings of around 70% in routine computational chemistry protocols,’ explains lead author Juan Vicente Alegre Requena, at the Institute of Chemical Synthesis and Homogeneous Catalysis (ISQCH) in Zaragoza, Spain. ‘In modern molecular simulations, studying a single reaction usually involves more than 500 calculations,’ he explains. ‘Generating all the input files, launching the calculations and analysing the results requires an extraordinary amount of time, especially when unexpected errors appear.’

Therefore, Alegre and his colleagues decided to code a piece of software to skip several steps and streamline calculations. Among other advantages, AQME works with simple inputs, instead of the optimised 3D chemical structures usually required by other solutions. ‘It’s exceptionally easy,’ says Alegre. ‘AQME is installed in a couple of minutes, then the only indispensable input is a simple Smiles string.’ Smiles is a system developed by chemist and coder Dave Weininger in the late 1980s, which converts complex chemical structures into a succession of letters and numbers that is machine readable. This cross-compatibility could allow integration with chemical databases and machine-learning solutions, most of which include datasets in Smiles format, explains Alegre.
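To illustrate what a Smiles string encodes, here is a minimal sketch that parses one and produces a machine-readable 3D geometry of the kind computational chemistry codes consume. It uses the open-source RDKit toolkit purely for illustration; this is not AQME’s own interface, and the molecule is an arbitrary example.

```python
# Minimal sketch: from a Smiles string to an initial 3D structure.
# Uses RDKit for illustration; AQME's own workflow may differ.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)Oc1ccccc1C(=O)O"          # aspirin, written as a Smiles string
mol = Chem.MolFromSmiles(smiles)           # parse the string into a molecule
mol = Chem.AddHs(mol)                      # add explicit hydrogens
AllChem.EmbedMolecule(mol, randomSeed=42)  # generate initial 3D coordinates
AllChem.MMFFOptimizeMolecule(mol)          # quick force-field refinement

print(Chem.MolToMolBlock(mol))             # 3D geometry, machine readable
```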

Computer models are an important tool for studying how the brain makes and stores memories and other types of complex information. But creating such models is a tricky business. Somehow, a symphony of signals—both biochemical and electrical—and a tangle of connections between neurons and other cell types creates the hardware for memories to take hold. Yet because neuroscientists don’t fully understand the underlying biology of the brain, encoding the process into a computer model in order to study it further has been a challenge.

Now, researchers at the Okinawa Institute of Science and Technology (OIST) have altered a commonly used computer model of memory called a Hopfield network in a way that improves performance by taking inspiration from biology. They found that not only does the new network better reflect how neurons and other cells wire up in the brain, it can also hold dramatically more memories.
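For readers unfamiliar with the model, below is a minimal sketch of a classical pairwise Hopfield network with Hebbian learning. The setwise connections OIST added are not reproduced here, and all sizes and names are illustrative.

```python
import numpy as np

def train(patterns):
    # Hebbian learning: each stored pattern adds an outer product to a
    # symmetric weight matrix; self-connections (the diagonal) are zeroed.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, steps=20):
    # Asynchronous threshold updates drive the state toward a stored memory.
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))   # three random memories
W = train(patterns)

noisy = patterns[0].copy()
flip = rng.choice(100, size=15, replace=False)  # corrupt 15 of 100 bits
noisy[flip] *= -1

recovered = recall(W, noisy)
print(np.array_equal(recovered, patterns[0]))   # usually True with few memories
```

The classical version stores only a small fraction of the neuron count as reliable memories, which is why capacity-boosting modifications like OIST’s are of interest.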

The complexity added to the network is what makes it more realistic, says Thomas Burns, a Ph.D. student in the group of Professor Tomoki Fukai, who heads OIST’s Neural Coding and Brain Computing Unit. “Why would biology have all this complexity? Memory capacity might be a reason,” Mr. Burns says.

Optical computing has been gaining wide interest for machine learning applications because of the massive parallelism and bandwidth of optics. Diffractive networks provide one such computing paradigm based on the transformation of the input light as it diffracts through a set of spatially-engineered surfaces, performing computation at the speed of light propagation without requiring any external power apart from the input light beam. Among numerous other applications, diffractive networks have been demonstrated to perform all-optical classification of input objects.

Researchers at the University of California, Los Angeles (UCLA), led by Professor Aydogan Ozcan, have introduced a “time-lapse” scheme to significantly improve the accuracy of diffractive optical networks on complex input objects. The findings are published in the journal Advanced Intelligent Systems.

In this scheme, the object and/or the diffractive network are moved relative to each other during the exposure of the output detectors. Such a “time-lapse” scheme has previously been used to achieve super-resolution imaging, for example by capturing multiple images of a scene with lateral movements of the camera.
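As a rough illustration of the idea, the following toy sketch models a diffractive network as phase masks interleaved with far-field (FFT) propagation, and sums the detector intensity over several lateral shifts of the input. This is an assumption-laden simplification for intuition only, not UCLA’s actual optical model.

```python
import numpy as np

def propagate(field):
    # Far-field propagation approximated by a 2D FFT (toy model).
    return np.fft.fftshift(np.fft.fft2(field))

def diffractive_net(field, phase_masks):
    # Each layer applies a spatially engineered phase mask, then propagates.
    for phi in phase_masks:
        field = propagate(field * np.exp(1j * phi))
    return field

def time_lapse_output(obj, phase_masks, shifts):
    # "Time-lapse": the detector integrates intensity while the object
    # is laterally shifted relative to the network during exposure.
    intensity = np.zeros_like(obj, dtype=float)
    for dx, dy in shifts:
        shifted = np.roll(obj, shift=(dy, dx), axis=(0, 1))
        intensity += np.abs(diffractive_net(shifted, phase_masks)) ** 2
    return intensity / len(shifts)

rng = np.random.default_rng(0)
N = 64
obj = (rng.random((N, N)) > 0.9).astype(complex)            # sparse toy object
masks = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(3)]
shifts = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]
print(time_lapse_output(obj, masks, shifts).shape)
```

Averaging the detector intensity over shifted exposures gives the network several complementary “views” of the object, which is the intuition behind the reported accuracy gains.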