
Physicists are (temporarily) augmenting reality to crack the code of quantum systems.

Predicting the properties of a molecule or material requires calculating the collective behavior of its electrons. Such predictions could one day help researchers develop new pharmaceuticals or design materials with sought-after properties such as superconductivity. The problem is that electrons can become “quantum mechanically” entangled with one another, meaning they can no longer be treated individually. The entangled web of connections becomes absurdly tricky for even the most powerful computers to unravel directly for any system with more than a handful of particles.

Now, researchers at the Flatiron Institute’s Center for Computational Quantum Physics (CCQ) in New York City and the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have sidestepped the problem. They created a way to simulate entanglement by adding extra “ghost” electrons to their computations that interact with the system’s actual electrons.
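The flavor of the trick can be shown in a toy numerical sketch (the sizes, names and tanh parameterization below are illustrative assumptions, not the researchers’ actual construction): a plain Slater determinant treats electrons as effectively independent, but enlarging the determinant with extra “ghost” rows that depend on the whole electron configuration lets a single determinant encode correlations among the real electrons.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sites = 8    # lattice sites available to the electrons
n_elec = 4     # actual ("visible") electrons
n_ghost = 2    # hypothetical number of ghost electrons
n_orb = n_elec + n_ghost

# One row of orbital values per site; columns are orbitals of the
# enlarged (visible + ghost) system.
phi = rng.normal(size=(n_sites, n_orb))
# Parameters that make the ghost rows depend on the full configuration.
W = rng.normal(size=(n_sites, n_ghost * n_orb))

def amplitude(occupied):
    """Wavefunction amplitude for one electron configuration.

    The first n_elec rows are ordinary orbital values at the occupied
    sites (a bare Slater determinant would stop here and describe
    independent electrons). The ghost rows are functions of the whole
    configuration, so the enlarged determinant can represent entangled
    electrons.
    """
    x = np.zeros(n_sites)
    x[occupied] = 1.0
    ghost_rows = np.tanh(x @ W).reshape(n_ghost, n_orb)
    M = np.vstack([phi[occupied], ghost_rows])  # (n_orb, n_orb) matrix
    return np.linalg.det(M)

print(amplitude([0, 2, 5, 7]))
```

In a variational calculation, parameters like W would then be tuned so that the enlarged determinant approximates the true, entangled ground state.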

Recently, Blake Lemoine, a computer scientist and machine learning bias researcher at Google, released an interview with Google’s LaMDA, a conversational AI. Based on his time testing LaMDA, Blake proposes that it is a superintelligence and sentient. Blake details just what made him come to this conclusion and why he believes we passed the singularity last year.

Blake’s links:
https://twitter.com/cajundiscordian.
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

News links:
https://www.vox.com/23167703/google-artificial-intelligence-…l-sentient.
https://blog.google/technology/ai/lamda/


Social robots, robots that can interact with humans and assist them in their daily lives, are gradually being introduced in numerous real-world settings. These robots could be particularly valuable for helping older adults to complete everyday tasks more autonomously, thus potentially enhancing their independence and well-being.

Researchers at the University of Bari have been investigating the potential of social robots for ambient assisted living applications for numerous years. Their most recent paper, published in UMAP’22 Adjunct: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, specifically explores the value of allowing social robots that assist seniors to learn the relationships between a user’s routines and his/her affective state.

“Social robots should support older adults in their daily activities and, at the same time, they should contribute to emotional wellness by considering affective factors in everyday situations,” Berardina De Carolis, Stefano Ferilli and Nicola Macciarulo wrote in their paper. “The main goal of this research is to investigate whether it is possible to learn relations between the user’s affective state and daily routines, made up of activities, with the aid of a social robot, Pepper in this case.”
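As a rough illustration of the kind of relation being learned (the data, activity names and counting method here are invented for the sketch, not taken from the paper), the robot could tally how often each activity in a routine co-occurs with a reported affective state and use those counts to predict mood:

```python
from collections import Counter, defaultdict

# Each observation pairs a daily routine (a set of activities, perhaps
# logged by the robot) with the user's reported affective state.
observations = [
    ({"walk", "breakfast"}, "calm"),
    ({"walk", "tv"}, "calm"),
    ({"tv", "nap"}, "bored"),
    ({"walk", "breakfast"}, "calm"),
    ({"tv", "nap"}, "bored"),
]

# Count how often each activity co-occurs with each affective state.
counts = defaultdict(Counter)
for activities, mood in observations:
    for act in activities:
        counts[act][mood] += 1

def predict_mood(activities):
    """Vote by per-activity co-occurrence counts."""
    vote = Counter()
    for act in activities:
        vote.update(counts[act])
    return vote.most_common(1)[0][0] if vote else None

print(predict_mood({"walk", "tv"}))  # -> 'calm'
```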

To efficiently navigate real-world environments, robots typically analyze images collected by imaging devices integrated into their bodies. To enhance the performance of robots, engineers have thus been trying to develop different types of high-performing cameras, sensors and artificial vision systems.

Many artificial systems developed so far draw inspiration from the eyes of humans, animals, insects and fish. These systems have different features and characteristics, depending on the environments in which they are designed to operate.

Most existing sensors and cameras are designed to work either on the ground (i.e., in terrestrial environments) or underwater (i.e., in aquatic environments). Bio-inspired artificial vision systems that can operate in both terrestrial and aquatic environments, on the other hand, remain scarce.

Researchers at Oxford University’s Department of Materials, working in collaboration with colleagues from Exeter and Münster, have developed an on-chip optical processor capable of detecting similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.

The new research published in Optica took its inspiration from Nobel Prize laureate Ivan Pavlov’s discovery of classical conditioning. In his experiments, Pavlov found that by providing another stimulus during feeding, such as the sound of a bell or metronome, his dogs began to link the two experiences and would salivate at the sound alone. The repeated associations of two unrelated events paired together could produce a learned response—a conditional reflex.

Co-first author Dr. James Tan You Sian, who did this work as part of his DPhil in the Department of Materials, University of Oxford, said, “Pavlovian associative learning is regarded as a basic form of learning that shapes the behavior of humans and animals—but adoption in AI systems is largely unheard of. Our research on Pavlovian learning in tandem with optical parallel processing demonstrates the exciting potential for a variety of AI tasks.”
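The learning rule behind Pavlov’s example is simple enough to sketch in a few lines (a minimal toy model; the channel names, weights and learning rate are assumptions, and the actual device realizes this with light pulses in phase-change couplers rather than software):

```python
# Minimal sketch of Pavlovian (associative) conditioning.
w_food = 1.0   # unconditioned link: food -> salivation (innate, fixed)
w_bell = 0.0   # conditioned link: starts absent
lr = 0.25      # assumed learning rate

def salivation(food, bell):
    """Response strength for a given pair of stimuli (0.0 or 1.0 each)."""
    return w_food * food + w_bell * bell

# Training: present food and bell together; a Hebbian-style update
# strengthens the bell->salivation link only when both are active.
for trial in range(8):
    food, bell = 1.0, 1.0
    w_bell += lr * food * bell * (1.0 - w_bell)  # saturates at 1.0

print(f"after training, bell alone -> salivation = {salivation(0.0, 1.0):.2f}")
```

After repeated pairings the bell alone produces nearly the full response, which is exactly the conditional reflex the article describes.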

India is a developing country where robotics is still lagging because of the limited availability of robotics components, 3D-printed parts and suitable quality motors. The science of robotics requires high-end technology to be implemented by researchers or robotics scientists.

Despite all this, if a teacher (not a robotics scientist) can work in the field and develop a prototype comparable with the robots built by large, well-resourced companies with the help of their best engineers on huge budgets, then that person deserves appreciation.

A computer science teacher at Kendriya Vidyalaya, IIT Bombay, Dinesh Kunwar Patel has developed the world’s first social and educational humanoid robot ‘Shalu’, which can speak 47 languages, including 9 Indian and 38 foreign languages. The homemade robot ‘Shalu’ is made of waste materials, including cardboard, wood, and aluminium.

The AI Dungeon project first became available in December 2019 and attracted attention for its advanced artificial intelligence — it generated coherent text adventures in which you could perform any action by typing it in the input window. Now AI Dungeon has appeared on Steam.

The gameplay resembles 1970s text adventures: you’re told what’s going on, and you write what you’re going to do. But while the 1970s games accepted only tightly constrained answers, AI Dungeon tries to accommodate whatever you type. You only have to specify the type of input: action, word, or event.
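A toy version of that loop is easy to sketch (the prefix mapping and the generate stub below are illustrative assumptions, not AI Dungeon’s actual code):

```python
# Toy sketch of an AI Dungeon-style input loop. `generate` is a stand-in
# for a language model; the prefix mapping is an assumption.

def generate(story: str) -> str:
    # placeholder: a real version would call a text-generation model here
    return " Something stirs in the darkness..."

PREFIXES = {
    "action": "You",        # "draw your sword" -> "You draw your sword"
    "word":   "You say,",   # spoken dialogue
    "event":  "",           # narration inserted into the story verbatim
}

story = "You stand at the mouth of a dark cave."
while True:
    kind = input("input type (action/word/event, blank to quit): ").strip()
    if not kind:
        break
    text = input("> ").strip()
    entry = f"{PREFIXES.get(kind, '')} {text}".strip()
    story += " " + entry + generate(story)
    print(story)
```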

AI Dungeon also allows you to generate a world, fill it with details of your choice, or dive into the worlds of other players. In addition, you can control the progress of an adventure: force the AI to create a paragraph again or even rewrite it completely by hand. Adventures can also be played through with your friends.

Dubbed GRACE, the robot hand can even bend fingers and make realistic human movements.

* Scientists have developed a new type of artificial muscle that can lift 1,000 times its own weight
* 3D actuators were combined to form a real-life robot hand that could lift 8 kg
* The high-strength properties could be applied to create higher capabilities in other body parts and a range of devices

A team of researchers from the Italian Institute of Technology has just developed a new class of high-strength artificial muscles that can stretch and contract like a human muscle in a way that has never been done before. According to a recent research paper, the muscles perform with a level of versatility and grace closely matched to life-like movements, and provide a boost in the development of three-dimensional functional devices such as artificial body parts. This new class of strong pneumatic artificial muscles has been combined to form a robot hand that can lift up to 1,000 times its own weight.

The autonomous, miniaturized robot could remotely mimic the movements used in surgery in space.

MIRA, short for miniaturized in vivo robotic assistant.


An autonomous, miniaturized robot could soon perform simulated tasks that mimic movements used in surgery without the help of doctors or astronauts.

Meet MIRA, short for miniaturized in vivo robotic assistant. Invented by Nebraska Engineering Professor Shane Farritor, the surgical robot is being readied for a 2024 test mission aboard the International Space Station. For this, NASA recently awarded the University of Nebraska-Lincoln $100,000 through the Established Program to Stimulate Competitive Research (EPSCoR) at the University of Nebraska Omaha.