Computational neuroscientist Kanaka Rajan, a leader in using AI and machine learning to study the brain, to join Harvard Medical School faculty and serve as a founding faculty member at the Kempner Institute

CAMBRIDGE, MA — The Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University announces the appointment of Dr. Kanaka Rajan, the first faculty member hired within the recently launched Kempner Institute. As a founding faculty member at the Kempner, Dr. Rajan will serve as an institute investigator. She will also have a dual appointment, serving as a member of the faculty in the Department of Neurobiology at Harvard Medical School.

Working jointly with the HMS Department of Neurobiology and the Kempner Institute, Dr. Rajan will support the intersecting research, scientific, and educational missions of both communities. Dr. Rajan starts in September 2023.

“We are thrilled to have Dr. Rajan join the Kempner, where she will play a key role in helping to shape and advance the institute’s research program,” said Kempner Co-Director Bernardo Sabatini. “She is a true leader in the field, using innovative techniques to tackle big, difficult questions, and expanding the possibilities for how we use artificial intelligence and machine learning to understand the enduring mysteries of the brain.”

Exploring the effects of hardware implementation on the exploration space of evolvable robots

Evolutionary robotics is a sub-field of robotics aimed at developing artificial “organisms” that can improve their capabilities and body configuration in response to their surroundings, just as humans and animals evolve, adapting their skills and appearance over time. A growing number of roboticists have been trying to develop these evolvable robotic systems, leveraging recent artificial intelligence (AI) advances.
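For readers unfamiliar with the approach, the sketch below shows the generic evolutionary loop the field builds on: a population of robot "genomes" is evaluated (typically in simulation), the fittest are kept, and mutated copies fill the next generation. The genome encoding and fitness function here are toy stand-ins for illustration only, not the method used in the paper discussed below.

import random

def random_genome(length: int = 8) -> list[float]:
    # A genome might encode, e.g., body-segment sizes and controller gains.
    return [random.uniform(0.0, 1.0) for _ in range(length)]

def mutate(genome: list[float], rate: float = 0.1) -> list[float]:
    # Small random perturbations to each gene.
    return [g + random.gauss(0.0, rate) for g in genome]

def fitness(genome: list[float]) -> float:
    # Placeholder for evaluating the robot's body and controller in simulation.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size: int = 20, generations: int = 50) -> list[float]:
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 2) for g in best])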

A key challenge in this field is to effectively transfer robots from simulations to real-world environments without compromising their performance and abilities. A paper by researchers at the University of York, Edinburgh Napier University, Vrije Universiteit Amsterdam, the University of the West of England, and the University of Sunderland, published in Frontiers in Robotics and AI, investigated the impact that hardware can have on the exploration space of evolvable robots.

“One of the greatest challenges for evolutionary robotics is bringing it into the hardware space and creating real, useful robots,” Mike Angus, a research engineer who designed hardware for the study, told Tech Xplore.

Toyota Robots That Do Housework!

“Operating and navigating in home environments is very challenging for robots. Every home is unique, with a different combination of objects in distinct configurations that change over time. To address the diversity a robot faces in a home environment, we teach the robot to perform arbitrary tasks with a variety of objects, rather than program the robot to perform specific predefined tasks with specific objects. In this way, the robot learns to link what it sees with the actions it is taught. When the robot sees a specific object or scenario again, even if the scene has changed slightly, it knows what actions it can take with respect to what it sees.

We teach the robot using an immersive telepresence system, in which there is a model of the robot, mirroring what the robot is doing. The teacher sees what the robot is seeing live, in 3D, from the robot’s sensors. The teacher can select different behaviors to instruct and then annotate the 3D scene, such as associating parts of the scene to a behavior, specifying how to grasp a handle, or drawing the line that defines the axis of rotation of a cabinet door. When teaching a task, a person can try different approaches, making use of their creativity to use the robot’s hands and tools to perform the task. This makes leveraging and using different tools easy, allowing humans to quickly transfer their knowledge to the robot for specific situations.

Historically, robots, like most automated cars, continuously perceive their surroundings, predict a safe path, then compute a plan of motions based on this understanding. At the other end of the spectrum, new deep learning methods compute low-level motor actions directly from visual inputs, which requires a significant amount of data from the robot performing the task. We take a middle ground. Our teaching system only needs to understand things around it that are relevant to the behavior being performed. Instead of linking low-level motor actions to what it sees, it uses higher-level behaviors. As a result, our system does not need prior object models or maps. It can be taught to associate a given set of behaviors to arbitrary scenes, objects, and voice commands from a single demonstration of the behavior. This also makes the system easy to understand and makes failure conditions easy to diagnose and reproduce.”
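As a rough illustration of the "associate behaviors with what the robot sees" idea described in the quote above, the sketch below stores single demonstrations as (scene embedding, behavior) pairs and, at run time, picks the behavior whose taught scene is most similar to the current view. All names and structures here are hypothetical and do not represent Toyota's actual system or API.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class Demonstration:
    scene_embedding: np.ndarray   # visual features captured while the teacher annotates the scene
    behavior: str                 # taught behavior, e.g. "open_cabinet_door"

@dataclass
class BehaviorLibrary:
    demos: list[Demonstration] = field(default_factory=list)

    def teach(self, scene_embedding: np.ndarray, behavior: str) -> None:
        # A single demonstration is enough to register a scene-to-behavior link.
        self.demos.append(Demonstration(scene_embedding, behavior))

    def select_behavior(self, scene_embedding: np.ndarray) -> str:
        # Pick the behavior whose taught scene looks most like the current view (cosine similarity).
        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        best = max(self.demos, key=lambda d: cosine(scene_embedding, d.scene_embedding))
        return best.behavior

library = BehaviorLibrary()
library.teach(np.array([0.9, 0.1, 0.0]), "open_cabinet_door")
library.teach(np.array([0.0, 0.2, 0.8]), "wipe_counter")
print(library.select_behavior(np.array([0.8, 0.2, 0.1])))   # -> "open_cabinet_door"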

How Companies Can Cope With the Risks of Generative AI Tools

Everyone’s experienced the regret of telling a secret they should’ve kept. Once that information is shared, it can’t be taken back. It’s just part of the human experience.

Now it’s part of the AI experience, too. Whenever someone shares something with a generative AI tool — whether it’s a transcript they’re trying to turn into a paper or financial data they’re attempting to analyze — it cannot be taken back.

Generative AI solutions such as ChatGPT and Google’s Bard have been dominating headlines. The technologies show massive promise for a myriad of use cases and have already begun to change the way we work. But along with these big new opportunities come big risks.

AI and design: Exploring the synergy of creativity and technology

Generative AI is dominating the conversation in 2023, and the design community is no exception to its transformative potential. Product innovations fueled by emerging AI capabilities have the potential to unlock new opportunities and put the power of real-time intelligence in customers’ hands like never before.

As a design leader focused on creating innovative products and solutions for millions of our consumers and for thousands of our employees, I find AI’s potential particularly exciting for the design discipline. New technological advances like generative AI, computer vision, natural language processing and large language models can augment, complement and elevate the capabilities of designers, enabling them to focus on work that delivers maximum value to their users. At the same time, there are ongoing and important conversations about designing and implementing new safeguards and frameworks to mitigate risk and ensure the responsible application of AI.

Let’s take a closer look at the dynamic intersection of AI and design, focusing on how AI-enhanced design tools can enhance designer workflows, improve outputs and fuel product innovation.

AI performs comparably to human readers of mammograms

Using a standardized assessment, researchers in the UK compared the performance of a commercially available artificial intelligence (AI) algorithm with human readers of screening mammograms. The findings were published in Radiology.

Mammographic screening does not detect every cancer. False-positive interpretations can result in women without cancer undergoing unnecessary imaging and biopsy. One solution to improve the sensitivity and specificity of screening mammography is to have two readers interpret every mammogram.

According to the researchers, double reading increases cancer detection rates by 6 to 15% and keeps recall rates low. However, this strategy is labor-intensive and difficult to achieve during reader shortages.
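As a back-of-the-envelope illustration of why a second reader helps, the snippet below computes the combined sensitivity of two readers under the simplifying (and optimistic) assumption that their misses are independent; real readers' errors are correlated, which is consistent with the more modest 6 to 15% gain reported above. The sensitivity values are hypothetical.

def combined_sensitivity(sens_a: float, sens_b: float) -> float:
    # Probability that at least one of two independent readers detects a cancer.
    return 1.0 - (1.0 - sens_a) * (1.0 - sens_b)

single = 0.85                                   # hypothetical single-reader sensitivity
double = combined_sensitivity(single, single)   # idealized independent double reading
print(f"single reader: {single:.0%}, independent double reading: {double:.1%}")
# Correlation between human readers shrinks the real-world gain toward the 6-15% range.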

Spies are Using New Malware to Target Mobile Devices in Ukraine

Ukraine’s security agency reports that Russia’s military intelligence service, the GRU, can access compromised Android devices using new malware called Infamous Chisel. The malware is associated with the threat actor Sandworm, which has previously been attributed to the GRU’s Main Centre for Special Technologies (GTsST).

Sandworm is using the new malware to target Android devices used by the Ukrainian military. Infamous Chisel enables unauthorized access to compromised devices and is designed to scan files, monitor traffic, and steal information.
