
Bothering And Even Spying On Your Neighbors Via An AI Self-Driving Car

Speaking of cars, consider the future of transportation and mobility, entailing the advent of self-driving cars.

It would seem that self-driving cars will be a welcome boon to humanity. Predictions are that the regrettable 40,000 annual car-crash fatalities in the United States alone will be reduced enormously, and likewise that the estimated 2.3 million car-crash injuries will nearly disappear.

What’s not to like about the emergence of self-driving cars?

That brings up this intriguing question: Could the advent of AI-based true self-driving cars somehow get intermingled into the act of bothering a neighbor?

This might seem a rather curious question, one that defies the aura of goodness surrounding the self-driving car realm.

Full Story:


How health care AI could help train tomorrow’s physicians

As the medical community’s understanding of the application of augmented intelligence (AI) in health care grows, there remains the question of how AI—often called artificial intelligence—should be incorporated into physician training. The term augmented intelligence is preferred because it recognizes the enhancement, rather than replacement, of human capabilities.

Understanding how AI can affect patients may help learners appreciate its relevance, he noted. The National Board of Medical Examiners exam now tests physicians-in-training on health systems science, and it includes questions about health care AI specifically.

But AI doesn’t just relate to systems issues. It also has a home within evidence-based medicine (EBM).

Full Story:


“Understandably, there have also been many who have been concerned about fitting new content into already overcrowded curricula,” Dr. James said. This can include figuring out who on the faculty will take on teaching new content.

Brain cell differences could be key to learning in humans and AI

Imperial researchers have found that variability between brain cells might speed up learning and improve the performance of the brain and future artificial intelligence (AI).

The new study found that by tweaking the electrical properties of individual cells in simulations of brain networks, the networks learned faster than simulations with identical cells.

They also found that the networks needed fewer of the tweaked cells to get the same results and that the method is less energy-intensive than models with identical cells.
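The basic intuition can be illustrated with a toy model. The sketch below (using invented time constants, not the paper's actual simulations) compares a population of identical leaky-integrator neurons with one whose membrane time constants are spread over a range: the varied population retains traces of an input over many more timescales.

```python
import math

def leaky_response(tau, t):
    # Impulse response of a leaky integrator with membrane time constant tau (ms).
    return math.exp(-t / tau)

# Homogeneous population: every neuron shares the same time constant.
homog = [20.0] * 5
# Heterogeneous population: time constants spread over a range (hypothetical values).
hetero = [5.0, 10.0, 20.0, 40.0, 80.0]

t = 50.0  # ms after an input pulse
homog_resp = [leaky_response(tau, t) for tau in homog]
hetero_resp = [leaky_response(tau, t) for tau in hetero]

# The heterogeneous population's responses span several orders of magnitude,
# covering both fast and slow timescales; the identical cells all respond alike.
print(min(homog_resp), max(homog_resp))
print(min(hetero_resp), max(hetero_resp))
```

This is only a sketch of why cell-to-cell variability widens a network's temporal repertoire; the study itself trained spiking networks on learning tasks.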

Full Story:


The research is published in Nature Communications.

Why is a neuron like a snowflake?

Neuroscientists roll out first comprehensive atlas of brain cells

A slew of new studies now shows that the area of the brain responsible for initiating movement — the primary motor cortex — has as many as 116 different types of cells that work together to make this happen.

The 17 studies, appearing online Oct. 6 in the journal Nature, are the result of five years of work by a huge consortium of researchers supported by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative to identify the myriad different cell types in one portion of the brain. It is the first step in a long-term project to generate an atlas of the entire brain, to help understand how the neural networks in our head control our body and mind and how they are disrupted in mental and physical disorders.

“If you think of the brain as an extremely complex machine, how could we understand it without first breaking it down and knowing the parts?” asked cellular neuroscientist Helen Bateup, a University of California, Berkeley, associate professor of molecular and cell biology and co-author of the flagship paper that synthesizes the results of the other papers. “The first page of any manual of how the brain works should read: Here are all the cellular components, this is how many of them there are, here is where they are located and who they connect to.”

Artificial intelligence can help halve road deaths by 2030

The Sustainable Development Goals (SDGs) include a call for action to halve the annual rate of road deaths globally and ensure access to safe, affordable, and sustainable transport for everyone by 2030.

According to the newly launched initiative, faster progress on AI is vital to make this happen, especially in low- and middle-income countries, where the most lives are lost on the roads each year.

According to the World Health Organization (WHO), approximately 1.3 million people die annually as a result of road traffic crashes. Between 20 and 50 million more suffer non-fatal injuries, with many incurring a disability.

Full Story:


A woman rushes across a busy road in Brazil. (Photo: PAHO)

AI can help in different ways, including better collection and analysis of crash data, enhancing road infrastructure, increasing the efficiency of post-crash response, and inspiring innovation in the regulatory frameworks.

Google AI Introduces ‘FLAN’: An Instruction-Tuned Generalizable Language (NLP) Model To Perform Zero-Shot Tasks



To generate meaningful text, a machine learning model needs a great deal of knowledge about the world, along with the ability to abstract it. While language models are becoming increasingly capable of acquiring this knowledge automatically as they grow, it is unclear how to unlock it and apply it to specific real-world tasks.

Fine-tuning is one well-established method for doing so. It involves training a pretrained model such as BERT or T5 on a labeled dataset to adapt it to a downstream task. However, it requires a large number of training examples and a separately stored copy of the model weights for each downstream task, which is not always feasible, especially for large models.

A recent Google study looks into a simple technique known as instruction fine-tuning, or instruction tuning. This entails fine-tuning a model to make it more receptive to performing natural language processing (NLP) tasks in general, rather than one specific task.
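In spirit, instruction tuning works by rephrasing many existing labeled datasets as natural-language instructions before fine-tuning on the mixture. A minimal sketch of that templating step (the template wording and task names here are invented for illustration; FLAN uses many templates per task):

```python
def to_instruction(task, example):
    # Map a labeled example onto a natural-language instruction template.
    # Hypothetical templates, for illustration only.
    templates = {
        "sentiment": "Is the sentiment of the following review positive or negative?\n\n{text}",
        "nli": "Premise: {premise}\nHypothesis: {hypothesis}\n\nDoes the premise entail the hypothesis?",
    }
    return templates[task].format(**example)

prompt = to_instruction("sentiment", {"text": "The food was wonderful."})
print(prompt)
```

Because the model sees tasks phrased as instructions during tuning, it can later follow an instruction for a task it was never explicitly trained on, which is the zero-shot behavior the headline refers to.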

Artificial intelligence is evolving all by itself

Circa 2020


Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks — for instance, spotting road signs — and researchers can spend months working out how to connect them so they work together seamlessly.
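The evolutionary loop the article describes — mutate candidates, keep the fittest, repeat — can be shown with a toy example. The sketch below (not the paper's actual system, which evolves whole programs) evolves the two coefficients of a line to fit data:

```python
import random

random.seed(0)

# Target relationship the population should discover: y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]

def fitness(individual):
    a, b = individual
    # Negative squared error: higher is fitter.
    return -sum((a * x + b - y) ** 2 for x, y in data)

def mutate(individual):
    # Perturb one randomly chosen coefficient slightly.
    a, b = individual
    if random.random() < 0.5:
        a += random.gauss(0, 0.3)
    else:
        b += random.gauss(0, 0.3)
    return (a, b)

# Random initial population, then generations of selection plus mutation.
population = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]  # "survival of the fittest"
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print(best, fitness(best))
```

No human specifies how to solve the task; selection pressure alone drives the coefficients toward the target. The research applies the same principle to evolving entire learning algorithms rather than two numbers.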

Could the biggest greenhouse in the US be the future of farming?

As well as high-tech greenhouses, vertical farms, where food is grown indoors in vertically stacked beds without soil or natural light, are growing in popularity. NextOn operates a vertical farm in an abandoned tunnel beneath a mountain in South Korea. US company AeroFarms plans to build a 90,000-square-foot indoor vertical farm in Abu Dhabi, and Berlin-based Infarm has brought modular vertical farms directly to grocery stores, growing fresh produce in Tokyo stores.


AppHarvest says its greenhouse in Morehead, Kentucky, uses robotics and artificial intelligence to grow millions of tons of tomatoes, using 90% less water than in open fields.