
A gene that plays a key role in regulating how bodies change across the 24-hour day also influences memory formation, allowing mice to consolidate memories better during the day than at night. Researchers at Penn State tested the memory of mice during the day and at night, then identified genes whose activity fluctuated in a memory-related region of the brain in parallel with memory performance.

Experiments showed that the gene, Period 1, which is known to be involved in the body’s circadian clock, is crucial for improved daytime memory. A paper describing the research was published online in the journal Neuropsychopharmacology.

The research demonstrates a link between the circadian clock and memory formation and begins to piece together the mechanisms that help form and maintain memories. Understanding these mechanisms and the influence of time of day on memory formation could help researchers determine how and when people learn best.

Credit: Hyundai Motor Group.

During a press conference held yesterday in Seoul, South Korea, Hyundai Motor Group revealed plans for a new generation of high-tech cars incorporating nanoscale features, which it hopes to begin mass producing by 2025–2026.

Nanotechnology refers to materials and devices that operate at scales smaller than one hundred nanometres (nm). A nanometre is one billionth of a metre, or about 100,000 times narrower than a human hair. Individual atoms, for comparison, tend to range in size from 0.1 to 0.5 nm. Many interesting and unique physical effects emerge at this scale, which is what makes nanotechnology so promising.
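To put those scale comparisons in perspective, here is a minimal sketch of the arithmetic behind them, assuming a human hair width of roughly 100 micrometres (a commonly cited ballpark figure, not stated in the article):

```python
# Quick sanity check of the scale comparisons above.
# Assumption: a human hair is roughly 100 micrometres (1e-4 m) wide.

NANOMETRE_M = 1e-9          # one billionth of a metre
HAIR_WIDTH_M = 100e-6       # ~100 micrometres (assumed)

hair_in_nm = HAIR_WIDTH_M / NANOMETRE_M
print(f"A human hair is about {hair_in_nm:,.0f} nm wide")  # ~100,000 nm

# Atoms span roughly 0.1 to 0.5 nm, so even the 100 nm upper bound of
# "nanoscale" is only a few hundred to a thousand atoms across.
print(f"Atoms spanning 100 nm: {100 / 0.5:.0f} to {100 / 0.1:.0f}")
```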

Human cerebral organoids are three-dimensional biological cultures grown in the laboratory to mimic as closely as possible the cellular composition, structure, and function of the corresponding organ, the brain. For now, cerebral organoids lack blood vessels and other characteristics of the human brain, yet they are capable of coordinated electrical activity. They have been usefully employed to study several diseases and the development of the nervous system in unprecedented ways. Research on human cerebral organoids is proceeding at a very fast pace, and their complexity is bound to increase. This raises the question of whether cerebral organoids will also be able to develop the unique feature of the human brain: consciousness. If so, ethical issues would arise. In this article, we discuss the neural correlates and constraints necessary for the emergence of consciousness according to some of the most debated neuroscientific theories. On this basis, we consider what the moral status of a potentially conscious brain organoid might be, in light of ethical and ontological arguments. We conclude by proposing a precautionary principle and some leads for further investigation. In particular, we consider whether the outcomes of some very recent experiments should be regarded as entities of a potentially new kind.

It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US.

Last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot, ChatGPT. Meanwhile, artists, authors, and the image company Getty are suing AI companies such as OpenAI, Stability AI, and Meta, alleging that the companies broke copyright law by training their models on creators’ work without any recognition or payment.

OpenAI is releasing an Android version of its ChatGPT app next week, following the iOS launch in May.

Since launching in November, OpenAI’s ChatGPT has added users at a rate that’s astounding for anything outside of Threads, and now the company says it’s ready to release an app for Android.

The ChatGPT for Android app is launching a few months after the free iOS app brought the chatbot to iPhones and iPads.

In a tweet, the company announced that ChatGPT for Android is rolling out next week, without giving a specific day, and linked to a preorder page in the Google Play Store where you can register to have it installed once the app is available.


The Biden administration announced on Friday a voluntary agreement with seven leading AI companies, including Amazon, Google, and Microsoft. The move, ostensibly aimed at managing the risks posed by AI and protecting Americans’ rights and safety, has provoked a range of questions, the foremost being: What does the new voluntary AI agreement mean?

At first glance, the voluntary nature of these commitments looks promising. Regulation in the technology sector is always contentious, with companies wary of stifling growth and governments eager to avoid making mistakes. By sidestepping the direct imposition of command and control regulation, the administration can avoid the pitfalls of imposing…


That said, it’s not an entirely hollow gesture. It does emphasize important principles of safety, security, and trust in AI, and it reinforces the notion that companies should take responsibility for the potential societal impact of their technologies. Moreover, the administration’s focus on a cooperative approach involving a broad range of stakeholders hints at a promising direction for future AI governance. However, we should not lose sight of the risk of government growing too cozy with industry.

Still, let’s not mistake this announcement for a seismic shift in AI regulation. It is at best a modest step on the path to responsible AI. At the end of the day, what the government and these companies have done is put out a press release.

Apple is secretly developing its own generative AI tools to challenge OpenAI’s GPT and other language models like Google’s Bard, reports Mark Gurman of Bloomberg News. Internally dubbed “Apple GPT,” the company’s ChatGPT clone is already being used by some employees for work tasks.

Gurman reveals that a cross-functional collaboration is underway at Apple, with teams from AI, software engineering, and cloud infrastructure working together on this covert project. This initiative is driven by concerns within Apple that its devices may lose their essential status if the company lags behind its competitors in AI.

AI has been a component of Apple’s devices for years, primarily operating behind the scenes to enhance photo and video quality and to power features such as crash detection. More recently, Apple has begun introducing AI-powered enhancements to iOS.