
Engineers from MIT, Caltech, and ETH Zurich have just published the findings of a joint research project in the journal Nature Materials: a “nano-architectured” material that could prove stronger than Kevlar and steel. Once scaled up, this material could provide a means of developing lightweight protective coverings, blast shields, and other impact-resistant materials and armors for various industries.

The material is thinner than the width of a human hair, yet it is still able to prevent tiny, high-speed particles from penetrating it. According to the researchers behind the project, when compared with steel, Kevlar, aluminum, and other impact-resistant materials of comparable weight, the new nanotech armor outperforms them all.

A biological method that produces metal nanoclusters using the electroactive bacterium Geobacter sulfurreducens could provide a cheap and sustainable solution to high-performance catalyst synthesis for various applications such as water splitting.

Metal nanoclusters contain fewer than one hundred atoms and are much smaller than nanoparticles. They have unique electronic properties but also feature numerous active sites available for catalysis on their surface. There are several synthetic methods for making nanoclusters, but most require multiple steps involving harsh temperature and pressure conditions.

Biological methods are expected to deliver ecofriendly alternatives to conventional chemical synthesis. Yet, to date, they have only led to large nanoparticles in a wide range of sizes. “We found a way to control the size of the nanoclusters,” says Rodrigo Jimenez-Sandoval, a Ph.D. candidate in Pascal Saikaly’s group at KAUST.

The rise of artificial general intelligence — now seen as inevitable in Silicon Valley — will bring change that is “orders of magnitude” greater than anything the world has yet seen, observers say. But are we ready?

AGI — defined as artificial intelligence with human cognitive abilities, as opposed to more narrow artificial intelligence, such as the headline-grabbing ChatGPT — could free people from menial tasks and usher in a new era of creativity.

But such a historic paradigm shift could also threaten jobs and raise insurmountable social issues, experts warn.

Recently, biologists discovered how to generate new neurons in the adult brain, an incredible breakthrough with enormous potential to revolutionize neurodegenerative disease research. By engineering genetically-mutated mice with a gene that activates dormant neural stem cells, scientists were able to produce new neurons in the brain. For years, scientists have been searching for ways to promote the growth of new neurons in the brain, especially in individuals with neurodegenerative diseases such as Alzheimer’s and Parkinson’s. This new discovery could lead to treatments and therapies that help restore brain function and improve the quality of life for millions of people around the world.

Leslie Samuel, founder of Interactive Biology, gives some context for the importance of genetic trading between organisms for scientific research, and notes how the loss of nerve cells in the brain is one of the hallmarks of neurodegenerative diseases. The ability to generate new neurons in the adult brain could be a game-changer in the field of neurology.

Leslie’s Thoughts

LoRA: Low-Rank Adaptation of Large Language Models

🚀 Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA! 🤖 🌟 We’re excited to announce that you can now create custom personal assistants that run directly on your GPUs! ChatLLaMA uses LoRA, trained on Anthropic’s HH dataset, to model seamless conversations between an AI assistant and users. Plus, the RLHF version of LoRA is coming soon! 🔥 📚 Know any high-quality dialogue-style datasets? Share them with us, and we’ll train ChatLLaMA on them! 🌐 ChatLLaMA is currently available for 30B and 13B models, with the 7B version coming soon. 🤔 Have questions or need help setting up ChatLLaMA? Join our Discord group and ask! Let’s revolutionize AI-assisted conversations together! 🌟 Disclaimer: trained for research only; no foundation model weights are included; the post was run through GPT-4 to make it more coherent.
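For readers curious how a LoRA adapter is typically attached to a base model, here is a minimal sketch using the Hugging Face peft library. It is a generic illustration under assumed settings (the checkpoint name, rank, and target modules are placeholders), not the actual ChatLLaMA configuration.

```python
# Minimal LoRA fine-tuning setup sketch (illustrative; not the ChatLLaMA recipe).
# Assumes the Hugging Face transformers and peft libraries are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "path/to/llama-13b"  # placeholder checkpoint name
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# LoRA injects small low-rank adapter matrices into selected weight matrices,
# so only a tiny fraction of parameters is trained on the dialogue data.
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```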

Language models (LMs) have been extensively utilized for various assisted-writing activities, including text summarization, code completion, and paraphrasing. LMs are effective tools for generating both natural and programming languages. To be useful across a wide range of applications, most LMs must be able to predict the next token from the sequence of earlier tokens. Because of the significance of this operation, pretraining has concentrated on improving the model’s perplexity in predicting the next token given the preceding tokens. However, the training data contains extra information that these models do not use during pretraining.
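To make that objective concrete, here is a minimal PyTorch sketch of the standard next-token cross-entropy loss described above; the tensor names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Standard autoregressive objective: predict token t+1 from tokens <= t.

    logits: (batch, seq_len, vocab) model outputs
    tokens: (batch, seq_len) input token ids
    """
    # Shift so position t is scored against the token at position t + 1.
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    # Supervision is a single "hard" label per position: the actual next token.
    return F.cross_entropy(pred, target)
```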

For instance, while training the model to predict a single token, they condition only on the prefix (the prior tokens) and entirely disregard the following tokens (the suffix). There are alternative approaches for including the suffix in pretraining that have yet to be discussed in the literature, even though it cannot be used as an input to the model. The authors want to increase the usefulness of the pretraining data while maintaining the underlying LM’s autoregressive properties. Their strategy calls for more modeling, which at first glance could appear unnecessary. After all, an autoregressive left-to-right LM is the primary artifact created during pretraining, and the pretraining objective closely resembles how the LM is used.
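One way to see why the suffix is invisible to a left-to-right LM is the causal attention mask: each position may attend only to earlier positions. The snippet below is a generic illustration of that mask, not any particular paper’s method.

```python
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    """Lower-triangular mask: position t may attend only to positions <= t.

    Tokens that come *after* position t (the suffix) are masked out, so they
    contribute nothing when the model predicts the token at position t + 1.
    """
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

print(causal_mask(5))
```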

Yet there are two reasons to explore different training objectives. The first is data efficiency. The LM produces a probability distribution over all potential next tokens, but it is trained with a sparse, inexpensive signal: it is supervised only by the single actual next token from the training set. What if a denser form of supervision were used during training, in which the predicted next-token distribution was compared against a full target probability distribution? The second reason relates to other, connected tasks. In many real-world settings, for instance, the user may prefer to fill in or edit an existing sequence of tokens rather than creating text entirely from scratch.
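As a rough picture of what that denser supervision could look like, the sketch below compares the model’s predicted next-token distribution against a full target distribution using a KL divergence. The idea that such a target might come from a teacher that also conditions on the suffix is an assumption for illustration, not something the passage specifies.

```python
import torch
import torch.nn.functional as F

def distribution_matching_loss(student_logits: torch.Tensor,
                               target_probs: torch.Tensor) -> torch.Tensor:
    """Denser supervision: match a full next-token distribution, not one label.

    student_logits: (batch, seq_len, vocab) from the left-to-right LM
    target_probs:   (batch, seq_len, vocab) target distribution, e.g. from a
                    hypothetical teacher that also sees the suffix
    """
    vocab = student_logits.size(-1)
    log_q = F.log_softmax(student_logits, dim=-1).reshape(-1, vocab)
    p = target_probs.reshape(-1, vocab)
    # KL(target || student), averaged over every (batch, position) pair.
    return F.kl_div(log_q, p, reduction="batchmean")
```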

Hundreds of books created with the artificial intelligence (AI) tool ChatGPT are flooding Amazon, showing how the technology can be used to produce books at scale.

Nearly 300 titles that claim to be written solely by or in collaboration with ChatGPT are listed on the online bookseller’s website, across a range of genres including non-fiction, fantasy and self-help.

Many of the books appear to be published using Amazon’s Kindle Direct Publishing tool, which allows users to quickly create, publish and promote their work using a modern-day equivalent of the self-publishing model.