Page 7

Mar 20, 2023

New Ultralight Material Is Tougher than Steel and Kevlar

Posted by in categories: nanotechnology, particle physics

A joint research project by engineers from MIT, Caltech, and ETH Zurich, just published in the journal Nature Materials, has yielded a “nano-architectured” material that could prove stronger than Kevlar and steel. Once scaled, the material could provide a means of developing lightweight protective coverings, blast shields, and other impact-resistant materials and armor for various industries.

The material is thinner than the width of a human hair, yet still able to prevent tiny, high-speed particles from penetrating it. According to the researchers behind the project, when compared with steel, Kevlar, aluminum, and other impact-resistant materials of comparable weight, the new nanotech armor outperforms them all.

Mar 20, 2023

Electroactive bacterium generates well-defined nanosized metal catalysts with remarkable water-splitting performance

Posted by in categories: biological, chemistry, nanotechnology, particle physics, sustainability

A biological method that produces metal nanoclusters using the electroactive bacterium Geobacter sulfurreducens could provide a cheap and sustainable solution to high-performance catalyst synthesis for various applications such as water splitting.

Metal nanoclusters contain fewer than one hundred atoms and are much smaller than nanoparticles. They have unique electronic properties and also feature numerous active sites available for catalysis on their surface. There are several synthetic methods for making nanoclusters, but most require multiple steps and harsh temperature and pressure conditions.


Mar 20, 2023

How AI could upend the world even more than electricity or the internet

Posted by in categories: employment, internet, robotics/AI

The rise of artificial general intelligence — now seen as inevitable in Silicon Valley — will bring change that is “orders of magnitude” greater than anything the world has yet seen, observers say. But are we ready?

AGI — defined as artificial intelligence with human cognitive abilities, as opposed to more narrow artificial intelligence, such as the headline-grabbing ChatGPT — could free people from menial tasks and usher in a new era of creativity.

But such a historic paradigm shift could also threaten jobs and raise insurmountable social issues, experts warn.

Mar 20, 2023

A Cognitive Revolution in Animal Research

Posted by in category: neuroscience

Animal ‘personalities’ are forcing scientists to rethink basic research.

Mar 20, 2023

DNA synthesis technologies to close the gene writing gap (Nature Reviews Chemistry)

Posted by in categories: biotech/medical, chemistry

There is increasing demand for synthetic DNA. However, our ability to make, or write, DNA lags behind our ability to sequence, or read, it. This Review discusses commercialized DNA synthesis technologies in the pursuit of closing the DNA writing gap.

Mar 20, 2023

Is Poland’s tap water really protected by clams?

Posted by in category: biological

Using living organisms to ensure water safety.

There are a lot of articles written about how tap water in Warsaw is constantly tested by a small team of clams. It felt like a hoax to me, so I went to find out. Thanks to MPWiK Warsaw.


Mar 20, 2023

Biologists Figured Out How to Generate New Neurons in the Adult Brain, Revolutionizing Neurodegenerative Disease Research

Posted by in categories: biotech/medical, genetics, neuroscience

Biologists recently discovered how to generate new neurons in the adult brain, an incredible breakthrough with enormous potential to revolutionize neurodegenerative disease research. By generating genetically modified mice with a unique gene that activates dormant neural stem cells, scientists were able to produce new neurons in the brain. For years, scientists have been searching for ways to promote the growth of new neurons, especially in individuals with neurodegenerative diseases such as Alzheimer’s and Parkinson’s. This discovery could lead to new treatments and therapies that help restore brain function and improve the quality of life for millions of people around the world.

Leslie Samuel, founder of Interactive Biology, gives some context for the importance of genetic trading between organisms for scientific research, and notes how the loss of nerve cells in the brain is one of the hallmarks of neurodegenerative diseases. The ability to generate new neurons in the adult brain could be a game-changer in the field of neurology.

Leslie’s Thoughts


Mar 20, 2023

LoRA Weights

Posted by in categories: robotics/AI, transportation

LoRA: Low-Rank Adaptation of Large Language Models.

🚀 Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA! 🤖

🌟 We’re excited to announce that you can now create custom personal assistants that run directly on your GPUs! ChatLLaMA uses LoRA, trained on Anthropic’s HH dataset, to model seamless conversations between an AI assistant and users. Plus, the RLHF version of LoRA is coming soon! 🔥

📚 Know any high-quality dialogue-style datasets? Share them with us, and we’ll train ChatLLaMA on them!

🌐 ChatLLaMA is currently available for 30B and 13B models, with the 7B version coming soon.

🤔 Have questions or need help setting up ChatLLaMA? Join our Discord group and ask! Let’s revolutionize AI-assisted conversations together! 🌟

Disclaimer: trained for research; no foundation model weights; the post was run through GPT-4 to make it more coherent.
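The core idea behind LoRA can be shown in a few lines: instead of updating a full weight matrix, it learns a low-rank additive correction. A minimal sketch, assuming illustrative dimensions (not ChatLLaMA's actual layer sizes):

```python
import numpy as np

# Sketch of the LoRA idea: the frozen pretrained weight W (d_out x d_in)
# gets a learned low-rank update B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 512, 512, 8, 16   # illustrative, not real model sizes

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero-init so the
                                           # update starts as a no-op

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without ever
    # materializing the full d_out x d_in update matrix.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # zero-init => output unchanged

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%})")
```

With these numbers only about 3% of the layer's parameters are trainable, which is why a LoRA adapter for a 13B or 30B model fits comfortably on a single GPU.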

Mar 20, 2023

Microsoft Researchers Propose A New AI Method That Uses Both Forward And Backward Language Models To Meet In The Middle And Improve The Training Data Efficiency

Posted by in category: robotics/AI

Language models (LMs) have been extensively utilized for various aided writing activities, including text summarization, code completion, and paraphrasing. LMs are effective tools for creating both natural and programming languages. To be useful in a wide range of applications, most LMs must be able to predict the next token from the sequence of earlier tokens. Due to the significance of this operation, pretraining has concentrated on improving the model’s perplexity in predicting the next token given the preceding tokens. However, the training sequences contain extra information that is not being used during pretraining.
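The standard next-token objective and its perplexity can be made concrete with a toy example; the vocabulary and probabilities below are made up for illustration:

```python
import math

# Toy illustration of the autoregressive objective: at each position the
# model emits a distribution over the next token given the prefix, but is
# supervised only by the single token that actually occurred.
predictions = [  # model's P(next token | prefix) at three positions
    {"the": 0.7, "cat": 0.1, "sat": 0.1, "mat": 0.1},
    {"the": 0.1, "cat": 0.6, "sat": 0.2, "mat": 0.1},
    {"the": 0.1, "cat": 0.1, "sat": 0.7, "mat": 0.1},
]
targets = ["the", "cat", "sat"]  # the actual next tokens

# Cross-entropy uses only the probability assigned to the true token;
# everything else the distribution "knows" is discarded.
nll = -sum(math.log(p[t]) for p, t in zip(predictions, targets)) / len(targets)
perplexity = math.exp(nll)
print(f"perplexity: {perplexity:.2f}")
```

With these numbers the script prints `perplexity: 1.50`; lower perplexity means the model concentrates more probability on the tokens that actually occur.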

For instance, when training the model to predict one token, they condition only on the prefix (the prior tokens) and entirely disregard the following tokens (the suffix). There are alternative approaches to including the suffix in pretraining that have yet to be explored in the literature, even though it cannot be used as an input to the model. The researchers aim to increase the pretraining data’s usefulness while maintaining the underlying LM’s autoregressive properties. Their strategy calls for more modeling, which at first glance could appear pointless: after all, an autoregressive left-to-right LM is the primary artifact created during pretraining, and the pretraining objective closely resembles how the LM is used.

Yet there are two reasons to explore different training objectives. The first concerns data efficiency. The LM is trained with a sparse, inexpensive signal: it generates a probability distribution over all potential next-token choices, but is supervised only by the actual next token from the training set. What if a denser form of supervision were used during training, where the predicted next-token distribution was compared against another full probability distribution? The second reason relates to other connected tasks. For instance, in many real-world settings the user may prefer to fill in or edit an existing sequence of tokens rather than creating text entirely from scratch.
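A rough sketch of the denser-supervision idea: a backward model reading the suffix produces a full distribution over the same position, which can serve as a richer training target than a one-hot label. The variable names and the KL-based agreement term here are illustrative assumptions, not Microsoft's exact formulation:

```python
import math

def kl(p, q):
    # KL(p || q) over a shared vocabulary; p and q map token -> probability.
    return sum(pv * math.log(pv / q[t]) for t, pv in p.items() if pv > 0)

p_fwd = {"sat": 0.6, "slept": 0.3, "ran": 0.1}  # forward LM: P(token | prefix)
p_bwd = {"sat": 0.5, "slept": 0.4, "ran": 0.1}  # backward LM: P(token | suffix)
true_token = "sat"

# Standard sparse signal: a single log-probability from the observed token.
sparse_loss = -math.log(p_fwd[true_token])

# Denser signal: also pull the two full distributions toward agreement, so
# every vocabulary entry contributes gradient, not just the true token.
agreement = kl(p_fwd, p_bwd)
total_loss = sparse_loss + agreement
print(f"sparse: {sparse_loss:.3f}, agreement: {agreement:.3f}")
```

The agreement term is zero exactly when the forward and backward models already assign identical distributions, so it only adds gradient where the two views of the data disagree.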


Mar 19, 2023

Books: Hundreds of books created by artificial intelligence (AI) tool ChatGPT are flooding Amazon, showing the way the technology can be adopted to produce books at scale

Posted by in category: robotics/AI

Nearly 300 titles that claim to be written solely by or in collaboration with ChatGPT are listed on the online bookseller’s website, across a range of genres including non-fiction, fantasy and self-help.

Many of the books appear to be published using Amazon’s Kindle Direct Publishing tool, which allows users to quickly create, publish and promote their work using a modern-day equivalent of the self-publishing model.
