
GPT-4 Can Replace Jobs

😗 I am actually pretty happy about this, because full automation will simplify life: rather than needing as much education, the AI can do most of the work, much like the Star Trek computer. Full automation will allow for more freedom even from common tasks, letting the AI do most of the thinking and the tasks.


A senior developer tested GPT-4 on a programming task. GPT-4 produced the Terraform code for a single-instance Fargate API. GPT-4 recognized that this code would not scale to 10,000 requests per second, and it then described how to create an auto-scaling group, modify the code to scale on AWS, and configure the Application Load Balancer.
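
To make the scaling step concrete, here is a minimal sketch of the kind of change GPT-4 described, written in Python with boto3 rather than the Terraform used in the actual test: it registers the Fargate-backed ECS service with Application Auto Scaling and adds a target-tracking policy on request count per target behind the Application Load Balancer. The cluster, service, and target-group names are placeholders, and the target value is illustrative.

```python
# Hedged sketch (boto3, not the original Terraform): auto-scale an ECS/Fargate
# service on ALB request count. All resource names below are placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/demo-cluster/demo-fargate-api"  # hypothetical cluster/service

# Let the service scale between 2 and 50 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Track ALB requests per target; Application Auto Scaling adds or removes
# tasks to keep the metric near the target value.
autoscaling.put_scaling_policy(
    PolicyName="demo-request-count-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 200.0,  # illustrative requests per target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # Placeholder ARN suffix tying the metric to one ALB target group.
            "ResourceLabel": "app/demo-alb/1234567890/targetgroup/demo-tg/0987654321",
        },
    },
)
```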

NOTE: His prompt was far more detailed than an ordinary person would produce, and an ordinary person would not be able to verify the results either. You can make the case for 10x or 100x programmer productivity: a senior developer can become a programming lead or manager, guiding AI prompt requests that produce the equivalent output of multiple programming teams.

The advantage will not be letting people who do not know a topic play with powerful tools. The advantage is increasing the productivity and capacity of competent people to do more in areas they understand. AI tools will uplevel productivity where you know what can and should be done. You do not want someone who does not know how to drive behind the wheel of a Formula One race car.

AI Develops Cancer Drug in 30 Days, Predicts Life Expectancy with 80% Accuracy

AI technology invented by scientists at the University of British Columbia and B.C. Cancer has succeeded in discovering a previously unknown treatment pathway for an aggressive form of liver cancer, designing a new drug to treat it in the process.

The team also deployed AI to determine a patient’s life expectancy, by having it analyze doctors’ notes. The AI reportedly has an 80 percent accuracy rate in its predictions.

The medical advances came about thanks to AlphaFold, an AI-powered protein structure prediction system and database whose outputs can be used to design potential medicines. The team’s work focused on hepatocellular carcinoma (HCC), a common and aggressive form of liver cancer.

ChatGPT stopped working for users worldwide

“I feel like a child who lost a parent in a shopping mall; please give me back my precious ChatGPT,” said one user.

ChatGPT, a viral chatbot from OpenAI, stopped working Monday (20 March), with user complaints pouring in around 4:09 AM EDT (8:09 AM GMT), according to Downdetector, a website that tracks outages.

“Literally two minutes after paid subscription. Not cool,” one user complained.

How AI could upend the world even more than electricity or the internet

The rise of artificial general intelligence — now seen as inevitable in Silicon Valley — will bring change that is “orders of magnitude” greater than anything the world has yet seen, observers say. But are we ready?

AGI — defined as artificial intelligence with human cognitive abilities, as opposed to more narrow artificial intelligence, such as the headline-grabbing ChatGPT — could free people from menial tasks and usher in a new era of creativity.

But such a historic paradigm shift could also threaten jobs and raise insurmountable social issues, experts warn.

LoRA Weights

LoRA: Low-Rank Adaptation of Large Language Models

🚀 Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA! đŸ€–

🌟 We’re excited to announce that you can now create custom personal assistants that run directly on your GPUs! ChatLLaMA utilizes LoRA, trained on Anthropic’s HH dataset, to model seamless convos between an AI assistant & users. Plus, the RLHF version of LoRA is coming soon! đŸ”„

📚 Know any high-quality dialogue-style datasets? Share them with us, and we’ll train ChatLLaMA on them! 🌐 ChatLLaMA is currently available for the 30B and 13B models, with the 7B version coming soon. đŸ€” Have questions or need help setting up ChatLLaMA? Join our Discord group & ask! Let’s revolutionize AI-assisted conversations together! 🌟

Disclaimer: trained for research; no foundation model weights are included; the post was run through GPT-4 to make it more coherent.
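
For readers wondering what LoRA actually does, here is a minimal PyTorch sketch of the core idea (an illustration, not ChatLLaMA’s actual code): the pretrained weight matrix stays frozen and only a small low-rank update is trained, which is why the adapter weights are cheap to train, share, and run on a single GPU.

```python
# Minimal sketch of the LoRA idea: freeze a pretrained weight matrix W and
# learn only a low-rank update B @ A, so the adapted layer computes
# y = x W^T + scale * (x A^T) B^T. Sizes and names below are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight (in practice loaded from the base model).
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.normal_(self.weight, std=0.02)
        self.weight.requires_grad = False

        # Trainable low-rank factors; only these are updated during fine-tuning.
        self.lora_A = nn.Parameter(torch.zeros(rank, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        nn.init.normal_(self.lora_A, std=0.02)  # B stays zero, so training starts at W
        self.scale = alpha / rank

    def forward(self, x):
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scale * update

layer = LoRALinear(768, 768, rank=8)
y = layer(torch.randn(2, 10, 768))  # (batch, seq, hidden)
print(y.shape)                      # torch.Size([2, 10, 768])
```

Because the frozen weight and the low-rank update are simply added, the update can be merged back into W after training, so inference adds no extra latency.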

Microsoft Researchers Propose A New AI Method That Uses Both Forward And Backward Language Models To Meet In The Middle And Improve The Training Data Efficiency

Language models (LMs) have been extensively utilized for various aided writing activities, including text summarization, code completion, and paraphrasing. LMs are effective tools for generating both natural and programming languages. To be useful in a wide range of applications, most LMs must be able to generate the next token from the sequence of earlier tokens. Due to the significance of this operation, pretraining has concentrated on minimizing the model’s perplexity in predicting the next token given the preceding tokens. However, the training sequences contain extra information that is not being used during pretraining.

For instance, while training the model to predict one token, the standard objective conditions only on the prefix (the prior tokens) and entirely disregards the suffix (the following tokens). There are alternative ways to include the suffix in pretraining that have yet to be explored in the literature, even though it cannot be used as an input to the model. The researchers want to increase the pretraining data’s usefulness while maintaining the underlying LM’s autoregressive properties. Their strategy calls for additional modeling, which at first glance could appear unnecessary. After all, an autoregressive left-to-right LM is the primary artifact created during pretraining, and the pretraining objective closely resembles how the LM is used.

Yet there are two reasons to explore different training objectives. The first is data efficiency. The LM produces a probability distribution over all possible next tokens, but it is supervised with only the single actual next token from the training set, a sparse and inexpensive signal. What if a denser kind of supervision were used during training, where the predicted next-token distribution is compared against another full probability distribution rather than a single token? The second reason relates to other related tasks. For instance, in many real-world settings the user may prefer to fill in or edit an existing sequence of tokens rather than creating text entirely from scratch.
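
As a toy illustration of what denser supervision could look like, the sketch below trains a forward and a backward language model on the same targets and adds a symmetric KL term that pushes their full next-token distributions to agree. It is a simplified stand-in for the idea described above, not the paper’s exact “Meet in the Middle” objective, and the shapes and inputs are placeholders.

```python
# Toy sketch: dense agreement signal between a forward LM and a backward LM,
# in addition to the usual sparse next-token cross-entropy. Not the paper's
# exact objective; the tensors below only exercise the shapes.
import torch
import torch.nn.functional as F

def agreement_loss(fwd_logits, bwd_logits, targets, lam=0.5):
    """fwd_logits, bwd_logits: (batch, seq, vocab) predictions for the same
    target positions; targets: (batch, seq) token ids."""
    # Sparse signal: standard cross-entropy against the single true token.
    ce_fwd = F.cross_entropy(fwd_logits.transpose(1, 2), targets)
    ce_bwd = F.cross_entropy(bwd_logits.transpose(1, 2), targets)

    # Dense signal: symmetric KL between the two full distributions.
    log_p = F.log_softmax(fwd_logits, dim=-1)
    log_q = F.log_softmax(bwd_logits, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")

    return ce_fwd + ce_bwd + lam * (kl_pq + kl_qp)

# In real use the two logit tensors would come from a left-to-right decoder
# and a right-to-left decoder reading the same sequence.
B, T, V = 2, 16, 100
fwd = torch.randn(B, T, V, requires_grad=True)
bwd = torch.randn(B, T, V, requires_grad=True)
loss = agreement_loss(fwd, bwd, torch.randint(0, V, (B, T)))
loss.backward()
```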

Books: Hundreds of books created by artificial intelligence (AI) tool ChatGPT are flooding Amazon, showing the way the technology can be adopted to produce books at scale


Nearly 300 titles that claim to be written solely by or in collaboration with ChatGPT are listed on the online bookseller’s website, across a range of genres including non-fiction, fantasy and self-help.

Many of the books appear to be published using Amazon’s Kindle Direct Publishing tool, which allows users to quickly create, publish and promote their work using a modern-day equivalent of the self-publishing model.

OpenAI CEO cautions AI like ChatGPT could cause disinformation, cyber-attacks

Society has a limited amount of time “to figure out how to react” and “regulate” AI, says Sam Altman.

OpenAI CEO Sam Altman has cautioned that his company’s artificial intelligence technology, ChatGPT, poses serious risks as it reshapes society.

He emphasized that regulators and society must be involved with the technology, according to an interview aired by ABC News on Thursday night.



Artificial leaf can produce 40 volts of electricity from wind or rain

This process of harvesting energy from rain is new.

Researchers in Italy have engineered an artificial leaf that can be embedded within plants to create electricity from raindrops or wind. It functions extremely well under rainy or windy conditions to light up LED lights and power itself, according to a report by IEEE Spectrum published on Wednesday.

Fabian Meder, a researcher studying bioinspired soft robotics at the Italian Institute of Technology (IIT) in Genoa, Italy, told the science news outlet that the system could be practical for agricultural applications and remote environmental monitoring in order to observe plant health or monitor climate conditions.



COQUI: A Generative AI Speech Innovation Will Revolutionize This Market

Since the recent announcements of OpenAI’s ChatGPT, Google’s Bard, and Baidu’s Ernie Bot, the industry has been in a frenzy advancing generative AI products and solutions. Brainy Insights estimates that the generative AI market will grow from USD 8.65 billion in 2022 to USD 188.62 billion by 2032. This translates to over 36% CAGR, making generative AI one of the next hottest areas for elevating AI innovation. The software segment accounted for the highest revenue share, 65.0%, in 2021 and is expected to retain its position over the forecast period.
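
As a quick sanity check on those figures, compounding the 2022 market size at the cited growth rate for ten years lands on the 2032 projection (a small Python check; the 36.1% rate is the CAGR implied by the numbers above):

```python
# Sanity check: $8.65B compounded at ~36.1% per year for ten years.
start_2022 = 8.65   # USD billions
cagr = 0.361        # ~36% CAGR cited above
projection_2032 = start_2022 * (1 + cagr) ** 10
print(round(projection_2032, 2))  # ~188.6, matching the USD 188.62 billion estimate
```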

What is Generative AI?


Generative AI is a form of AI that produces various types of content, including text, imagery, audio, and synthetic data. The recent buzz around generative AI has been driven by the simplicity of new user interfaces for creating high-quality text, graphics, and videos in a matter of seconds. Although not a new technology, the introduction of generative adversarial networks (GANs), a type of machine learning algorithm, advanced innovation in this form of AI.

COQUI — Generative AI will Revolutionize Voice

The exciting news is that former Mozillians have just raised $3.3M for Coqui, generative AI speech synthesis for all creatives. Prior to founding Coqui, CEO Kelly Davis led the Mozilla Machine Learning Group, which focused on speech technology. Before that, he worked at the Max Planck Institute for Gravitational Physics and did his Ph.D. work in superstring theory.