


In 1997, IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov. In 2016, Google’s AlphaGo defeated one of the world’s top Go players in a five-game match. Today, OpenAI released GPT-4, which it claims outperforms 90% of humans who take the bar exam to become a lawyer, and 99% of students who compete in the Biology Olympiad, an international competition that tests the knowledge and skills of high school students in the field of biology.

In fact, it scores in the top ranks for at least 34 different tests of ability in fields as diverse as macroeconomics, writing, math, and — yes — vinology.

“GPT-4 exhibits human-level performance on the majority of these professional and academic exams,” says OpenAI.

Studying Our Ocean’s History to Understand Its Future — Dr. Emily Osborne, Ocean Chemistry & Ecosystems Division, National Oceanic and Atmospheric Administration (NOAA)


Dr. Emily Osborne (https://www.aoml.noaa.gov/people/emily-osborne/) is a research scientist in the Ocean Chemistry and Ecosystems Division at the Atlantic Oceanographic and Meteorological Laboratory.

The Atlantic Oceanographic and Meteorological Laboratory (AOML), a federal research laboratory, is part of the National Oceanic and Atmospheric Administration’s (NOAA) Office of Oceanic and Atmospheric Research (OAR), located in Miami in the United States. AOML’s research spans tropical cyclones and hurricanes, coastal ecosystems, oceans and human health, climate studies, global carbon systems, and ocean observations. It is one of ten NOAA Research Laboratories.

With a B.S. in Geology from the College of Charleston and a Ph.D. in Marine Science from the University of South Carolina, Dr. Osborne currently investigates regional and global biogeochemical issues related to ocean health and climate, using a combination of paleoceanographic approaches, new autonomous sensors, and conventional measurements taken on large multi-disciplinary oceanographic cruises.

Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology, and patterns of sedimentation and biological productivity. Paleoceanographic studies using environmental models and different proxies enable the scientific community to assess the role of oceanic processes in the global climate through the reconstruction of past climate at various intervals.

😗 I am actually pretty happy about this, because full automation will simplify life: rather than needing as much education, the AI can do most of the work, much like the Star Trek computer. Full automation will allow for more freedom even from common tasks, letting the AI do most of the thinking and tasks.


A senior developer tested GPT-4 on a programming task. GPT-4 produced the Terraform script for a single instance of a Fargate API. GPT-4 recognized that the code would not scale to 10,000 requests per second. It then described how to create an auto-scaling group, modify the code to scale on AWS, and configure the application load balancer.
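To make the scaling step concrete, here is a minimal Terraform sketch of the kind of configuration GPT-4 reportedly described: an ECS Fargate service scaled by a target-tracking policy on load balancer request count. The resource names and numbers are illustrative assumptions, not the developer’s actual code; they assume an ECS cluster, service, ALB, and target group are defined elsewhere.

```hcl
# Auto-scaling target for the (assumed) Fargate service.
resource "aws_appautoscaling_target" "api" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.api.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 50
}

# Scale on requests per target, as measured by the application load balancer.
resource "aws_appautoscaling_policy" "api_requests" {
  name               = "scale-on-alb-requests"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.api.service_namespace
  resource_id        = aws_appautoscaling_target.api.resource_id
  scalable_dimension = aws_appautoscaling_target.api.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ALBRequestCountPerTarget"
      resource_label         = "${aws_lb.api.arn_suffix}/${aws_lb_target_group.api.arn_suffix}"
    }
    # Illustrative: 50 tasks x 200 requests per target ≈ 10,000 requests/second.
    target_value = 200
  }
}
```

The point of the anecdote is that GPT-4 could propose this kind of architecture change unprompted; verifying that the numbers and wiring are right still falls to the developer.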

NOTE: His prompt was far more detailed than an ordinary person would produce, and an ordinary person would not be able to verify the results either. You can make the case for 10x or 100x programmer productivity: a senior developer can become a programming lead or manager, guiding AI prompt requests equivalent to the output of multiple programming teams.

The advantage will not be letting people who do not know a topic play with powerful tools. The advantage is increasing the productivity and capacity of competent people to do more in areas they understand. AI tools will uplevel productivity in areas where you know what can and should be done. You do not want someone who does not know how to drive behind the wheel of a Formula One race car.

AI technologies invented by scientists at the University of British Columbia and B.C. Cancer have succeeded in discovering a previously unknown treatment pathway for an aggressive form of liver cancer, designing a new drug to treat it in the process.

The team also deployed AI to determine a patient’s life expectancy, by having it analyze doctors’ notes. The AI reportedly has an 80 percent accuracy rate in its predictions.

The medical advances came about thanks to AlphaFold, DeepMind’s AI system for predicting protein structures, whose predictions can be used to design potential medicines. The team’s work focused on hepatocellular carcinoma (HCC), a common and aggressive form of liver cancer.

“I feel like a child who lost a parent in a shopping mall; please give me back my precious ChatGPT,” said one user.

ChatGPT, a viral chatbot from OpenAI, stopped working Monday (20 March), with user complaints pouring in around 4:09 AM EDT (8:09 AM GMT), according to Downdetector, a website that tracks outages.

“Literally two minutes after paid subscription. Not cool,” one user complained.

The rise of artificial general intelligence — now seen as inevitable in Silicon Valley — will bring change that is “orders of magnitude” greater than anything the world has yet seen, observers say. But are we ready?

AGI — defined as artificial intelligence with human cognitive abilities, as opposed to more narrow artificial intelligence, such as the headline-grabbing ChatGPT — could free people from menial tasks and usher in a new era of creativity.

But such a historic paradigm shift could also threaten jobs and raise insurmountable social issues, experts warn.

LoRA: Low-Rank Adaptation of Large Language Models

🚀 Introducing ChatLLaMA: Your Personal AI Assistant Powered by LoRA! 🤖

🌟 We’re excited to announce that you can now create custom personal assistants that run directly on your GPUs! ChatLLaMA utilizes LoRA, trained on Anthropic’s HH dataset, to model seamless convos between an AI assistant & users. Plus, the RLHF version of LoRA is coming soon! 🔥

📚 Know any high-quality dialogue-style datasets? Share them with us, and we’ll train ChatLLaMA on them!

🌐 ChatLLaMA is currently available for 30B and 13B models, with the 7B version coming soon.

🤔 Have questions or need help setting up ChatLLaMA? Join our Discord group & ask! Let’s revolutionize AI-assisted conversations together! 🌟

Disclaimer: — trained for research, — no foundation model weights, — the post was run through GPT-4 to make it more coherent.
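For readers unfamiliar with the technique behind ChatLLaMA: LoRA freezes the pretrained weight matrix and learns only a low-rank update, W + (α/r)·BA, which is why a personal assistant can be fine-tuned on a single GPU. A minimal numpy sketch of the idea, with all sizes and names purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4  # layer dimensions and LoRA rank (r << d)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha=8.0):
    """Adapted layer: h = W x + (alpha / r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialized to zero, the adapted layer exactly matches the frozen layer,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B), W @ x)

# Only r * (d_in + d_out) parameters are trained, versus d_in * d_out frozen ones.
trainable = A.size + B.size
print(trainable, W.size)  # 512 vs 4096
```

During training only A and B receive gradients; at inference the update BA can be merged back into W, so LoRA adds no latency.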

Language models (LMs) have been extensively utilized for various aided writing activities, including text summarization, code completion, and paraphrasing, and are effective tools for generating both natural and programming languages. To be useful across this wide range of applications, most LMs must predict the next token from the sequence of preceding tokens. Due to the significance of this operation, pretraining has concentrated on improving the model’s perplexity in predicting the next token given the preceding tokens. However, the training data contains extra information that pretraining does not use.

For instance, when training the model to predict one token, they condition only on the prefix (the prior tokens) and entirely disregard the following tokens (the suffix). Although the suffix cannot be used as an input to the model, there are alternative ways to include it in pretraining that have yet to be explored in the literature. The authors want to increase the usefulness of the pretraining data while maintaining the underlying LM’s autoregressive properties. Their strategy calls for additional modeling, which at first glance could appear unnecessary: after all, an autoregressive left-to-right LM is the primary artifact created during pretraining, and the pretraining objective closely resembles how the LM is used.
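One established way to condition on a suffix without breaking left-to-right generation — not necessarily the approach this paper takes — is a fill-in-the-middle-style data transformation: move the middle span to the end of the document so that predicting it conditions on both prefix and suffix. A sketch, where the sentinel token names are illustrative assumptions:

```python
def fim_transform(tokens, split_points,
                  prefix_tok="<PRE>", suffix_tok="<SUF>", middle_tok="<MID>"):
    """Reorder a token sequence so a left-to-right LM learns to fill in the middle.

    The middle span is moved to the end; when the model predicts it, the prefix
    AND the suffix are already in context. Sentinel names are illustrative.
    """
    i, j = split_points
    prefix, middle, suffix = tokens[:i], tokens[i:j], tokens[j:]
    return [prefix_tok, *prefix, suffix_tok, *suffix, middle_tok, *middle]

doc = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
print(fim_transform(doc, (8, 12)))
```

The transformed sequence is still trained with the ordinary next-token objective, which is why the autoregressive property of the LM is preserved.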

Yet there are two reasons to explore different training objectives. The first is data efficiency. The LM is trained with a sparse, inexpensive signal: it generates a probability distribution over all potential next-token choices, but it is supervised using only the single actual next token from the training set. What if a richer form of supervision were used during training, in which the predicted next-token distribution is compared to a full target probability distribution? The second reason relates to connected downstream tasks. In many real-world settings, for instance, the user may prefer to fill in or edit an existing sequence of tokens rather than creating text entirely from scratch.
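The “richer supervision” idea can be made concrete as the difference between one-hot cross-entropy (supervision by a single observed token) and cross-entropy against a full target distribution, as in knowledge distillation. This is a generic illustration of that contrast, not the paper’s exact objective:

```python
import numpy as np

def softmax(logits):
    p = np.exp(logits - logits.max())
    return p / p.sum()

def loss_one_hot(logits, target_id):
    """Standard next-token loss: supervised by the single observed token."""
    return -np.log(softmax(logits)[target_id])

def loss_soft(logits, target_dist):
    """Richer supervision: compare against a full target distribution over tokens."""
    return -(target_dist * np.log(softmax(logits))).sum()

logits = np.array([2.0, 1.0, 0.1])

# A one-hot target distribution recovers the standard loss as a special case,
# so distribution-matching strictly generalizes next-token supervision.
one_hot = np.array([1.0, 0.0, 0.0])
assert np.isclose(loss_soft(logits, one_hot), loss_one_hot(logits, 0))
```

A soft target spreads probability mass over plausible next tokens, giving the model a denser learning signal per training position than the single observed token.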