Archive for the ‘robotics/AI’ category: Page 459

Mar 15, 2023

Karl Friston — World Renowned Researcher — Joins Verses Technologies as Chief Scientist

Posted by in categories: mathematics, physics, robotics/AI

He was ranked the number one most influential neuroscientist in the world by Semantic Scholar in 2016 and has received numerous awards and accolades for his work. His appointment as chief scientist not only validates Verses’ framework for advancing AI implementations but also highlights the company’s commitment to expanding the frontier of AI research and development.

Friston has been shortlisted for a Nobel Prize, is one of the most cited scientists in history with over 260,000 academic citations, and developed statistical parametric mapping, the mathematical framework behind most fMRI analysis. As one pundit put it, “what Einstein was to physics, Friston is to Intelligence.”

Indeed, Friston’s expertise will be invaluable in helping the company execute its vision of deploying a wide range of technologies working toward a smarter world through AI.

Mar 15, 2023

Researchers From Stanford And DeepMind Come Up With The Idea of Using Large Language Models (LLMs) as a Proxy Reward Function

Posted by in categories: cybercrime/malcode, internet, robotics/AI

As computing power and data availability grow, autonomous agents are becoming more capable. This makes it all the more important that humans have some say over the policies agents learn and can verify that those policies align with their goals.

Currently, users either 1) hand-design reward functions for the desired behavior or 2) provide extensive labeled data. Both strategies are difficult to carry out in practice. Designing reward functions that balance competing goals is hard, and agents are vulnerable to reward hacking. Alternatively, a reward function can be learned from annotated examples, but capturing the subtleties of individual users’ tastes and objectives requires enormous amounts of labeled data, which is expensive to collect. Moreover, the reward function must be redesigned, or the dataset re-collected, for each new user population with different goals.

New research by Stanford University and DeepMind aims to design a system that makes it simpler for users to share their preferences, with an interface that is more natural than writing a reward function and a cost-effective approach to define those preferences using only a few instances. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at learning in context with no or very few training examples. According to the researchers, LLMs are excellent contextual learners because they have been trained on a large enough dataset to incorporate important commonsense priors about human behavior.
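
The core mechanism is simple enough to sketch: describe the user’s objective in natural language, show the LLM a few example judgments, and ask it to judge a new episode, using its answer as the reward. Below is a minimal, hedged illustration of that idea, not the paper’s actual code; `query_llm` is a hypothetical placeholder stubbed out so the example runs.

```python
# Minimal sketch (not the paper's code) of an LLM acting as a proxy reward
# function. `query_llm` is a hypothetical placeholder for any text-completion
# API; here it is stubbed so the example runs end to end.

def query_llm(prompt: str) -> str:
    """Stub standing in for a call to a large language model."""
    return "yes"  # a real system would send `prompt` to an LLM provider


def proxy_reward(objective: str, examples: list[tuple[str, str]],
                 episode_summary: str) -> float:
    """Score an agent's behavior against a natural-language objective.

    `examples` holds a handful of (behavior description, "yes"/"no") pairs
    expressing the user's preferences in context.
    """
    shots = "\n".join(
        f"Behavior: {desc}\nSatisfies the objective? {label}"
        for desc, label in examples
    )
    prompt = (
        f"Objective: {objective}\n\n{shots}\n\n"
        f"Behavior: {episode_summary}\nSatisfies the objective?"
    )
    answer = query_llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0  # binary reward for the agent


reward = proxy_reward(
    "Negotiate politely and never exceed the buyer's budget.",
    [("Agent offered a discount and stayed under budget.", "yes"),
     ("Agent pressured the buyer into overspending.", "no")],
    "Agent matched the buyer's budget and closed the deal courteously.",
)
print(reward)
```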

Mar 15, 2023

Microsoft’s latest layoffs could be the beginning of the end for ‘ethical AI’

Posted by in categories: ethics, robotics/AI

Microsoft’s latest layoffs throw ethics out the window, and we should all be worried.

Mar 15, 2023

AI Might Be Seemingly Everywhere, but There Are Still Plenty of Things It Can’t Do—For Now

Posted by in categories: internet, robotics/AI

These days, we don’t have to wait long until the next breakthrough in artificial intelligence impresses everyone with capabilities that previously belonged only in science fiction.

In 2022, AI art generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, with users generating high-quality images from text descriptions.


Mar 15, 2023

Google AI just announced the PaLM API!

Posted by in category: robotics/AI

It will be released alongside a new tool called MakerSuite, which lets you prototype ideas, do prompt engineering, generate synthetic data, and tune custom models. A waitlist will be available soon.
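
As a hedged illustration of one of those workflows, synthetic data generation, here is a minimal sketch that prompts a text model with a few seed examples and parses new labeled pairs from its output. `generate_text` is a hypothetical stand-in and not the actual PaLM API surface, which the post does not describe.

```python
# Hedged sketch of a "synthetic data generation" workflow: prompt a text model
# with a few seed (message, label) pairs and parse new pairs from its output.
# `generate_text` is a hypothetical stand-in, stubbed so the example runs.

def generate_text(prompt: str, temperature: float = 0.7) -> str:
    """Stub for a hosted text-generation call; returns a canned line."""
    return "I was charged twice for my order. -> billing"


SEED_EXAMPLES = [
    ("Where is my package?", "shipping"),
    ("The app crashes on login.", "technical"),
]


def synthesize_examples(n: int) -> list[tuple[str, str]]:
    """Ask the model to imitate the seed pairs and emit new labeled pairs."""
    shots = "\n".join(f"{text} -> {label}" for text, label in SEED_EXAMPLES)
    prompt = ("Write one new customer-support message and its category, "
              f"using the same 'message -> label' format:\n{shots}\n")
    pairs = []
    for _ in range(n):
        line = generate_text(prompt)
        if "->" in line:
            text, label = line.rsplit("->", 1)
            pairs.append((text.strip(), label.strip()))
    return pairs


print(synthesize_examples(3))
```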

Mar 15, 2023

Now Microsoft has a new AI model

Posted by in category: robotics/AI

Microsoft’s Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT’s text prompts.

Microsoft has unveiled Kosmos-1, which it describes as a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, and can be used for an array of tasks, including image captioning, visual question answering, and more.

OpenAI’s ChatGPT has helped popularize the concept of LLMs, such as the GPT (Generative Pre-trained Transformer) model, and the possibility of transforming a text prompt or input into an output.
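
While the post gives no implementation details, a common recipe behind multimodal LLMs of this kind is to project image features into the language model’s token-embedding space and feed them alongside the text tokens. The sketch below is a toy conceptual illustration of that recipe, not Kosmos-1’s actual architecture.

```python
# Conceptual sketch, not Kosmos-1: project image features into the language
# model's token-embedding space and feed them alongside the text tokens.
import torch
import torch.nn as nn


class TinyMultimodalLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, image_feat_dim=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.img_proj = nn.Linear(image_feat_dim, d_model)    # image -> token space
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, text_ids):
        # image_feats: (batch, n_patches, image_feat_dim); text_ids: (batch, seq_len)
        img_tokens = self.img_proj(image_feats)                # treat patches as "tokens"
        txt_tokens = self.tok_emb(text_ids)
        sequence = torch.cat([img_tokens, txt_tokens], dim=1)  # image prefix, then text
        hidden = self.backbone(sequence)
        return self.lm_head(hidden[:, img_tokens.size(1):])   # logits for text positions


model = TinyMultimodalLM()
logits = model(torch.randn(1, 4, 512), torch.randint(0, 1000, (1, 8)))
print(logits.shape)  # torch.Size([1, 8, 1000])
```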

Mar 15, 2023

Morgan Stanley is testing an OpenAI-powered chatbot for its 16,000 financial advisors

Posted by in categories: finance, robotics/AI

The bank has been testing the artificial intelligence tool with 300 advisors and plans to roll it out widely in the coming months, according to Jeff McMillan, head of analytics, data and innovation at the firm’s wealth management division.

Morgan Stanley’s move is one of the first announcements by a financial incumbent after the success of OpenAI’s ChatGPT, which went viral late last year by generating human-sounding responses to questions. The bank is a juggernaut in wealth management with more than $4.2 trillion in client assets. The promise and perils of artificial intelligence have been written about for years, but seemingly only after ChatGPT did mainstream users understand the ramifications of the technology.

The idea behind the tool, which has been in development for the past year, is to help the bank’s 16,000 or so advisors tap the bank’s enormous repository of research and data, said McMillan.
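
The article does not describe the system’s architecture, but a common pattern for letting a chatbot “tap” a large document repository is retrieval-augmented generation: embed the documents, retrieve the passages most relevant to an advisor’s question, and include them in the model’s prompt. A minimal sketch of that pattern follows; `embed` and `query_llm` are hypothetical placeholders stubbed out so the example runs.

```python
# Hedged sketch of retrieval-augmented question answering; this is not
# Morgan Stanley's actual system. `embed` and `query_llm` are placeholders.
import numpy as np


def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)


def query_llm(prompt: str) -> str:
    return "Stubbed answer grounded in the retrieved research notes."


DOCS = [
    "Research note on municipal bonds ...",
    "Equity outlook Q1 ...",
    "Retirement planning FAQ ...",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])


def answer(question: str, k: int = 2) -> str:
    q = embed(question)
    # Cosine similarity between the question and every document.
    scores = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCS[i] for i in np.argsort(-scores)[:k])
    return query_llm(f"Answer using only this research:\n{context}\n\nQuestion: {question}")


print(answer("What is the outlook for municipal bonds?"))
```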

Mar 15, 2023

Unlocking the Secrets of Deep Learning with Tensorleap’s Explainability Platform

Posted by in categories: futurism, robotics/AI

Deep Learning (DL) advances have cleared the way for intriguing new applications and are influencing the future of Artificial Intelligence (AI) technology. However, a typical concern with DL models is their explainability: experts commonly agree that Neural Networks (NNs) function as black boxes. We do not know precisely what happens inside; we only know that the input is processed in some way and an output comes out. For this reason, DL models can be difficult to understand or interpret, and it can be challenging to work out why a model makes certain predictions or how to improve it.

This article will introduce and emphasize the importance of NN explainability, provide insights into how to achieve it, and suggest tools that could improve your DL model’s performance.
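
As a concrete taste of what “explainability” can mean in practice, here is a minimal example of one widely used technique, gradient saliency, which attributes a prediction to input features by differentiating the predicted class score with respect to the input. It is a generic illustration, not Tensorleap’s platform.

```python
# Minimal example of one explainability technique: gradient saliency.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(1, 10, requires_grad=True)   # one input with 10 features

logits = model(x)
top_class = int(logits.argmax(dim=1))
logits[0, top_class].backward()              # d(score of predicted class) / d(input)

saliency = x.grad.abs().squeeze()            # larger magnitude = more influential feature
print(saliency)
```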

Mar 15, 2023

Top 7 AI Examples In Healthcare — The Medical Futurist

Posted by in categories: biotech/medical, robotics/AI

Artificial intelligence is no longer a futuristic idea. It’s already here, and it has turned out to be a powerful, disruptive force in healthcare, fueling some of the most innovative diagnostic tools of today.

Let’s take a look at 7 examples where AI has started to transform healthcare!

Mar 14, 2023

GPT-4

Posted by in categories: robotics/AI, supercomputing

It’s up!

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%. We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first “test run” of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.
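
As an illustration of the kind of extrapolation described above, one common approach is to fit a power law in training compute to the final losses of smaller runs and extrapolate it to the planned large run. The sketch below uses invented numbers and is not OpenAI’s data or method.

```python
# Illustrative sketch: fit a power law L(C) = a * C**(-b) + c to losses from
# smaller training runs and predict the loss of a much larger run.
# All numbers below are synthetic, not OpenAI's data.
import numpy as np
from scipy.optimize import curve_fit

compute = np.array([1e18, 1e19, 1e20, 1e21])   # training compute in FLOPs (synthetic)
loss = np.array([3.10, 2.75, 2.48, 2.27])      # final training losses (synthetic)


def power_law(c, a, b, irreducible):
    return a * c ** (-b) + irreducible


params, _ = curve_fit(power_law, compute, loss, p0=[100.0, 0.1, 1.5], maxfev=10000)
a, b, irreducible = params
print(f"fit: a={a:.3g}, b={b:.3g}, irreducible={irreducible:.3g}")
print(f"predicted loss at 1e23 FLOPs: {power_law(1e23, *params):.2f}")
```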

