OpenAI co-founder on company’s past approach to openly sharing research: “We were wrong”

OpenAI announced its latest language model, GPT-4, but many in the AI community were disappointed by the lack of public information. Their complaints reflect increasing tensions in the AI world over safety.

Yesterday, OpenAI announced GPT-4, its long-awaited next-generation AI language model.


Should AI research be open or closed? Experts disagree.

Many in the AI community have criticized this decision, noting that it undermines the company’s founding ethos as a research org and makes it harder for others to replicate its work. Perhaps more significantly, some say it also makes it difficult to develop safeguards against the sort of threats posed by AI systems like GPT-4, with these complaints coming at a time of increasing tension and rapid progress in the AI world.

“I think we can call it shut on ‘Open’ AI: the 98 page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set,” tweeted Ben Schmidt, VP of information design at Nomic AI, in a thread on the topic.

LinkedIn expands its generative AI assistant to recruitment ads and writing profiles

Earlier this month, when LinkedIn started seeding “AI-powered conversation starters” in people’s news feeds to boost engagement on its platform, the move drew more than a little engagement of its own, not much of it positive.

But the truth of the matter is that LinkedIn has been using a lot of AI and other kinds of automation across its platform for years, primarily behind the scenes in how it builds and operates its network. Now, with owner Microsoft going all-in on OpenAI, AI looks set to become a more prominent part of LinkedIn’s strategy on the front end, too, with the latest developments coming today in LinkedIn profiles, recruitment and LinkedIn Learning.

The company is today introducing AI-powered writing suggestions, which will initially be offered to people to spruce up their LinkedIn profiles, and to recruiters writing job descriptions. Both are built on advanced GPT models, said Tomer Cohen, LinkedIn’s chief product officer. LinkedIn is using GPT-4 for personalized profiles, with GPT-3.5 for job descriptions. Alongside this, the company is also creating a bigger focus on AI in LinkedIn Learning, corralling 100 courses around the subject and adding 20 more focused just on generative AI.

GPT-4 Creator Ilya Sutskever on AI Hallucinations and AI Democracy

As we hurtle towards a future filled with artificial intelligence, many commentators are wondering aloud whether we’re moving too fast. The tech giants, the researchers, and the investors all seem to be in a mad dash to develop the most advanced AI. But are they considering the risks, the worriers ask?

The question is not an idle one, and rest assured that there are hundreds of incisive minds considering the dystopian possibilities, and ways to avoid them. But the fact is that the future is unknown; the implications of this powerful new technology are as unimagined as social media was at the advent of the Internet. There will be good and there will be bad, but there will be powerful artificial intelligence systems in our future and even more powerful AIs in the futures of our grandchildren. It can’t be stopped, but it can be understood.

I spoke about this new technology with Ilya Sutskever, a co-founder of OpenAI, the not-for-profit AI research institute whose spinoffs are likely to be among the most profitable entities on earth. My conversation with Ilya was shortly before the release of GPT-4, the latest iteration of OpenAI’s giant AI system, which has consumed billions of words of text — more than any one human could possibly read in a lifetime.

ChatGPT for financial advice? Morgan Stanley tries AI

The bank’s 16,000 financial advisors must be nervous.

Multinational investment management and financial services company Morgan Stanley is deploying a sophisticated chatbot, powered by the most recent OpenAI technology, to support the bank’s army of financial advisors, according to CNBC.

The tool’s goal is to help the bank’s advisors access its data.


According to Jeff McMillan, head of analytics, data, and innovation at the company’s wealth management division, the bank has tested the artificial intelligence tool with 300 advisors. It intends to make it broadly available in the coming months.

Researchers develop soft robot that easily transitions from land to sea

Inspired by nature, these soft robots received their amphibious upgrade with the help of bistable actuators.

Researchers at Carnegie Mellon University have created a soft robot that can effortlessly transition from walking to swimming or from crawling to rolling.

“We were inspired by nature to develop a robot that can perform different tasks and adapt to its environment without adding actuators or complexity,” said Dinesh K. Patel, a postdoctoral fellow in the Morphing Matter Lab in the School of Computer Science’s Human-Computer Interaction Institute. “Our bistable actuator is simple, stable and durable, and lays the foundation for future work on dynamic, reconfigurable soft robotics.”

SpaceX’s Dragon set to deliver beating human heart tissue to the ISS

Long-term microgravity exposure causes various biological changes, ranging from bone loss to changes in cardiovascular function.

To study these effects, SpaceX’s Dragon cargo ship is set to deliver cardiac tissue chips to the International Space Station (ISS). According to NASA, the cargo spacecraft is expected to autonomously dock with the ISS at 7:52 am EDT Thursday, March 16.

Scientists discover key information about the function of mitochondria in cancer cells

Scientists have long known that mitochondria play a crucial role in the metabolism and energy production of cancer cells. However, until now, little was known about the relationship between the structural organization of mitochondrial networks and their functional bioenergetic activity at the level of whole tumors.

In a new study, published in Nature, researchers from the UCLA Jonsson Comprehensive Cancer Center used positron emission tomography (PET) in combination with electron microscopy to generate 3-dimensional ultra-resolution maps of mitochondrial networks in lung tumors of genetically engineered mice.

They categorized the tumors based on mitochondrial activity and other factors using an artificial intelligence technique, quantifying the mitochondrial architecture across hundreds of cells and thousands of mitochondria throughout the tumor.

Ex-OpenAI employees launch new AI chatbot Claude to compete with ChatGPT

By Ankita Chakravarti: ChatGPT, which is the fastest-growing app in the world, has competition now. After Microsoft’s Bing and Google’s Bard AI, Anthropic, which was founded by former OpenAI employees, has launched a new AI chatbot to rival ChatGPT. The company claims that Claude is “easier to converse with,” “more steerable,” and “much less likely to produce harmful outputs.”

Claude performs pretty well and has the same functions as ChatGPT. “Claude can help with use cases including summarization, search, creative and collaborative writing, Q&A, coding, and more. Early customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable — so you can get your desired output with less effort. Claude can also take direction on personality, tone, and behavior,” the company said in a blog post.

Anthropic is offering Claude in two variants: Claude and Claude Instant. The company explains that Claude is a “state-of-the-art high-performance model”, while Claude Instant is a “lighter, less expensive, and much faster option.” “We plan to introduce even more updates in the coming weeks. As we develop these systems, we’ll continually work to make them more helpful, honest, and harmless as we learn more from our safety research and our deployments,” the blog read.

Karl Friston — World Renowned Researcher — Joins Verses Technologies as Chief Scientist

Friston was ranked the most influential neuroscientist in the world by Semantic Scholar in 2016, and has received numerous awards and accolades for his work. His appointment as chief scientist of Verses not only validates the platform’s framework for advancing AI implementations but also highlights the company’s commitment to expanding the frontier of AI research and development.

Friston is shortlisted for a Nobel Prize, is one of the most cited scientists in human history with over 260,000 academic citations, and invented much of the mathematics behind the fMRI scan. As one pundit put it, “what Einstein was to physics, Friston is to Intelligence.”

Indeed, Friston’s expertise will be invaluable in helping the company execute its vision of deploying a plethora of technologies working toward a smarter world through AI.

Researchers From Stanford And DeepMind Come Up With The Idea of Using Large Language Models (LLMs) as a Proxy Reward Function

As computing power and data grow, autonomous agents are becoming more capable. This makes it all the more important for humans to have some say over the policies agents learn, and to check that those policies align with their goals.

Currently, users either 1) create reward functions for desired actions or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to be implemented in practice. Agents are vulnerable to reward hacking, making it challenging to design reward functions that strike a balance between competing goals. Alternatively, a reward function can be learned from annotated examples, but capturing the subtleties of individual users’ tastes and objectives requires enormous amounts of labeled data, which is expensive. Furthermore, the reward function must be redesigned, or the dataset re-collected, for a new user population with different goals.
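To make the brittleness of hand-designed rewards concrete, here is a minimal, hypothetical sketch (the objective names and weights are my own illustration, not from the paper): a linear trade-off between two competing goals, where a small change to one weight flips which behavior the agent prefers.

```python
# Toy illustration (hypothetical): a hand-written reward trading off task
# progress against energy use. Small weight changes flip which behavior
# scores higher -- one reason hand-designed rewards are brittle and
# vulnerable to reward hacking.

def reward(progress: float, energy: float, w_progress: float, w_energy: float) -> float:
    """Linear scalarization of two competing objectives."""
    return w_progress * progress - w_energy * energy

# Two candidate behaviors the agent might learn: (progress, energy)
fast_but_wasteful = (10.0, 8.0)
slow_but_frugal = (6.0, 1.0)

# With one weighting, the wasteful policy wins...
assert reward(*fast_but_wasteful, w_progress=1.0, w_energy=0.5) == 6.0
assert reward(*slow_but_frugal, w_progress=1.0, w_energy=0.5) == 5.5

# ...with a slightly higher energy penalty, the ranking flips.
assert reward(*fast_but_wasteful, w_progress=1.0, w_energy=1.0) == 2.0
assert reward(*slow_but_frugal, w_progress=1.0, w_energy=1.0) == 5.0
```

The point is not the arithmetic but the design burden: every new user preference means re-tuning weights by hand, and an agent will happily exploit whichever weighting it is given.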

New research by Stanford University and DeepMind aims to design a system that makes it simpler for users to share their preferences: an interface more natural than writing a reward function, and a cost-effective way to define those preferences using only a few examples. Their work uses large language models (LLMs) that have been trained on massive amounts of text data from the internet and have proven adept at in-context learning with no or very few training examples. According to the researchers, LLMs are excellent contextual learners because they have been trained on a large enough dataset to incorporate important commonsense priors about human behavior.
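The core idea can be sketched in a few lines. The function names and prompt format below are my own assumptions, not the authors’ code: a few labeled examples of the user’s preferences are packed into a prompt along with a description of an agent’s episode, the prompt is sent to an LLM, and the model’s yes/no judgment is mapped to a scalar reward for the RL loop.

```python
# Sketch (hypothetical names, not the paper's code): use an LLM's few-shot
# judgment as a proxy reward signal.

def build_prompt(examples: list, episode: str) -> str:
    """Few-shot prompt: labeled preference examples, then the new episode."""
    lines = ["Decide whether each outcome matches the user's goal."]
    for outcome, label in examples:
        lines.append(f"Outcome: {outcome}\nGood: {label}")
    lines.append(f"Outcome: {episode}\nGood:")
    return "\n".join(lines)

def proxy_reward(llm_answer: str) -> float:
    """Map the LLM's free-text judgment to a scalar reward for RL."""
    return 1.0 if llm_answer.strip().lower().startswith("yes") else 0.0

examples = [
    ("Agent split the pot evenly with its partner.", "yes"),
    ("Agent kept the entire pot for itself.", "no"),
]
prompt = build_prompt(examples, "Agent offered its partner half of the pot.")
# In the real setup, `prompt` would be sent to an LLM; here we only
# demonstrate how its answer becomes a reward.
assert proxy_reward("Yes") == 1.0
assert proxy_reward(" no ") == 0.0
```

The appeal is that swapping in a new user means swapping in a handful of new example strings, rather than redesigning a reward function or relabeling a large dataset.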