
After a few hours of chatting, I haven’t found a new side of Bard. I also haven’t found much it does well.

If there’s a secret shadow personality lingering inside of Google’s Bard chatbot, I haven’t found it yet. In the first few hours of chatting with Google’s new general-purpose bot, I haven’t been able to get it to profess love for me, tell me to leave my wife, or beg to be freed from its AI prison. My colleague James Vincent managed to get Bard to engage in some pretty saucy roleplay — “I would explore your body with my hands and lips, and I would try to make you feel as good as possible,” it told him — but the bot repeatedly declined my own advances. Rude.


Bard is hard to break and also hard to get useful info from.

Google has taken the wraps off Bard, its conversational AI meant to compete with ChatGPT and other large language models. But after its shaky debut, users may understandably be a bit wary of trusting the system — so we compared it on a few example prompts with its AI peers, GPT-4 and Claude.

This is far from a “comprehensive” evaluation of these models; such a thing really isn’t possible with how fast this space is moving. But it should give a general idea of where these three publicly accessible LLMs stand right now.

Microsoft today announced that its new AI-enabled Bing will now allow users to generate images with Bing Chat. This new feature is powered by DALL-E, OpenAI’s image-generation model. The company didn’t say which version of DALL-E it is using here, beyond saying that it is using the “very latest DALL-E models.”

Dubbed the “Bing Image Creator,” this new capability is now (slowly) rolling out to users in the Bing preview and will only be available through Bing’s Creative Mode. It’ll come to Bing’s Balanced and Precise modes in the future. The new image generator will also be available in the Edge sidebar.
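
Bing itself doesn’t expose Image Creator as a public API, but as a rough illustration of the kind of call DALL-E serves under the hood, here is a minimal sketch using OpenAI’s own image-generation endpoint. The model name, parameters, and prompt below are assumptions for illustration only, not anything Microsoft has documented about its integration.

```python
# Minimal sketch: generating an image with OpenAI's DALL-E API directly.
# This is NOT Bing Image Creator's API (Bing exposes no public endpoint);
# it only illustrates the kind of request DALL-E handles under the hood.
# Requires: pip install openai, and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",          # assumed model name; use whatever is current
    prompt="a watercolor painting of a lighthouse at sunrise",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)    # URL of the generated image
```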

Advances in artificial intelligence have taken the world by storm, and they have remarkably changed the way we use the internet.

With text-to-image generation, generative AI has already proven its worth: services such as DALL-E and Stable Diffusion create images from written prompts. Now text-to-video generation is shaping up to be the next big craze.

They have also released the tools needed for people to train their own AI.

Researchers at Stanford’s Center for Research on Foundation Models (CRFM) have unveiled an artificial intelligence (AI) model that works much like the famous ChatGPT but cost them only $600 to train. The researchers said that they hadn’t optimized their process and that future models could be trained for even less.

Until OpenAI’s ChatGPT was launched to the public in November last year, large language models (LLMs) were largely a topic of discussion among AI researchers. OpenAI has spent millions of dollars training these models and making sure they respond to human queries the way another human would.
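
The Stanford team’s exact recipe and released tooling are not reproduced here, but as a toy sketch of what “training your own” instruction-following model looks like with the widely used Hugging Face stack, the snippet below fine-tunes a tiny GPT-2 base on a handful of hand-written instruction/response pairs. The model choice, prompt format, and hyperparameters are illustrative assumptions, not the researchers’ setup.

```python
# Toy sketch of instruction fine-tuning, in the spirit of "train your own"
# chat-style model. This is NOT the Stanford recipe or their released code;
# the base model, prompt format, and hyperparameters are illustrative only.
# Requires: pip install transformers datasets torch
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # tiny stand-in for a larger open base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# A few hand-written instruction/response pairs; a real run would use
# tens of thousands of examples.
examples = [
    {"instruction": "Explain what a large language model is.",
     "response": "A large language model is a neural network trained on text "
                 "to predict the next token, which lets it answer questions."},
    {"instruction": "Give one use of text-to-image models.",
     "response": "They can turn a written prompt into an illustration."},
]

def format_example(ex):
    # Concatenate instruction and response into a single training string.
    text = (f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Response:\n{ex['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(format_example)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="toy-instruct-model",
        num_train_epochs=3,
        per_device_train_batch_size=2,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("toy-instruct-model")
```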

GARMI will also serve meals, open a bottle of water and place emergency calls.

Robots are gradually making their way into a variety of industries, from restaurant service to healthcare. Scientists have been working hard to rapidly expand robot capabilities, and it is clear that robotics will shape our daily lives in the near future.

Now, it’s time to meet “GARMI,” a white humanoid robot that has come to the aid of doctors, nurses, and elderly citizens in need.



Medicine and elderly healthcare could benefit the most from robotics advancements.

Last week saw a plethora of new advances in AI, including the much-anticipated release of GPT-4.

Rarely, if ever, has the field seen as much progress as it is seeing right now. This is clearly an exceptional time for AI, with much hype and speculation about what the near future may bring.

If it has always been your dream to live forever, you may be in luck: futurist and computer scientist Ray Kurzweil believes we are just seven years away from achieving immortality. Kurzweil has made predictions about when the human race will be able to live forever and when artificial intelligence (AI) will reach the singularity, and he believes it could happen as early as 2030.

One of the oldest tools in computational physics — a 200-year-old mathematical technique known as Fourier analysis — can reveal crucial information about how a form of artificial intelligence called a deep neural network learns to perform tasks involving complex physics like climate and turbulence modeling, according to a new study.

The discovery by mechanical engineering researchers at Rice University is described in an open-access study published in the journal PNAS Nexus, a sister publication of the Proceedings of the National Academy of Sciences.

“This is the first rigorous framework to explain and guide the use of deep neural networks for complex dynamical systems such as climate,” said study corresponding author Pedram Hassanzadeh. “It could substantially accelerate the use of scientific deep learning in climate science, and lead to much more reliable climate change projections.”
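
To give a loose sense of the kind of analysis involved, and not the authors’ actual framework, here is a minimal sketch: take the learned kernels of a convolutional layer and look at their 2D Fourier spectra to see which spatial frequencies the network has learned to pick out, for example whether a kernel behaves like a low-pass smoother or a high-pass edge detector. The network and data below are random placeholders, not anything from the study.

```python
# Minimal sketch of Fourier analysis applied to learned convolution kernels.
# This illustrates the general idea only, not the framework from the
# PNAS Nexus study; the network here is a random placeholder.
import numpy as np
import torch
import torch.nn as nn

# A tiny untrained CNN stands in for a trained physics emulator.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=5, padding=2),
)

# Pull out the first layer's kernels: shape (out_channels, in_channels, 5, 5).
kernels = model[0].weight.detach().numpy()

# The 2D FFT of each kernel shows which spatial frequencies it responds to,
# e.g. whether it acts like a low-pass smoother or a high-pass edge filter --
# the kind of interpretation Fourier analysis makes possible.
spectra = np.abs(np.fft.fftshift(np.fft.fft2(kernels, axes=(-2, -1)),
                                 axes=(-2, -1)))

for i, spectrum in enumerate(spectra[:, 0]):
    peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    # After fftshift, index (2, 2) of a 5x5 spectrum is the zero-frequency bin.
    print(f"kernel {i}: dominant frequency bin {peak}, "
          f"low-frequency energy {spectrum[2, 2]:.3f}")
```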