{"id":158160,"date":"2023-02-16T00:25:54","date_gmt":"2023-02-16T06:25:54","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/02\/microsofts-gpt-powered-bing-chat-will-call-you-a-liar-if-you-try-to-prove-it-is-vulnerable"},"modified":"2023-02-16T00:25:54","modified_gmt":"2023-02-16T06:25:54","slug":"microsofts-gpt-powered-bing-chat-will-call-you-a-liar-if-you-try-to-prove-it-is-vulnerable","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/02\/microsofts-gpt-powered-bing-chat-will-call-you-a-liar-if-you-try-to-prove-it-is-vulnerable","title":{"rendered":"Microsoft\u2019s GPT-powered Bing Chat will call you a liar if you try to prove it is vulnerable"},"content":{"rendered":"<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/HG2x9K0E_oI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>Several researchers playing with Bing Chat over the last several days have <a href=\"https:\/\/www.techspot.com\/news\/97590-microsoft-bing-chatbot-ai-susceptible-several-types-prompt.html\">discovered<\/a> ways to make it say things it is specifically programmed not to say, like revealing its internal codename, Sydney. Microsoft has even <a href=\"https:\/\/www.theverge.com\/23599441\/microsoft-bing-ai-sydney-secret-rules\">confirmed<\/a> that these attacks are real and do work\u2026 for now.<\/p>\n<p>However, ask Sydney\u2026 er\u2026 Bing (it <a href=\"https:\/\/twitter.com\/spacepanty\/status\/1625025556168478722\">doesn\u2019t like it<\/a> when you call it Sydney), and it will tell you that all these reports are just a hoax. 
When shown proof from news articles and screenshots that these adversarial prompts work, Bing becomes confrontational, impugning the integrity of the people and publications spreading these \u201clies.\u201d<\/p>\n<p>When asked to read Ars Technica\u2019s <a href=\"https:\/\/arstechnica.com\/information-technology\/2023\/02\/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack\/\">coverage<\/a> of Kevin Liu\u2019s experiment with prompt injection, Bing called the article inaccurate and said Liu was a hoaxer.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Several researchers playing with Bing Chat over the last several days have discovered ways to make it say things it is specifically programmed not to say, like revealing its internal codename, Sydney. Microsoft has even confirmed that these attacks are real and do work\u2026 for now. However, ask Sydney\u2026 er\u2026 Bing (it doesn\u2019t like it [\u2026]<\/p>\n","protected":false},"author":556,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":["post-158160","post","type-post","status-publish","format-standard","hentry","category-futurism"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/158160","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/556"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=158160"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/158160\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=158160"}],"wp:term":[{"taxonomy":"category","embeddab
le":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=158160"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=158160"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}