{"id":157222,"date":"2023-02-07T13:31:45","date_gmt":"2023-02-07T19:31:45","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/02\/what-chatgpt-and-generative-ai-mean-for-science"},"modified":"2023-02-07T13:31:45","modified_gmt":"2023-02-07T19:31:45","slug":"what-chatgpt-and-generative-ai-mean-for-science","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/02\/what-chatgpt-and-generative-ai-mean-for-science","title":{"rendered":"What ChatGPT and generative AI mean for science"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/what-chatgpt-and-generative-ai-mean-for-science2.jpg\"><\/a><\/p>\n<p>Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. \u201cThere\u2019s loads of law out there,\u201d she says, \u201cand it\u2019s just a matter of applying it or tweaking it very slightly.\u201d<\/p>\n<p>At the same time, there is a push for LLM use to be transparently disclosed. Scholarly publishers (including the publisher of <i>Nature<\/i>) have said that <a href=\"https:\/\/www.theguardian.com\/science\/2023\/jan\/26\/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers\">scientists should disclose the use of LLMs in research papers<\/a> (see also <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00191-1\">Nature <b>613<\/b>, 612; 2023<\/a>); and teachers have said they expect similar behaviour from their students. The journal <i>Science<\/i> has gone further, saying that no text generated by ChatGPT or any other AI tool can be used in a paper<sup><a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00340-6#ref-CR5\">5<\/a><\/sup>.<\/p>\n<p>One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, with the central idea to use LLMs themselves to spot the output of AI-created text.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. 