{"id":154881,"date":"2023-01-08T06:23:07","date_gmt":"2023-01-08T12:23:07","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/01\/we-need-to-build-better-bias-in-ai"},"modified":"2023-01-08T06:23:07","modified_gmt":"2023-01-08T12:23:07","slug":"we-need-to-build-better-bias-in-ai","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/01\/we-need-to-build-better-bias-in-ai","title":{"rendered":"We need to build better bias in AI"},"content":{"rendered":"<p>At their best, AI systems extend and augment the work we do, helping us to realize our goals. At their worst, they undermine them. We\u2019ve all heard of high-profile instances of AI bias, like <a href=\"https:\/\/venturebeat.com\/ai\/aws-ml-engineering-manager-evaluates-the-social-impact-of-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Amazon\u2019s<\/a> machine learning (ML) recruitment engine that discriminated against women or the racist results from Google Vision. These cases don\u2019t just harm individuals; they work against their creators\u2019 original intentions. Quite rightly, these examples attracted public outcry and, as a result, shaped perceptions of AI bias into something that is categorically bad and that we need to eliminate.<\/p>\n<p>While most people agree on the need to build high-trust, fair AI systems, taking all bias out of AI is <a href=\"https:\/\/venturebeat.com\/ai\/the-dynamic-duo-strong-ui-and-bias-free-ai-technology\/\">unrealistic<\/a>. 
In fact, as the new wave of ML models goes beyond the deterministic, these systems are actively being designed with some level of subjectivity built in. Today\u2019s most sophisticated systems are synthesizing inputs, contextualizing content and interpreting results. Rather than trying to eliminate bias entirely, organizations should seek to understand and measure subjectivity better.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At their best, AI systems extend and augment the work we do, helping us to realize our goals. At their worst, they undermine them. We\u2019ve all heard of high-profile instances of AI bias, like Amazon\u2019s machine learning (ML) recruitment engine that discriminated against [\u2026]<\/p>\n","protected":false},"author":662,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,1492],"tags":[],"class_list":["post-154881","post","type-post","status-publish","format-standard","hentry","category-robotics-ai","category-security"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/154881","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=154881"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/154881\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=154881"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=154881"},{"taxonomy":"post_tag","
embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=154881"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}