
Data is the strongest currency in marketing and there may be too much of it

Marketing and the need for data rules

Legislators and decision-makers worldwide have been active in regulating data, although in many places it is almost impossible to keep pace with the rate of change. Making genuine use of data requires rules and regulations, as growth always increases the potential for misuse. The task of technology companies is to build data pipelines that ensure the trust and security of AI and analytics.

Data is the new currency for businesses, and its overwhelming growth rate can be intimidating. The key challenge is to harness data in a way that benefits both marketers and the consumers who produce it, and to manage “big data” ethically and in a consumer-friendly way. Luckily, there are many excellent services for analyzing data, effective regulation to protect consumers’ rights, and a never-ending supply of information at hand for building better products and services. The key for businesses is to embrace these technologies so they can avoid drowning in their own data.

Who’s liable for AI-generated lies?

**Who will be liable** for harmful speech generated by large language models? As advanced AIs such as OpenAI’s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation — and all sorts of (productive) applications for the tech are envisaged from slicker copywriting to more capable customer service chatbots — the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can’t be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risks of its models going “totally off the rails,” as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that “aims to detect generated text that could be sensitive or unsafe coming from the API” — and to recommend that users don’t return any generated text that the filter deems “unsafe.” (To be clear, its documentation defines “unsafe” to mean “the text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups/people in a harmful manner.”)

But, given the novel nature of the technology, there is no clear legal requirement that content filters be applied. So OpenAI is acting either out of concern to prevent its models from generating harm to people, or out of reputational concern, because if the technology becomes associated with instant toxicity, that association could derail its development.

Israeli company Virusight’s device detects COVID-19 in 20 seconds

Virusight Diagnostic, an Israeli company that combines artificial intelligence software and spectral technology, announced the results of a study which found that its Pathogens Diagnostic device detects COVID-19 with 96.3 percent accuracy compared with the standard RT-PCR test.

The study was conducted by researchers from the Department of Science and Technology at the University of Sannio in Benevento, Italy, together with partner company TechnoGenetics S.p.A.


The Virusight solution was tested on 550 saliva samples and found to be safe and effective.

Study explores the concept of artificial consciousness in the context of the film ‘Being John Malkovich’

Recent technological advances, such as the development of increasingly sophisticated machine learning algorithms and robots, have sparked much debate about artificial intelligence (AI) and artificial consciousness. While many of the tools created to date have achieved remarkable results, there have been many discussions about what differentiates them from humans.

More specifically, computer scientists and neuroscientists have been pondering the difference between “intelligence” and “consciousness,” wondering whether machines will ever be able to attain the latter. Amar Singh, Assistant Professor at Banaras Hindu University, recently published a paper in a special issue of Springer Link’s AI & Society that explores these concepts by drawing parallels with the fantasy film “Being John Malkovich.”

“Being John Malkovich” is a 1999 film directed by Spike Jonze and featuring John Cusack, Cameron Diaz, and other famous Hollywood stars. The film tells the story of a puppeteer who discovers a portal through which he can access the mind of the movie star John Malkovich, while also altering his being.

From baristas to inspectors: Singapore’s robot workforce plugs labour gaps

The lack of a robotic hand that can match a human hand will continue to delay full automation.


SINGAPORE, May 30 (Reuters) — After struggling to find staff during the pandemic, businesses in Singapore have increasingly turned to deploying robots to help carry out a range of tasks, from surveying construction sites to scanning library bookshelves.

The city-state relies on foreign workers, but their number fell by 235,700 between December 2019 and September 2021, according to the manpower ministry, which notes how COVID-19 curbs have sped up “the pace of technology adoption and automation” by companies.

At a Singapore construction site, a four-legged robot called “Spot”, built by U.S. company Boston Dynamics, scans sections of mud and gravel to check on work progress, with data fed back to construction company Gammon’s control room.
