As the Post Office debacle has amply demonstrated, placing blind faith in a new form of technology can be perilous.
Category: robotics/AI
A Bloomberg report says Apple gave its San Diego Siri AI workers until the end of January to decide if they’re willing to move to its Austin, Texas campus.
121 Apple workers have until the end of January to choose.
Rishi Sunak needs to decide whether he wants to back the UK’s creative industries or gamble everything on an artificial intelligence boom, the chief executive of Getty Images has said.
Craig Peters, who has led the image library since 2019, spoke out amid growing anger from the creative and media sectors at the harvesting of their material as “training data” by AI companies. His company is suing a number of AI image generators in the UK and US for copyright infringement.
Beijing set the goal of becoming the global AI leader by 2030, but that was before the emergence of ChatGPT.
Of the many events that stand out as noteworthy in online discussions across Chinese social media in 2023, it’s perhaps the rise of ChatGPT that will prove to be the most significant.
Chinese artist responds to debate about data-scraping as he prepares for new collaboration with AI.
New sleep technology was a trend at CES 2024, and health company DeRucci showcased its new line of smart bed products — including a pillow to combat snoring.
At CES 2024, a China-based company presented what it claims is the first smart anti-snoring pillow.
There is a growing need to develop methods capable of efficiently processing and interpreting data from various document formats. This challenge is particularly pronounced in handling visually rich documents (VrDs), such as business forms, receipts, and invoices. These documents, often in PDF or image formats, present a complex interplay of text, layout, and visual elements, necessitating innovative approaches for accurate information extraction.
Traditionally, approaches to this problem have leaned on two architectural types: transformer-based models inspired by Large Language Models (LLMs) and Graph Neural Networks (GNNs). These methodologies have been instrumental in encoding text, layout, and image features to improve document interpretation. However, they often struggle to represent the spatially distant semantics essential for understanding complex document layouts. This challenge stems from the difficulty of capturing relationships between elements such as table cells and their headers, or text separated by line breaks.
Researchers at JPMorgan AI Research and Dartmouth College have introduced a novel framework named ‘DocGraphLM’ to bridge this gap. This framework synergizes graph semantics with pre-trained language models to overcome the limitations of current methods. The essence of DocGraphLM lies in its ability to integrate the strengths of language models with the structural insights provided by GNNs, thus offering a more robust document representation. This integration is crucial for accurately modeling the intricate relationships and structures of visually rich documents.
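The paper's specifics aside, the general pattern is straightforward to illustrate: take per-element text embeddings from a pre-trained language model and refine them with message passing over a graph built from the document's layout. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the class names (SimpleGraphLayer, DocGraphSketch), the mean-aggregation layer, the adjacency built from spatial proximity, and all dimensions are assumptions for demonstration.

```python
# A minimal, hypothetical sketch of the general idea behind DocGraphLM-style
# models: fuse per-element text embeddings from a pre-trained language model
# with message passing over a graph built from the document's layout. This is
# NOT the authors' implementation; node features, edge construction, and the
# fusion step here are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One round of mean-aggregation message passing over document elements."""
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) 0/1 matrix linking spatially nearby elements (assumed
        # to be built from bounding-box proximity in a real pipeline).
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neighbor_mean = (adj @ node_feats) / deg   # average neighbor features
        fused = torch.cat([node_feats, neighbor_mean], dim=-1)
        return torch.relu(self.update(fused))

class DocGraphSketch(nn.Module):
    """Text semantics from an LM, enriched with layout structure via a GNN."""
    def __init__(self, lm_dim: int = 768, hidden: int = 256, num_layers: int = 2):
        super().__init__()
        self.project = nn.Linear(lm_dim, hidden)
        self.layers = nn.ModuleList(
            [SimpleGraphLayer(hidden) for _ in range(num_layers)]
        )

    def forward(self, lm_embeddings: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # lm_embeddings: (N, lm_dim) embeddings of N document elements,
        # e.g. pooled token vectors from a frozen pre-trained LM.
        h = self.project(lm_embeddings)
        for layer in self.layers:
            h = layer(h, adj.float())
        return h  # (N, hidden) joint text-plus-layout representations

# Toy usage with random stand-ins for real LM embeddings and layout edges.
lm_emb = torch.randn(6, 768)             # 6 elements (cells, lines, ...)
adj = (torch.rand(6, 6) > 0.5).float()   # fake spatial adjacency
reps = DocGraphSketch()(lm_emb, adj)
print(reps.shape)                        # torch.Size([6, 256])
```

The design point this illustrates is the division of labor: the language model captures what each element says, while the graph layers capture where elements sit relative to one another, letting distant but related elements (such as a table cell and its header) exchange information.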
In today’s column, I closely examine the recent launch of OpenAI’s online GPT Store, which allows users to post GPTs, or chatbots, for ready use by others, including, somewhat alarmingly, a spate of chatbots intended to offer mental health advice.
OpenAI has launched its long-awaited GPT Store. That is great news, but the store also hosts mental health GPTs that are less than stellar. I take a close look at the issue.
From blanket bans to specific prohibitions
Previously, OpenAI maintained a strict ban on using its technology for any “activity that has high risk of physical harm,” including “weapons development” and “military and warfare.” That language prevented any government or military agency from using OpenAI’s services for defense or security purposes. The new policy, however, removes the blanket ban on “military and warfare” use and instead lists specific prohibited use cases, such as employing the technology to “develop or use weapons” or to “harm yourself or others.”
PLA scientists are reportedly using AI and large language models like Baidu’s Ernie to train a military AI system that can better predict the behavior of human adversaries.
Chinese scientists have reportedly combined AI and large language models to improve the accuracy of predicting human behavior during military conflicts.