There is a growing need to develop methods capable of efficiently processing and interpreting data from various document formats. This challenge is particularly pronounced in handling visually rich documents (VrDs), such as business forms, receipts, and invoices. These documents, often in PDF or image formats, present a complex interplay of text, layout, and visual elements, necessitating innovative approaches for accurate information extraction.

Traditionally, approaches to this problem have relied on two architectural types: transformer-based models inspired by Large Language Models (LLMs) and Graph Neural Networks (GNNs). These methodologies have been instrumental in encoding text, layout, and image features to improve document interpretation. However, they often struggle to represent the spatially distant semantics essential for understanding complex document layouts. This challenge stems from the difficulty of capturing relationships between elements such as table cells and their headers, or text separated by line breaks.

Researchers at JPMorgan AI Research and Dartmouth College have introduced a novel framework named ‘DocGraphLM’ to bridge this gap. This framework synergizes graph semantics with pre-trained language models to overcome the limitations of current methods. The essence of DocGraphLM lies in its ability to integrate the strengths of language models with the structural insights provided by GNNs, thus offering a more robust document representation. This integration is crucial for accurately modeling the intricate relationships and structures of visually rich documents.
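To make the idea concrete, here is a minimal sketch of that kind of fusion. All names and the toy data are assumptions for illustration, not the authors' actual API: each document segment (say, a table cell) gets a language-model text embedding, plus a graph embedding produced by one round of message passing over its layout neighbors, and the two are concatenated into a joint representation.

```python
import numpy as np

def graph_message_pass(node_feats, adjacency):
    """One GNN-style hop: average each node's neighbor features."""
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return adjacency @ node_feats / deg

def fuse(text_emb, node_feats, adjacency):
    """Concatenate LM text embeddings with graph-aggregated layout features."""
    graph_emb = graph_message_pass(node_feats, adjacency)
    return np.concatenate([text_emb, graph_emb], axis=1)

# Toy document: 3 segments; segment 0 (a header) links to cells 1 and 2,
# so its graph embedding mixes in spatially related cells even if they
# are far apart in reading order.
text_emb = np.random.rand(3, 4)                           # stand-in for LM embeddings
layout = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # bounding-box centers
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)

joint = fuse(text_emb, layout, adj)
print(joint.shape)  # (3, 6): 4 text dims + 2 graph dims per segment
```

A real system would use learned GNN weights and transformer embeddings rather than raw box centers, but the shape of the idea, structural neighborhood information appended to textual semantics, is the same.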

In today’s column, I closely examine the recent launch of OpenAI’s online GPT Store, which allows users to post GPTs, or chatbots, for ready use by others, including, somewhat alarmingly, a spate of chatbots intended for mental health advisory purposes.


OpenAI has launched their awaited GPT Store. This is great news. But there are also mental health GPTs that are less than stellar. I take a close look at the issue.

From blanket bans to specific prohibitions

Previously, OpenAI had a strict ban on using its technology for any “activity that has high risk of physical harm,” including “weapons development” and “military and warfare.” This would prevent any government or military agency from using OpenAI’s services for defense or security purposes. However, the new policy has removed the general ban on “military and warfare” use. Instead, it lists specific examples of prohibited use cases, such as “develop or use weapons” or “harm yourself or others.”

In the ever-evolving world of financial markets, understanding the unpredictable nature of stock market fluctuations is crucial. A new study has taken a leap in this field by developing an innovative quantum mechanics model to analyze the stock market.

This model not only encompasses economic uncertainty and investor behavior but also aims to unravel the mysteries behind stock market anomalies like fat tails, volatility clustering, and contrarian effects.

The core of this model is quantum mechanics, a pillar of physics known for explaining the behavior of subatomic particles.

Health Care

As the U.S. Struggles With a Stillbirth Crisis, Australia Offers a Model for How to Do Better

Australia has emerged as a global leader in the effort to lower the number of babies that die before taking their first breaths. It’s an approach that could benefit America, which lags behind other wealthy nations in reducing stillbirths.