
OpenAI is poised to revolutionize generative AI with the introduction of Multimodal GPT-4 and its array of groundbreaking features. The highlight of this eagerly anticipated release is “All Tools,” which gives users access to the full range of GPT-4 capabilities without constant toggling. This game-changing update simplifies the user experience and lets users move seamlessly between tasks, unlocking the true potential of GPT-4.

One of the standout features of Multimodal GPT-4 is the integration of DALL·E 3, allowing users to upload images and request creative responses. This integration not only expands the horizons of content creation but also enhances the system’s overall versatility. Users can now unleash their creativity in new and exciting ways.
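For readers who want to experiment with image input programmatically, here is a minimal sketch using OpenAI’s Python SDK. Note that “All Tools” itself is a ChatGPT interface feature rather than an API; the model identifier, image URL, and prompt below are illustrative assumptions, not details from the announcement.

```python
# Illustrative sketch only: sending an image alongside a text prompt via the
# OpenAI Python SDK. Model name and URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model identifier; check current docs
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Suggest a creative caption for this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```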

It is important to note that the “All Tools” feature excludes ChatGPT plugins, a strategic decision by OpenAI to streamline the user experience within a single platform. This consolidation eliminates the need for third-party add-ons that often duplicated functionality the platform already offers.

Summary: Researchers devised a machine learning model that gauges a country’s peace level by analyzing word frequency in its news media.

By studying over 723,000 articles from 18 countries, the team identified distinct linguistic patterns corresponding to varying peace levels.

While high-peace countries often used terms related to optimism and daily life, lower-peace nations favored words tied to governance and control.
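The paper’s exact pipeline isn’t described here, but the general approach, representing articles by word frequencies and relating those features to peace-level labels, can be sketched in a few lines. Everything below (the corpus, the labels, and the model choice) is an illustrative assumption, not the researchers’ actual method:

```python
# Minimal sketch of word-frequency-based classification of news text.
# Corpus, labels, and model are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "families gathered for the holiday festival in the park",   # high-peace style
    "the ministry tightened control over regional governance",  # low-peace style
]
labels = ["high_peace", "low_peace"]  # hypothetical country-level labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

print(model.predict(["residents celebrated the sunny weekend market"]))
```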

Emmett Short discusses these comments on this episode of Lifespan News.

But first, the mad scientist David Sinclair, this time with Peter Diamandis at Abundance 360, giving more details on human trials for the genetic engineering side of the technology versus the chemical, pill-based side. Which would you want more? We’ll also hear David’s thoughts on how AI will affect the advancement of this tech. Spoiler: a lot. I’m going to play the best parts and add my commentary along the way.

Indian start-up Green Robot Machinery (GRoboMac) has developed a cotton picker with autonomous robotic arms, mounted on a semi-autonomous electric farm vehicle.

The robotic arms of the battery-operated machine are each capable of picking about 50 kg of cotton per day, so the four arms mounted on the vehicle can pick about 200 kg per day. High-yielding farms can add more arms, the company says.
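A quick back-of-envelope check of those figures, with the 50 kg per arm per day rate taken from the article and everything else hypothetical:

```python
import math

KG_PER_ARM_PER_DAY = 50  # figure quoted in the article

def daily_capacity(arms: int) -> int:
    """Approximate daily picking capacity in kg for a given number of arms."""
    return arms * KG_PER_ARM_PER_DAY

def arms_needed(target_kg_per_day: float) -> int:
    """Smallest whole number of arms to meet a daily target (illustrative)."""
    return math.ceil(target_kg_per_day / KG_PER_ARM_PER_DAY)

print(daily_capacity(4))   # 200 kg/day, matching the four-arm vehicle
print(arms_needed(320))    # a hypothetical higher-yield farm would need 7 arms
```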

The large blue warning signs read “Video recording for fall detection and prevention” on the third-floor dementia care unit of the Trousdale, a private-pay senior living community in Silicon Valley where a studio starts from about $7,000 per month.

In late 2019, AI-based fall detection technology from SafelyYou, a Bay Area startup, was installed to monitor its 23 apartments (it is turned on in all but one apartment, where the family didn’t consent). A single camera unobtrusively positioned high on each bedroom wall continuously monitors the scene.

If the system, which has been trained on SafelyYou’s ever-expanding library of falls, detects a fall, staff are alerted. The footage, which is kept only if an event triggers the system, can then be viewed in the Trousdale’s control room by paramedics to help decide whether someone needs to go to the hospital (did they hit their head?) and by designated staff to analyze what changes could prevent the person from falling again.
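The event-triggered design, where footage is retained only when the model flags a fall, can be sketched as follows. All names and numbers here are placeholders; this is not SafelyYou’s actual implementation:

```python
# Hedged sketch of event-triggered retention: keep a rolling buffer of frames,
# persist footage and alert staff only when a detection fires.
from collections import deque
from typing import Callable, Iterable

BUFFER_SECONDS = 30  # assumed rolling-window length
FPS = 15             # assumed frame rate

def monitor(frames: Iterable, detect_fall: Callable,
            alert_staff: Callable, save_clip: Callable) -> None:
    buffer = deque(maxlen=BUFFER_SECONDS * FPS)  # old frames discarded continuously
    for frame in frames:
        buffer.append(frame)
        if detect_fall(frame):
            save_clip(list(buffer))  # footage retained only on a triggering event
            alert_staff("Possible fall detected; clip ready for review.")

# Toy demo with stand-in components.
frames = ["ok"] * 5 + ["fall"] + ["ok"] * 2
monitor(
    frames,
    detect_fall=lambda f: f == "fall",               # stand-in for the trained model
    alert_staff=print,
    save_clip=lambda clip: print(f"saved {len(clip)} frames"),
)
```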

Researchers from Tsinghua University, China, have developed an all-analog photoelectronic chip that combines optical and electronic computing to achieve ultrafast and highly energy-efficient computer vision processing, surpassing digital processors.

Computer vision is an ever-evolving field of artificial intelligence focused on enabling machines to interpret and understand visual information from the world, similar to how humans perceive and process images and videos.

It involves tasks such as image recognition, object detection, and scene understanding. This is done by converting analog signals from the environment into digital data for processing by neural networks, enabling machines to make sense of visual information. However, this analog-to-digital conversion consumes significant time and energy, limiting the speed and efficiency of practical neural network implementations.
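To make that bottleneck concrete, here is a rough back-of-envelope estimate of the conversion cost in a conventional pipeline. The per-conversion energy figure is an assumed placeholder, not a measured value:

```python
# Back-of-envelope: in a conventional pipeline, every sensor pixel passes
# through an analog-to-digital converter (ADC) before a digital network
# sees it. The energy-per-conversion number below is an assumption.
PIXELS = 1920 * 1080          # one HD frame
ADC_ENERGY_PJ = 1.0           # assumed picojoules per conversion
FRAMES_PER_SECOND = 30

joules_per_second = PIXELS * FRAMES_PER_SECOND * ADC_ENERGY_PJ * 1e-12
print(f"ADC energy alone: {joules_per_second * 1e6:.1f} microjoules/s")
# An all-analog photoelectronic chip skips this conversion step entirely,
# which is where the reported speed and energy gains come from.
```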

For the first time, a physical neural network has successfully been shown to learn and remember “on the fly,” in a way inspired by and similar to how the brain’s neurons work.

The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.
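In software terms, learning “on the fly” means updating a model one sample at a time as data streams in, with no stored training set. A generic online-perceptron sketch (not a model of the physical network itself) illustrates the idea:

```python
# Generic online learning: weights are adjusted per sample as data arrives.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)  # weights, updated incrementally

def online_update(w, x, y, lr=0.1):
    """One streaming update: adjust weights only if the prediction is wrong."""
    pred = 1 if w @ x > 0 else -1
    if pred != y:
        w = w + lr * y * x
    return w

for _ in range(100):                  # simulated data stream
    x = rng.normal(size=3)
    y = 1 if x.sum() > 0 else -1      # hidden rule to be learned online
    w = online_update(w, x, y)

print("learned weights:", w)
```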

Published today in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

President Joe Biden’s executive order plans for standardized digital watermarking rules.

President Joe Biden’s executive order on artificial intelligence is a first-of-its-kind action from the government to tackle some of the technology’s greatest challenges — like how to identify if an image is real or fake.

Among a myriad of other demands, the order, signed Monday, calls for a new set of government-led standards on watermarking AI-generated content. Like watermarks on photographs or paper money, digital watermarks help users distinguish between a real object and a fake one and determine who owns it.
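As a toy illustration of the concept, the sketch below hides a few bits in an image’s least-significant bits, a classic textbook technique. It is not the standard the order calls for; real AI-content watermarks are more sophisticated, though, as noted below, still fragile:

```python
# Toy least-significant-bit (LSB) watermark: embed and recover hidden bits.
# Illustrative only; not the watermarking scheme the executive order mandates.
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write watermark bits into the lowest bit of the first len(bits) pixels."""
    out = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n: int) -> list[int]:
    return [int(v & 1) for v in pixels.ravel()[:n]]

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed_lsb(image, mark)
assert extract_lsb(stamped, len(mark)) == mark
# A single re-encode or resize would scramble these bits, illustrating
# why naive watermarks are easy to strip.
```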

Digital watermarks are easily broken and abused.

Nearly five years ago, DeepMind, one of Google’s more prolific AI-centered research labs, debuted AlphaFold, an AI system that can accurately predict the structures of many proteins inside the human body. Since then, DeepMind has improved on the system, releasing an updated and more capable version of AlphaFold — AlphaFold 2 — in 2020.

And the lab’s work continues.

Today, DeepMind revealed that the newest release of AlphaFold, the successor to AlphaFold 2, can generate predictions for nearly all molecules in the Protein Data Bank, the world’s largest open access database of biological molecules.
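For readers who want to poke at the data themselves, here is a small sketch that fetches an entry from the Protein Data Bank mentioned above. The entry ID is an arbitrary example, and the download URL pattern is assumed from RCSB’s public file service:

```python
# Fetch one PDB entry and count its atom records as a quick sanity check.
import urllib.request

pdb_id = "1TUP"  # example entry: p53 tumor suppressor bound to DNA
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"

with urllib.request.urlopen(url) as resp:
    structure = resp.read().decode("utf-8")

atoms = [line for line in structure.splitlines() if line.startswith("ATOM")]
print(f"{pdb_id}: {len(atoms)} atom records")
```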