Indian start-up Green Robot Machinery (GRoboMac) has developed a cotton picker with autonomous robotic arms, mounted on a semi-autonomous electric farm vehicle.
The robotic arms of the battery-operated machine are each capable of picking about 50 kg of cotton per day. That means that four arms, mounted on the vehicle, can pick about 200 kg per day. High-yielding farms can use additional arms, the company says.
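The capacity arithmetic above scales linearly with the number of arms. A minimal sketch, using only the per-arm figure quoted by the company (the function name and arm counts are illustrative, not GRoboMac's):

```python
# Per-arm picking rate taken from the article; everything else is illustrative.
KG_PER_ARM_PER_DAY = 50  # each robotic arm picks ~50 kg of cotton per day

def daily_capacity(num_arms: int) -> int:
    """Approximate kilograms of cotton picked per day by num_arms arms."""
    return num_arms * KG_PER_ARM_PER_DAY

print(daily_capacity(4))  # standard four-arm vehicle: 200 kg/day
print(daily_capacity(6))  # a high-yield farm adding two arms: 300 kg/day
```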
On the third-floor dementia care unit of the Trousdale, a private-pay senior living community in Silicon Valley where a studio starts at about $7,000 per month, large blue warning signs read “Video recording for fall detection and prevention.”
In late 2019, AI-based fall detection technology from a Bay Area startup, SafelyYou, was installed to monitor its 23 apartments (it is active in all but one apartment, where the family declined consent). A single camera unobtrusively positioned high on each bedroom wall continuously monitors the scene.
If the system, which has been trained on SafelyYou’s ever-expanding library of falls, detects a fall, staff are alerted. The footage, which is kept only if an event triggers the system, can then be viewed in the Trousdale’s control room by paramedics to help decide whether someone needs to go to the hospital – did they hit their head? – and by designated staff to analyze what changes could prevent the person from falling again.
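SafelyYou’s system is proprietary, but the event-triggered retention logic the article describes – monitor continuously, keep footage and alert staff only when the model flags a fall – can be sketched roughly. The class, the toy detector, and the buffer size below are all hypothetical:

```python
# Hypothetical sketch of event-triggered footage retention: frames are
# watched continuously, but video is kept and staff alerted only when the
# fall-detection model fires. Not SafelyYou's implementation.
from collections import deque
from typing import Callable

class FallMonitor:
    def __init__(self, detect_fall: Callable[[object], bool], buffer_frames: int = 300):
        self.detect_fall = detect_fall             # trained fall-detection model
        self.buffer = deque(maxlen=buffer_frames)  # rolling pre-event buffer
        self.saved_clips = []                      # footage retained only on a trigger
        self.alerts = 0

    def process_frame(self, frame) -> None:
        self.buffer.append(frame)
        if self.detect_fall(frame):
            self.saved_clips.append(list(self.buffer))  # keep surrounding footage
            self.alerts += 1                            # notify staff
            self.buffer.clear()

# Usage with a toy detector that flags frames labeled "fall":
monitor = FallMonitor(detect_fall=lambda f: f == "fall")
for f in ["walk", "walk", "fall", "walk"]:
    monitor.process_frame(f)
print(monitor.alerts, len(monitor.saved_clips))  # 1 1
```

The rolling buffer means a saved clip includes the moments leading up to the fall, which is what lets staff review how it happened rather than only that it happened.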
Researchers from Tsinghua University, China, have developed an all-analog photoelectronic chip that combines optical and electronic computing to achieve ultrafast and highly energy-efficient computer vision processing, surpassing digital processors.
Computer vision is an ever-evolving field of artificial intelligence focused on enabling machines to interpret and understand visual information from the world, similar to how humans perceive and process images and videos.
It involves tasks such as image recognition, object detection, and scene understanding. This is done by converting analog signals from the environment into digital signals for processing by neural networks, enabling machines to make sense of visual information. However, this analog-to-digital conversion consumes significant time and energy, limiting the speed and efficiency of practical neural network implementations.
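The digitization step the paragraph refers to can be illustrated with a toy analog-to-digital converter (this sketch is not from the Tsinghua paper; the bit depth and values are illustrative). A conventional pipeline must run a conversion like this for every pixel of every frame before any neural-network compute, which is the overhead the all-analog chip avoids:

```python
# Illustrative ADC quantization: mapping a continuous voltage to a digital
# code. Each such conversion costs time and energy, which is why skipping it
# (computing directly on the analog signal) can be faster and more efficient.

def adc_quantize(analog_value: float, bits: int = 8, v_max: float = 1.0) -> int:
    """Quantize an analog voltage in [0, v_max] to a bits-wide digital code."""
    levels = 2 ** bits
    code = int(analog_value / v_max * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the representable range

# A digital vision pipeline runs this per pixel, per frame:
pixels = [0.0, 0.25, 0.5, 1.0]
print([adc_quantize(p) for p in pixels])  # [0, 63, 127, 255]
```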
For the first time, a physical neural network has successfully been shown to learn and remember “on the fly,” in a way inspired by and similar to how the brain’s neurons work.
The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.
Published today in Nature Communications, the research is a collaboration between scientists at the University of Sydney and the University of California, Los Angeles.
President Joe Biden’s executive order plans for standardized digital watermarking rules.
President Joe Biden’s executive order on artificial intelligence is a first-of-its-kind action from the government to tackle some of the technology’s greatest challenges — like how to identify if an image is real or fake.
Among a myriad of other demands, the order, signed Monday, calls for a new set of government-led standards on watermarking AI-generated content. Like watermarks on photographs or paper money, digital watermarks help users distinguish between a real object and a fake one and determine who owns it.
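To make the watermarking idea concrete, here is a deliberately simple least-significant-bit (LSB) sketch: a bit string is hidden in the low bits of pixel values, invisible to viewers but recoverable by software. This toy is not any government standard, and production AI-content watermarks are far more robust; the functions and values are illustrative:

```python
# Hypothetical LSB watermark toy: embed a bit string into the least-
# significant bits of pixel values, then read it back. Real AI-content
# watermarking schemes are far more sophisticated and tamper-resistant.

def embed(pixels: list[int], bits: str) -> list[int]:
    """Overwrite each pixel's least-significant bit with one watermark bit."""
    marked = [(p & ~1) | int(b) for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]  # remaining pixels are left untouched

def extract(pixels: list[int], n_bits: int) -> str:
    """Read the watermark back out of the low bits."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

image = [200, 13, 77, 254, 9, 100, 31, 64]   # toy grayscale pixel values
marked = embed(image, "1011")
print(extract(marked, 4))  # 1011
```

Because only the lowest bit of each pixel changes, the marked image looks identical to the original, which is exactly the property that lets a watermark identify provenance without altering what users see.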
Nearly five years ago, DeepMind, one of Google’s more prolific AI-centered research labs, debuted AlphaFold, an AI system that can accurately predict the structures of many proteins inside the human body. Since then, DeepMind has improved on the system, releasing an updated and more capable version of AlphaFold — AlphaFold 2 — in 2020.
And the lab’s work continues.
Today, DeepMind revealed that the newest release of AlphaFold, the successor to AlphaFold 2, can generate predictions for nearly all molecules in the Protein Data Bank, the world’s largest open access database of biological molecules.
It’s strange times we’re living in when a blockchain billionaire, not the usual Big Tech suspects, is the one supplying the compute capacity needed to develop generative AI.
Already, the cluster of GPUs — worth an estimated half a billion dollars and among the largest in the world — is being used by startups Imbue and Character.ai for AI model experimentation, claims Eric Park, the CEO of a newly formed organization, Voltage Park, that’s charged with running and managing the data centers.
Siemens and Microsoft team up to introduce the Siemens Industrial Copilot and enhance human-machine collaboration.
Microsoft and Siemens have unveiled a groundbreaking partnership aimed at leveraging generative AI to revolutionize industries on a global scale, Microsoft announced.
This collaboration introduces the Siemens Industrial Copilot, an AI-powered assistant designed to significantly enhance human-machine cooperation, particularly in the manufacturing sector. The joint initiative represents a substantial leap forward in AI technology, intending to accelerate innovation and streamline operations across diverse industries.
NVIDIA looking to ramp up chip production in the face of supply shortage.
NVIDIA, the most profitable chip-making company in the world, has unveiled a custom large language model – the technology on which artificial intelligence tools like ChatGPT are based – which the company has developed for its internal use.
Trained on NVIDIA’s proprietary data, “ChipNeMo” will generate and optimize software and assist human designers in building semiconductors. Developed by NVIDIA researchers, ChipNeMo is expected to be highly beneficial in the context of the company’s work in graphics processing, artificial intelligence, and other technologies.