
Since its launch in November 2022, ChatGPT, an artificial intelligence (AI) chatbot, has been causing quite a stir because of the software’s surprisingly human and accurate responses.

The generative AI system reached a record-breaking 100 million monthly active users only two months after launch. Yet even as its popularity continues to grow, the cybersecurity industry is debating whether this type of technology will help make the internet safer or play right into the hands of those trying to cause chaos.


The explosion of new generative AI products and capabilities over the last several months — from ChatGPT to Bard and the many variations from others based on large language models (LLMs) — has driven an overheated hype cycle. In turn, this situation has led to a similarly expansive and passionate discussion about needed AI regulation.

The AI regulation firestorm was ignited by the Future of Life Institute open letter, now signed by thousands of AI researchers and other concerned parties. Notable signees include Apple cofounder Steve Wozniak; SpaceX, Tesla and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Sapiens author Yuval Noah Harari; and Yoshua Bengio, founder of the AI research institute Mila.

Summary: It may be possible to optimize the stimulation parameters of brain implants in animals without human intervention. The study highlights the potential for autonomous optimization of prostheses implanted in the brain. The advance may prove to be beneficial for those with spinal cord injury and diseases that affect movement.

Source: University of Montreal.

Scientists have long studied neurostimulation to treat paralysis and sensory deficits caused by strokes and spinal cord injuries, which in Canada affect some 380,000 people across the country.

There are many reasons to think the technological singularity could happen sooner than 2045. With technology advancing at a rapid pace, an abundance of data, increased investment, growing collaboration, and potential breakthroughs, we might just wake up one day and realize that the robots have taken over. But hey, at least they’ll do our laundry.

Do you think singularity will happen sooner than 2045? Why or why not? Answer in the comment section below.

ChatGPT’s immense popularity and power make it eye-wateringly expensive to maintain, The Information reports, with OpenAI paying up to $700,000 a day to keep its beefy infrastructure running, based on figures from the research firm SemiAnalysis.

“Most of this cost is based around the expensive servers they require,” Dylan Patel, chief analyst at the firm, told the publication.

The costs could be even higher now, Patel told Insider in a follow-up interview, because these estimates were based on GPT-3, the previous model that powers the older and now free version of ChatGPT.
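The scale of the reported figure is easier to grasp as a per-query cost. The sketch below uses the $700,000/day estimate quoted above, but the daily query volume is a purely hypothetical assumption for illustration; neither OpenAI nor SemiAnalysis is the source of that number.

```python
# Back-of-envelope: what $700,000/day could imply per query.
# DAILY_COST_USD comes from the SemiAnalysis estimate quoted above;
# ASSUMED_QUERIES_PER_DAY is a hypothetical traffic figure, not a reported one.
DAILY_COST_USD = 700_000
ASSUMED_QUERIES_PER_DAY = 10_000_000  # hypothetical

cost_per_query = DAILY_COST_USD / ASSUMED_QUERIES_PER_DAY
print(f"~${cost_per_query:.2f} per query")  # ~$0.07 per query
```

Under that (invented) traffic assumption, each query would cost on the order of a few cents in infrastructure alone, which is why serving cost dominates the discussion.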

Cameras for machine vision and robotics are essentially bionic devices mimicking human eyes. These applications require advanced color imaging systems with attributes such as high resolution, large field of view (FoV), compact design, light weight and low energy consumption1. Conventional imaging systems based on CCD/CMOS image sensors suffer from relatively low FoV, bulkiness, high complexity, and high power consumption, especially when paired with mechanically tunable optics. Recently, spherical bionic eyes with curved image-sensor retinas have attracted enormous research interest1,2,3,4,5,6,7. Devices of this type offer several appealing features, such as simplified lens design, low image aberration, wide FoV, and an appearance similar to that of the biological eye, rendering them suitable for humanoid robots8,9,10,11,12,13. However, existing spherical bionic eyes with curved retinas typically have fixed lenses and can acquire only monochrome images. Fixed lenses cannot image objects at varying distances. Moreover, the conventional color imaging function of CCD/CMOS image sensors is achieved with color filter arrays, which add complexity to device fabrication and cause optical loss14,15,16,17,18,19. Typical absorptive organic dye filters suffer from poor UV and high-temperature stability, and plasmonic color filters suffer from low transmission20,21,22. It is even more challenging to fabricate color filter arrays on a hemispherical geometry, where most traditional microelectronic fabrication methods are not applicable.
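The color-filter-array approach criticized above can be sketched in a few lines: each photosite keeps only one of the three color channels, and the discarded light is exactly the optical loss the authors cite, while the later interpolation (demosaicking) adds the complexity. This is a minimal illustration assuming a standard RGGB Bayer pattern; it is not code from the paper.

```python
# Minimal sketch of Bayer color-filter-array sampling (RGGB pattern).
# Each photosite keeps one channel; the other two are filtered out
# and must later be reconstructed by demosaicking.
def bayer_channel(row, col):
    """Return which channel an RGGB mosaic keeps at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def mosaic(rgb_image):
    """Sample a full-color image through the filter array.

    rgb_image: 2-D list of (r, g, b) tuples.
    Returns a 2-D list holding the single retained intensity per pixel.
    """
    out = []
    for r, row in enumerate(rgb_image):
        out.append(
            [px["RGB".index(bayer_channel(r, c))] for c, px in enumerate(row)]
        )
    return out
```

For a 2×2 input, two thirds of the color information per pixel is thrown away at capture time, which is the trade-off a filter-free retina avoids.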

Herein, we demonstrate a novel bionic eye design that possesses adaptive optics and a hemispherical nanowire-array retina with filter-free color imaging and neuromorphic preprocessing abilities. The primary optical sensing function of the artificial retina is realized using a hemispherical all-inorganic CsPbI3 nanowire array that produces photocurrent without external bias, enabling a self-powered working mode. Intriguingly, an electrolyte-assisted, color-dependent bidirectional synaptic photo-response is discovered in a well-engineered hybrid nanostructure. Inspired by the vertical alignment of a color-sensitive cone cell and its following neurons, the device vertically integrates a SnO2/NiO double-shell nanotube, filled with ionic liquid in the core, on top of a CsPbI3/NiO core-shell nanowire. The positive surrounding-gate effect of NiO due to photo-hole injection can be partially or fully balanced by the electrolyte under shorter (blue) or longer (green and red) wavelength illumination, respectively. Thus, the device yields positive photocurrent under shorter-wavelength illumination and negative photocurrent under longer-wavelength illumination. Carriers accumulate in the SnO2/NiO structure, giving rise to the bidirectional synaptic photo-response. This color-sensitive bidirectional photo-response instills a unique filter-free color imaging function in the retina. The synaptic-behavior-based neuromorphic preprocessing ability, along with the self-powered feature, effectively reduces the energy consumption of the system23,24,25,26,27,28. Moreover, the color selectivity of each pixel can be tuned by a small external bias to detect more accurate color information. We demonstrate that the device can reconstruct color images with high fidelity for convolutional neural network (CNN) classification.
In addition, our bionic eye integrates adaptive optics: an artificial crystalline lens and a liquid-crystal-based electronic iris. The artificial crystalline lens can switch focal length to detect objects at different distances, and the electronic iris controls the amount of light reaching the retina, which enhances the dynamic range. Both optical components are tuned simply by an electric field, making them fast, compact, and much more energy efficient than the conventional mechanically controlled optics reported hitherto. (Supplementary Table 1 compares our system with some commercial zoom lenses.) The combination of all these unique features makes the bionic eye structurally and functionally equivalent to its biological counterpart.
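The filter-free color discrimination described above rests on a simple readout idea: the sign of the photocurrent carries the color information. The toy function below caricatures that per-pixel logic; the threshold, units, and current values are invented for illustration and do not come from the paper.

```python
# Toy illustration of sign-based, filter-free color discrimination:
# per the description above, shorter (blue) wavelengths produce positive
# photocurrent and longer (green/red) wavelengths produce negative
# photocurrent. Threshold and current values are hypothetical.
def classify_wavelength(photocurrent_na, threshold_na=0.1):
    """Map a pixel's photocurrent polarity to a coarse wavelength class."""
    if photocurrent_na > threshold_na:
        return "short (blue)"
    if photocurrent_na < -threshold_na:
        return "long (green/red)"
    return "ambiguous"
```

A small external bias, as the authors note, would shift each pixel's effective threshold, refining this coarse two-way split into more accurate color readout.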

Researchers show how Stable Diffusion can read minds. The method reconstructs images from fMRI scans with amazing accuracy.

Researchers have been using AI models to decode information from the human brain for years. At their core, most methods involve using pre-recorded fMRI images as input to a generative AI model for text or images.

In early 2018, for example, a group of researchers from Japan demonstrated how a neural network could reconstruct images from fMRI recordings. In 2019, another group reconstructed images from the activity of monkey neurons, and Meta’s research group, led by Jean-Remi King, has published work that derives text from fMRI data.