
Mind in the machine

‘Neural networks today are about as similar to a brain as an airplane is to a bird.’ — Kwabena Boahen, PhD, professor of bioengineering and of electrical engineering

One problem, as Boahen sees it, is that AI relies on a “synaptocentric” mode of computing, in that half of the nodes — lines of binary…


Do nerve cells hold the key to an epic advance in computing?

By John Sanford

Photography by Misha Gravenor.

Kwabena Boahen, PhD, a professor of bioengineering and of electrical engineering, right, and H.-S. Philip Wong, PhD, professor of electrical engineering and the Willard R. and Inez Kerr Bell Professor in the School of Engineering.

Codegen raises new cash to automate software engineering tasks

Basically, although some or all coding jobs could be absorbed, I remain positive — because once infinite computation and infinite AGI arrive, everyone gets to be a god.


Jay Hack, an AI researcher with a background in natural language processing and computer vision, came to the realization several years ago that large language models (LLMs) — think OpenAI’s GPT-4 or ChatGPT — have the potential to make developers more productive by translating natural language requests into code.

After working at Palantir as a machine learning engineer and building and selling Mira, an AI-powered shopping startup for cosmetics, Hack began experimenting with LLMs to execute pull requests — the process of merging new code changes into a project's main repository. With the help of a small team, Hack slowly expanded these experiments into a platform, Codegen, that attempts to automate as many mundane, repetitive software engineering tasks as possible by leveraging LLMs.

“Codegen automates the menial labor out of software engineering by empowering AI agents to ship code,” Hack told TechCrunch in an email interview. “The platform enables companies to move significantly quicker and eliminates costs from tech debt and maintenance, allowing companies to focus on product innovation.”
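As a rough illustration of the pattern Hack describes — a natural-language request going in, a proposed code change coming out — here is a minimal sketch. Codegen's actual pipeline is not public, so the model call is a stub: `generate_patch` and `propose_pull_request` are hypothetical names standing in for an LLM-backed service.

```python
# Illustrative sketch only: the LLM call is stubbed out, since Codegen's
# real pipeline is proprietary. The shape being shown is: natural-language
# request in, a set of proposed file edits (a would-be pull request) out.

def generate_patch(request: str, file_text: str) -> str:
    """Stub standing in for an LLM call that rewrites a file
    according to a natural-language request."""
    # A real system would prompt a model with the request and file contents;
    # this stub handles a single hard-coded rename to keep the example runnable.
    if "rename greet to hello" in request:
        return file_text.replace("def greet", "def hello")
    return file_text

def propose_pull_request(request: str, repo_files: dict) -> dict:
    """Return {path: new_text} for every file the 'model' changed —
    the raw material for an automated pull request."""
    changes = {}
    for path, text in repo_files.items():
        new_text = generate_patch(request, text)
        if new_text != text:
            changes[path] = new_text
    return changes
```

In a production system, the returned change set would then be committed to a branch and opened as a pull request for human review — the "empowering AI agents to ship code" step Hack describes.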

Meta brings us a step closer to AI-generated movies

Like “Avengers” director Joe Russo, I’m becoming increasingly convinced that fully AI-generated movies and TV shows will be possible within our lifetimes.

A host of AI unveilings over the past few months, in particular OpenAI’s ultra-realistic-sounding text-to-speech engine, have given glimpses into this brave new frontier. But Meta’s announcement today put our AI-generated content future into especially sharp relief — for me at least.

Meta this morning debuted Emu Video, an evolution of the tech giant’s image generation tool, Emu. Given a caption (e.g. “A dog running across a grassy knoll”), an image, or a photo paired with a description, Emu Video can generate a four-second animated clip.

These AI-powered lifts will generate energy on their own

A Zaha Hadid-designed property in Hong Kong will have AI-powered lifts that generate their own energy.


An urban oasis with next-gen elevators

The new structure replaces an old car park, creating an urban oasis filled with gardens, plants, and trees. Reaching these gardens means taking Henderson’s next-generation AI-powered lifts.

An official from Henderson properties told Baba that, in the future, an AI-powered lift will be assigned to each user so they no longer have to wait for long periods on the ground floor or on a specific floor. The AI system will coordinate the lifts according to the number of people waiting and their final destinations.
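The coordination scheme described — routing lifts by how many people are waiting and where they are going — resembles destination-dispatch assignment. A minimal sketch of that idea (purely illustrative; `assign_lifts` is a hypothetical function, not Henderson's actual system) might group riders by destination floor and send each group to a conveniently placed, lightly loaded lift:

```python
# Hypothetical destination-dispatch sketch, not Henderson's real algorithm:
# riders heading to the same floor share a car, and each group is assigned
# to the lift that is both close to their origin and not already crowded.

def assign_lifts(waiting, lift_floors):
    """waiting: list of (origin_floor, destination_floor) requests.
    lift_floors: current floor of each lift, indexed by lift id.
    Returns {lift_id: [requests assigned to that lift]}."""
    # Group requests by destination so riders sharing a floor share a car.
    by_dest = {}
    for req in waiting:
        by_dest.setdefault(req[1], []).append(req)

    assignments = {i: [] for i in range(len(lift_floors))}
    for dest, group in sorted(by_dest.items()):
        origin = group[0][0]
        # Score each lift by distance to the group's origin plus its
        # current load, and pick the cheapest.
        best = min(range(len(lift_floors)),
                   key=lambda i: abs(lift_floors[i] - origin)
                                 + len(assignments[i]))
        assignments[best].extend(group)
    return assignments
```

Real dispatch systems also weigh direction of travel, capacity, and predicted traffic; the point here is only the general shape of grouping by destination before assigning cars.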

Sam Altman: GPT-5 underway and will substantially differ from GPT-4

OpenAI is seeking more funds from Microsoft to build future models both companies can profit from.


Justin Sullivan/Getty.

Since the release of its blockbuster product, ChatGPT, in November last year, OpenAI has released improved versions of GPT, the AI model that powers the conversational chatbot. Its most recent iteration, GPT-4 Turbo, offers a faster and more cost-effective way to use GPT-4.