
One AI startup’s undoing is another’s opportunity.

Case in point: Today, AI21 Labs, a company developing generative AI products along the lines of OpenAI’s GPT-4 and ChatGPT, closed a $53 million extension to its previously announced Series C funding round. The new tranche, which had participation from new investors Intel Capital and Comcast Ventures, brings AI21’s total raised to $336 million.

The startup’s valuation remains unchanged at $1.4 billion.


Microsoft’s vision for zero trust security is galvanized around generative AI and reflects how identity and network access must constantly improve to counter complex cyberattacks.

Microsoft’s many security announcements at Ignite 2023 reflect how the company is architecting the future of zero trust with greater adaptability and contextual intelligence designed in. The Microsoft Ignite 2023 Book of News overviews the new products announced this week at the event.

Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers at Charité – Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities unlocked by AI in patient care.

If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor. The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes – genes with the potential to cause cancer – for example. Precision oncology, a specialized field of personalized medicine, leverages this knowledge by using specific treatments such as low-molecular weight inhibitors and antibodies to target and disable hyperactive oncogenes.

Why does it feel like everybody at OpenAI has lost their mind?

In what’s arguably turning into the hottest AI story of the year, former OpenAI CEO Sam Altman was ousted by the rest of the company’s nonprofit board on Friday, leading to a seemingly endless drama cycle that’s included hundreds of staffers threatening to quit en masse if the board doesn’t reinstate him.

A key character in the spectacle has been OpenAI chief scientist and board member Ilya Sutskever — who, according to The Atlantic, likes to burn effigies and lead ritualistic chants at the company, and who appears to have been one of the main drivers behind Altman’s ousting.

Ms. McCauley and Ms. Toner [HF — two board members] have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.

McCauley and Toner reportedly worried that Altman was pushing too hard, too quickly for new and potentially dangerous forms of AI (similar fears led some OpenAI people to bail out and found a competitor, Anthropic, a couple of years ago). The FT’s reporting confirms that the fight was over how quickly to commercialize AI.

The back-story to all of this is actually much weirder than the average sex scandal. The field of AI (in particular, its debates around Large Language Models (LLMs) like OpenAI’s GPT-4) is profoundly shaped by cultish debates among people with some very strange beliefs.

Nov 20 (Reuters) — Following a surprise ouster, OpenAI co-founder and former CEO Sam Altman joined Microsoft (MSFT.O) as the head of artificial intelligence research along with the ChatGPT maker’s former President Greg Brockman and other staff.

The developments come less than a year after OpenAI kicked off the generative AI frenzy with the launch of viral chatbot ChatGPT and bagged Microsoft as an investor, among other big names.

The shakeup is not the first at OpenAI, which was launched in 2015. Tesla CEO Elon Musk, a co-founder of the non-profit, was once its co-chair, and in 2020 other executives departed, going on to found competitor Anthropic, which claims to have a greater focus on AI safety.

As part of pioneering the security of satellite communication in space, NASA is funding a groundbreaking project at the University of Miami’s Frost Institute for Data Science and Computing (IDSC) that will augment traditional large satellites with nanosatellites or constellations of nanosatellites.

These nanosatellites are designed to accomplish diverse goals, ranging from communication and weather prediction to Earth science research and observational data gathering. Technical innovation is a hallmark of NASA, a global leader in the development of novel technologies that enable US space missions and translate to a wide variety of applications from Space and Earth science to consumer goods and to national and homeland security.

Alongside advances in satellite technology and reduced costs of deployment and operation, nanosatellites bring significant challenges for the protection of their communication networks. Specifically, small satellites are owned and operated by a wide variety of public and private sector organizations, expanding the attack surface for cyber exploitation. The scenario is similar to Wi-Fi network vulnerabilities. These systems provide an opportunity for adversaries to threaten national security as well as raise economic concerns for satellite companies, operators, and users.

Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system—in much the same way that the human brain has to develop and operate within physical and biological constraints—allows it to develop features of the brains of complex organisms in order to solve tasks.

As neural systems organize themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time the network must be optimized for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organizational solutions.

Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge said, “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem-solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”
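The idea of a physically constrained network can be sketched in a few lines. This is a hedged illustration, not the Cambridge team’s actual code: one common way to embed such a constraint is to assign each unit a position in space and penalize connection weights by wiring length, so that training must trade task performance against wiring cost. The grid layout, distance threshold, and `wiring_cost` function here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 16

# Place units on a 4x4 grid; dist[i, j] is the Euclidean distance between units.
coords = np.array([(x, y) for x in range(4) for y in range(4)], dtype=float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Random recurrent weight matrix standing in for a trained network.
W = rng.normal(scale=0.5, size=(n_units, n_units))

def wiring_cost(W, dist):
    """Distance-weighted L1 penalty: long-range connections cost more."""
    return float(np.sum(np.abs(W) * dist))

# During training, the total objective would be
#   loss = task_loss + lam * wiring_cost(W, dist)
# Here we simply show that pruning long-range connections lowers the penalty,
# which is the pressure that pushes such networks toward local, brain-like wiring.
W_pruned = np.where(dist > 2.0, 0.0, W)  # drop connections longer than 2 grid units
assert wiring_cost(W_pruned, dist) < wiring_cost(W, dist)
```

Under a penalty like this, the optimizer can only keep a long-range connection if it earns its cost on the task, which is one way to read the trade-off the researchers describe.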