
Researchers at EPFL have developed a new, uniquely modular machine learning model for flexible decision-making. It can take any combination of text, video, image, sound, and time-series inputs and then output any number, or combination, of predictions.

We’ve all heard of large language models, or LLMs: massive-scale models trained on huge amounts of text that form the basis for chatbots like OpenAI’s ChatGPT. Next-generation multimodal models (MMs) can learn from inputs beyond text, including video, images, and sound.

Creating MM models at a smaller scale poses significant challenges, including the problem of being robust to non-random missing information: information a model lacks, often because of biased availability in the underlying data sources. It is therefore critical to ensure the model does not learn these patterns of biased missingness when making its predictions.
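One common way to keep a model from exploiting biased missingness is to randomize which modalities are hidden during training (often called modality dropout), so that missing inputs are decoupled from the target. The sketch below is a minimal PyTorch illustration of that general idea, not the EPFL team's actual architecture; all module names, dimensions, and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    """Toy modular fusion model: one encoder per modality, masked mean pooling."""

    def __init__(self, input_dims, hidden_dim=128, num_outputs=1):
        super().__init__()
        # One small encoder per modality (text, image, audio, time-series, ...).
        self.encoders = nn.ModuleDict(
            {name: nn.Linear(dim, hidden_dim) for name, dim in input_dims.items()}
        )
        self.head = nn.Linear(hidden_dim, num_outputs)

    def forward(self, inputs, present):
        # `inputs`: modality name -> feature tensor [batch, dim]
        # `present`: modality name -> 0/1 availability mask [batch]
        pooled, total = 0.0, 0.0
        for name, x in inputs.items():
            mask = present[name].unsqueeze(-1).float()          # [batch, 1]
            pooled = pooled + mask * torch.relu(self.encoders[name](x))
            total = total + mask
        fused = pooled / total.clamp(min=1.0)                   # mean over available modalities
        return self.head(fused)


def modality_dropout(present, p=0.3):
    """Randomly hide available modalities during training, independently of the
    label, so the network cannot learn systematic (biased) missingness patterns."""
    dropped = {}
    for name, mask in present.items():
        keep = (torch.rand_like(mask.float()) > p).float()
        dropped[name] = mask.float() * keep
    return dropped
```

In training, one would pass `modality_dropout(present)` instead of the raw availability masks, then evaluate with the true masks so the model is judged on the real pattern of missing inputs.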

Elon Musk claims OpenAI is using GPT-4 to ‘maximize profits’ instead of ‘for the benefit of humanity.’


The lawsuit claims that the GPT-4 model OpenAI released in March 2023 isn’t just capable of reasoning but is also actually “better at reasoning than average humans,” having scored in the 90th percentile on the Uniform Bar Examination for lawyers. The company is rumored to be developing a more advanced model, known as “Q Star,” that has a stronger claim to being true artificial general intelligence (AGI).

Altman was fired (and subsequently rehired five days later) by OpenAI in 2023 over vague claims that his communication with the board was “hindering its ability to exercise its responsibilities.” The lawsuit filed by Musk alleges that in the days following this event, Altman, Brockman, and Microsoft “exploited Microsoft’s significant leverage over OpenAI” to replace board members with handpicked alternatives more acceptable to Microsoft.

“The new Board members lack substantial AI expertise and, on information and belief, are ill equipped by design to make an independent determination of whether and when OpenAI has attained AGI — and hence when it has developed an algorithm that is outside the scope of Microsoft’s license,” claims the lawsuit. The partnership between OpenAI and Microsoft is currently being examined by regulators in the UK, EU, and US to assess whether it affects competition.

Emphasizing an AI-first strategy and the ability of Google Cloud databases to support GenAI applications, Google announced new developments in integrating generative AI with its databases.


AWS offers a broad range of services for vector database requirements, including Amazon OpenSearch Service, Amazon Aurora PostgreSQL-Compatible Edition, Amazon RDS for PostgreSQL, Amazon Neptune ML, and Amazon MemoryDB for Redis. AWS emphasizes the operationalization of embedding models, making application development more productive through features like data management, fault tolerance, and critical security features. AWS’s strategy focuses on simplifying the scaling and operationalization of AI-powered applications, providing developers with the tools to innovate and create unique experiences powered by vector search.
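As a concrete illustration of the vector-search pattern these services support, the snippet below runs a nearest-neighbour query against a PostgreSQL table using the pgvector extension, which is available on Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for PostgreSQL. It is a generic sketch: the host, table, and column names are placeholders, and the query embedding would normally come from an embedding model rather than being hard-coded.

```python
import psycopg2

# Placeholder connection details; in practice this would point at an Aurora
# PostgreSQL or RDS for PostgreSQL instance with the pgvector extension enabled.
conn = psycopg2.connect(host="example-cluster.rds.amazonaws.com",
                        dbname="appdb", user="app", password="***")

query_embedding = [0.12, -0.03, 0.57]  # normally produced by an embedding model

with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    # `<->` is pgvector's Euclidean distance operator; `<=>` gives cosine distance.
    cur.execute(
        """
        SELECT id, content
        FROM documents
        ORDER BY embedding <-> %s::vector
        LIMIT 5;
        """,
        (str(query_embedding),),
    )
    for doc_id, content in cur.fetchall():
        print(doc_id, content)
```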

Azure takes a similar approach by offering vector database extensions to existing databases. This strategy aims to avoid the extra cost and complexity of moving data to a separate database, keeping vector embeddings and original data together for better data consistency, scale, and performance. Azure Cosmos DB and Azure Database for PostgreSQL are positioned as services that support these vector database extensions. Azure’s approach emphasizes the integration of vector search capabilities directly alongside other application data, providing a seamless experience for developers.
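The point about keeping embeddings next to the original data can be made concrete with a small sketch: store the vector in the same row as the application columns, then mix an ordinary relational filter with a vector ranking in a single query. The example below assumes Azure Database for PostgreSQL with the pgvector extension enabled (Azure Cosmos DB exposes vector search through its own query APIs instead of SQL); the schema and values are purely illustrative.

```python
import psycopg2

# Placeholder connection to an Azure Database for PostgreSQL instance.
conn = psycopg2.connect(host="example.postgres.database.azure.com",
                        dbname="appdb", user="app", password="***")

with conn, conn.cursor() as cur:
    # The embedding lives in the same row as the application data, so there is
    # no separate vector store to keep in sync.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS products (
            id        serial PRIMARY KEY,
            name      text,
            category  text,
            embedding vector(3)          -- 3 dimensions only for illustration
        );
    """)
    cur.execute(
        "INSERT INTO products (name, category, embedding) VALUES (%s, %s, %s::vector);",
        ("trail shoe", "footwear", "[0.11, 0.42, -0.08]"),
    )
    # One query combines a relational filter with a vector similarity ranking.
    cur.execute(
        """
        SELECT name
        FROM products
        WHERE category = %s
        ORDER BY embedding <=> %s::vector
        LIMIT 3;
        """,
        ("footwear", "[0.10, 0.40, -0.10]"),
    )
    print(cur.fetchall())
```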

Google’s move towards native support for vector storage in existing databases simplifies building enterprise GenAI applications relying on data stored in the cloud. The integration with LangChain is a smart move, enabling developers to instantly take advantage of the new capabilities.
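To show what the LangChain angle buys developers, here is a minimal sketch using the generic PGVector store from langchain_community; Google's databases ship their own LangChain integrations that follow the same vector-store interface, so treat the class choice, connection string, and embedding model here as stand-ins rather than Google's specific API.

```python
# Minimal LangChain sketch; FakeEmbeddings stands in for a real embedding model.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores.pgvector import PGVector

CONNECTION_STRING = "postgresql+psycopg2://app:***@example-host:5432/appdb"  # placeholder

store = PGVector.from_texts(
    texts=["GenAI apps can query embeddings stored next to business data."],
    embedding=FakeEmbeddings(size=768),
    collection_name="docs",
    connection_string=CONNECTION_STRING,
)

# Retrieve the stored chunks most similar to a user question.
print(store.similarity_search("Where are my embeddings stored?", k=1))
```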

Check out Sanctuary AI’s Phoenix humanoid robot sorting items with grace and speed.

Following hot on the heels of Tesla’s Optimus and Figure 01 videos released recently, another humanoid robotics firm, Sanctuary AI, has released the latest developments in its bot, the Phoenix.


Sanctuary AI’s Phoenix can now move things around a table just like a human being. Check it out for yourself.

Fast and cheap AI inference (responding to chat prompts with very low latency and very high speed).


Discussing how it works, its benchmarks, how it compares to other AI accelerators, and the future outlook!
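"Low latency at high speed" for chat inference is usually quantified as time to first token plus sustained tokens per second. The sketch below measures both against a streaming chat endpoint; it assumes an OpenAI-compatible API, and the base URL and model name are placeholders, not the specific accelerator service discussed in the video.

```python
import time
from openai import OpenAI

# Placeholder endpoint and credentials for any OpenAI-compatible streaming API.
client = OpenAI(base_url="https://example-inference-host/v1", api_key="PLACEHOLDER")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="example-model",
    messages=[{"role": "user", "content": "Explain vector search in one paragraph."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        if first_token_at is None:
            first_token_at = time.perf_counter()   # time to first token
        chunks += 1                                # streamed chunks approximate tokens

elapsed = time.perf_counter() - start
if first_token_at is not None:
    print(f"time to first token: {first_token_at - start:.3f} s")
print(f"throughput: {chunks / elapsed:.1f} chunks/s over {elapsed:.2f} s")
```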

Support me at Patreon ➜ / anastasiintech.

Sign up for my Deep In Tech Newsletter for free! ➜ https://anastasiintech.substack.com.

https://anastasiintech.com