Some researchers see superhuman qualities in artificial intelligence. All scientists need to be alert to the risks this creates.
Megaphragma is a tiny parasitoid wasp that shows complex behaviors despite its minuscule size.
Chua et al. map the compound eye and the lamina of the micro-wasp Megaphragma viggianii in the same specimen at few-nanometer resolution. The stereotyped cartridge connectome is similar to, but simpler than, that of larger insects. Dorsal rim specialization in photoreceptor morphology is reflected in differences in the downstream cartridge connectomes.
I explain how and why some modern philosophers are moving from materialism to idealism.

Timestamps:
00:00 – Intro
00:13 – Quick rundown
01:03 – David Chalmers and…
Physicists debut a new method to detect gravitational waves with unprecedented precision, providing insights into black hole mergers.
A team of researchers from Peking University and the Eastern Institute of Technology (EIT) in China has developed a new framework to train machine learning models with prior knowledge, such as the laws of physics or mathematical logic, alongside data.
Chinese researchers are on the brink of pioneering a groundbreaking approach to developing 'AI scientists' capable of conducting experiments and solving scientific problems.
Recent advances in deep learning models have revolutionized scientific research, but current models still struggle to simulate real-world physics interactions accurately.
Erasing key information during training allows machine learning models to learn new languages faster and more easily.
“Without a fundamental understanding of the world, a model is essentially an animation rather than a simulation,” said Chen Yuntian, study author and a professor at the Eastern Institute of Technology (EIT).
Deep learning models are generally trained using data and not prior knowledge, which can include things such as the laws of physics or mathematical logic, according to the paper.
But the scientists from Peking University and EIT wrote that when training the models, prior knowledge could be used alongside data to make them more accurate, creating “informed machine learning” models capable of incorporating this knowledge into their output.
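The core idea the researchers describe — combining a data-fitting objective with a term that encodes prior knowledge such as a physical law — can be illustrated with a minimal sketch. The example below is hypothetical and not from the paper: it fits the coefficient of a free-fall model y = ½gt² to noisy data, adding a penalty that nudges the learned coefficient toward the known physics value g/2.

```python
# Minimal sketch of "informed machine learning": a data loss plus a
# knowledge penalty encoding a physical law. Illustrative only; the data,
# model, and weighting are assumptions, not the authors' framework.
import random

random.seed(0)
G = 9.81                                   # known physics: free-fall constant
ts = [0.1 * i for i in range(1, 21)]       # time samples
ys = [0.5 * G * t * t + random.gauss(0, 0.05) for t in ts]  # noisy observations

def total_loss(a, lam):
    # data term: mean squared error of the model y = a * t^2 against observations
    data = sum((a * t * t - y) ** 2 for t, y in zip(ts, ys)) / len(ts)
    # knowledge term: prior that the coefficient should equal g/2
    physics = (a - 0.5 * G) ** 2
    return data + lam * physics

# plain gradient descent using a finite-difference gradient
a, lam, lr, eps = 0.0, 1.0, 0.05, 1e-6
for _ in range(2000):
    grad = (total_loss(a + eps, lam) - total_loss(a - eps, lam)) / (2 * eps)
    a -= lr * grad

print(a)  # converges close to g/2 = 4.905
```

The weighting `lam` controls how strongly the prior knowledge constrains the fit; with sparse or noisy data, a larger weight lets the physics term compensate for what the data alone cannot pin down.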
Inflection AI has recently launched Inflection-2.5, a model that competes with all the world’s leading LLMs, including GPT-4 and Gemini. Inflection-2.5 approaches the performance level of GPT-4 but utilizes only 40% of the computing resources for training.
Inflection-2.5 is available to all Pi users today, at pi.ai, on iOS, on Android, and via the new desktop app.
Inflection AI’s previous model, Inflection-1, utilized about 4% of the training FLOPs of GPT-4 and exhibited an average performance of around 72% compared to GPT-4 across various IQ-oriented tasks.
Techstars has currently invested about two-thirds of the fund, Gavet recently told TechCrunch, adding that the bank is “an amazing partner” and “very active in our program.”
However, J.P. Morgan has yet to tell Techstars whether it will renew the partnership for an Advancing Cities 2 Fund once the initial contract expires in December, sources say. That decision was supposed to be handed down last summer so that Techstars could start fundraising and begin deploying capital in 2025.
This means the fate of the Advancing Cities programs — and of some of the roughly 20 people who work on this program at Techstars — is up in the air.