
A Milan-based deep tech startup, Ephos, raised $8.5M in a seed round led by Starlight Ventures to accelerate the development of its glass-based quantum photonic chips. The company aims to transform not just quantum computing and AI but also the broader computational infrastructure of the future.

Other participants included Collaborative Fund, Exor Ventures, 2100 Ventures, and Unruly Capital. The round also attracted angel investors such as Joe Zadeh, former Vice President at Airbnb; Diego Piacentini, former Senior Vice President at Amazon; and Simone Severini, General Manager of Quantum Technologies at Amazon Web Services.

In addition to private investment, Ephos received funding from the European Innovation Council (EIC) and €450,000 in non-dilutive financing from NATO’s Defence Innovation Accelerator (DIANA).

In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.

This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.

We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own. The story of progress will continue, and our children will be able to do things we can’t.

There is growing evidence of the effectiveness of Shampoo, a higher-order preconditioning method, over Adam in deep learning optimization tasks.

However, Shampoo’s drawbacks include additional hyperparameters and computational overhead when compared to Adam, which only maintains running averages of the gradient and its elementwise square.

SOAP: Improving and Stabilizing Shampoo using Adam.

N. Vyas, D. Morwani, R. Zhao, I. Shapira, …


Code is available in the nikhilvyas/SOAP repository on GitHub.
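For intuition, here is a minimal Python/NumPy sketch of the core SOAP idea for a single 2-D weight matrix: accumulate Shampoo’s Kronecker-factored second-moment statistics, then run a standard Adam update inside their eigenbasis. The function and state names are illustrative assumptions, not the authors’ reference implementation, and the eigendecomposition is recomputed every step for clarity, whereas the paper amortizes that cost by refreshing the basis infrequently.

```python
# Minimal sketch of the SOAP idea for one 2-D weight matrix (illustrative,
# not the authors' reference implementation; see nikhilvyas/SOAP for that).
import numpy as np

def soap_step(W, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
              shampoo_beta=0.95, eps=1e-8):
    """One SOAP-style update: Adam run in the eigenbasis of Shampoo's
    Kronecker-factored preconditioners."""
    m, n = grad.shape
    if "L" not in state:
        state.update(L=np.zeros((m, m)), R=np.zeros((n, n)),
                     exp_avg=np.zeros_like(grad),
                     exp_avg_sq=np.zeros_like(grad), t=0)
    # Shampoo-style factored second-moment statistics.
    state["L"] = shampoo_beta * state["L"] + (1 - shampoo_beta) * grad @ grad.T
    state["R"] = shampoo_beta * state["R"] + (1 - shampoo_beta) * grad.T @ grad
    # Eigenbases of the two factors (recomputed every step here for clarity).
    QL = np.linalg.eigh(state["L"])[1]
    QR = np.linalg.eigh(state["R"])[1]
    # Rotate the gradient into the preconditioner eigenbasis...
    g_rot = QL.T @ grad @ QR
    # ...and run a standard Adam update there.
    state["t"] += 1
    state["exp_avg"] = beta1 * state["exp_avg"] + (1 - beta1) * g_rot
    state["exp_avg_sq"] = beta2 * state["exp_avg_sq"] + (1 - beta2) * g_rot**2
    m_hat = state["exp_avg"] / (1 - beta1 ** state["t"])
    v_hat = state["exp_avg_sq"] / (1 - beta2 ** state["t"])
    step_rot = m_hat / (np.sqrt(v_hat) + eps)
    # Rotate the update back to the original parameter space.
    return W - lr * (QL @ step_rot @ QR.T)

# Usage: one step on a toy 4x3 weight matrix.
rng = np.random.default_rng(0)
W, state = rng.normal(size=(4, 3)), {}
W = soap_step(W, rng.normal(size=(4, 3)), state)
```

The design point this sketch captures is that Adam’s extra hyperparameters carry over unchanged; the only new knob relative to Adam is the decay rate on the factored statistics.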

Coronary artery disease (CAD) is the most common cause of disease-related death throughout the world. According to the World Health Organization, cardiovascular diseases as a group claim 17.9 million lives per year worldwide, nearly one-third of all deaths annually.

Coronary angiography is currently the best method of confirming a CAD diagnosis, but it is expensive and invasive, poses risks to patients, and is not suitable for early diagnosis and assessing disease risk.

Seeking a safer, lower-cost and more efficient diagnostic method, a research team from Beijing University of Chinese Medicine’s School of Traditional Chinese Medicine, Beijing University of Chinese Medicine’s School of Life Science, and Hunan University of Chinese Medicine’s School of Traditional Chinese Medicine has used artificial intelligence (AI) to develop a diagnostic algorithm based on tongue imaging. Their work is published in Frontiers in Cardiovascular Medicine.

For the most part, we treat electric aviation like it’s something that we’ll see in the future. I mean, batteries are expensive and heavy, and they don’t hold that much energy per unit of weight. So, compared to, say, kerosene (jet fuel), batteries take up a lot more space and weight capacity in a plane design. This means either really poor range or carrying around nothing but batteries (which isn’t very useful).
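To put rough numbers on that weight penalty, here is a quick back-of-envelope in Python. The specific-energy and efficiency figures are generic ballpark values assumed for illustration, not numbers from this article.

```python
# Back-of-envelope: why batteries cost a plane so much weight vs. kerosene.
# All figures are generic ballpark assumptions, not from the article.
JET_FUEL_WH_PER_KG = 12_000   # kerosene, roughly 43 MJ/kg
LI_ION_WH_PER_KG = 250        # a typical current lithium-ion pack

TURBINE_EFF = 0.35            # rough fuel-to-propulsion efficiency
ELECTRIC_EFF = 0.90           # rough battery-to-propeller efficiency

raw_gap = JET_FUEL_WH_PER_KG / LI_ION_WH_PER_KG
useful_gap = (JET_FUEL_WH_PER_KG * TURBINE_EFF) / (LI_ION_WH_PER_KG * ELECTRIC_EFF)

print(f"Raw specific-energy gap:    {raw_gap:.0f}x")     # ~48x
print(f"Efficiency-adjusted gap:    {useful_gap:.0f}x")  # ~19x
```

Even after crediting the electric drivetrain’s much better efficiency, the usable energy per kilogram still differs by roughly an order of magnitude, which is the whole range problem in one number.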

But that’s only true for the largest of planes. The smaller the plane, the easier it has been for companies to electrify it or even go full electric. Once you get down to unmanned planes and helicopters that carry something like a small sensor payload (cameras, etc.), you’re in a realm where all-electric aviation has been around for over a decade.

Small unmanned systems like quadcopters, though, tend to fly for only 30–45 minutes at most, while small fixed-wing remotely piloted airplanes tend to fly for maybe 1–2 hours. What if you want to fly for hours or even days to cover more ground? It turns out that there are some answers, and they usually involve solar; the back-of-envelope sketch below shows why.
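Here is a rough solar-endurance check for a small fixed-wing UAV in full sun. Every number is a generic ballpark assumption chosen for illustration, not a figure from the article.

```python
# Rough solar-endurance check for a small fixed-wing UAV in full sun.
# All numbers are generic ballpark assumptions, not figures from the article.
SOLAR_FLUX_W_M2 = 1000    # peak solar irradiance at the surface
CELL_EFF = 0.20           # typical commercial solar-cell efficiency
WING_AREA_M2 = 1.0        # small fixed-wing airframe
CRUISE_POWER_W = 120      # electrical power to hold level flight

harvested_w = SOLAR_FLUX_W_M2 * CELL_EFF * WING_AREA_M2   # 200 W
margin_w = harvested_w - CRUISE_POWER_W                   # +80 W surplus

print(f"Harvested in full sun: {harvested_w:.0f} W")
print(f"Margin over cruise:    {margin_w:+.0f} W")  # surplus charges a battery for night flight
```

When the harvested power exceeds cruise power, the surplus can recharge a battery during the day, which is how solar fixed-wing designs stretch flights from hours toward days.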

Google is testing a new API that uses machine learning models to offer real-time translation of user-entered text and to make it easier to translate web pages.

According to a proposal spotted by Bleeping Computer, the feature is being developed by Chrome’s built-in AI team and is aimed at exposing the web browser’s built-in translation functionality and the ability to download additional language models to translate text.

While Chrome and Edge already have built-in translation features, they can sometimes have issues translating web pages that have dynamic or complex content. For example, Chrome may not be able to translate all sections of an interactive website correctly.

The potential pathways through which AI could help us escape a simulated reality are both fascinating and complex. One approach could involve AI discovering and manipulating the underlying algorithms that govern the simulation. By understanding these algorithms, AI could theoretically alter the simulation’s parameters or even create a bridge to the “real” world outside the simulation.

Another approach involves using AI to enhance our cognitive and perceptual abilities, enabling us to detect inconsistencies or anomalies within the simulation. These anomalies, often referred to as “glitches,” could serve as clues pointing to the artificial nature of our reality. For instance, moments of déjà vu or inexplicable phenomena might be more than just quirks of human perception—they could be signs of the simulation’s imperfections.

While the idea of escaping a simulation is intriguing, it also raises profound ethical and existential questions. For one, if we were to confirm that we are indeed living in a simulation, what would that mean for our understanding of free will, identity, and the meaning of life? Moreover, the act of escaping the simulation could have unforeseen consequences. If the simulation is designed to sustain and nurture human life, breaking free from it might expose us to a harsher and more dangerous reality.

I expect this around 2029/2030, so about five-ish years out. Phase 1 of it will be: “Hey AI, I didn’t really like that level, mission, storyline, etc.” and it edits on the fly. Phase 2 of it will be creating DLC on the fly. And Phase 3 will be just telling an AI roughly what you want to play, and it tries to build it.


Publishing giant Electronic Arts shows a concept of the different ways users could generate their own content in a game using generative AI.

What just happened? Researchers have successfully deployed a fully autonomous robot to inspect the inside of a nuclear fusion reactor. This achievement – the first of its kind – took place over 35 days as part of trials at the UK Atomic Energy Authority’s Joint European Torus facility.

JET was one of the world’s largest and most powerful operational fusion reactors until it was recently shut down. Meanwhile, the robotic star of the show was, of course, the four-legged Spot robot from Boston Dynamics, souped up with “localization and mission autonomy solutions” from the Oxford Robotics Institute (ORI) and an “inspection payload” from UKAEA.

Spot roamed JET’s environment twice daily, using sensors to map the facility layout, monitor conditions, steer around obstacles and personnel, and collect vital data. Inspections like these normally require a human operator controlling the robot remotely; here, Spot carried them out fully autonomously.