
Starship Technologies raises $90M as its sidewalk robots pass 6M deliveries

Sidewalk delivery robot services appear to be stalling left and right, but a pioneer in the concept says it is profitable and has now raised a round of funding to scale up to meet market demand. Starship Technologies, a startup out of Estonia that was an early mover in the delivery robotics space, has picked up $90 million in funding as it works to cement its position at the top of its category.

This latest investment round is being co-led by two previous backers: Plural, the VC with roots in Estonia and London that announced a new $430 million fund last month; and Iconical, the London-based investor backed by Janus Friis, the serial entrepreneur who was a co-founder of Skype, and who is also a co-founder of Starship itself.

It brings the total raised by Starship to $230 million, with previous backers including the Finnish-Japanese firm NordicNinja, the European Investment Bank, Morpheus Ventures and TDC.

Navigation Doppler Lidar: Revolutionizing Lunar Landing

Read about NASA’s new instrument for landing on other worlds!


Landing on planetary bodies is both risky and hard, and landing humans is even riskier and harder. That is why new technology must be developed to mitigate the risks of landing large spacecraft on the Moon and the other planetary bodies we plan to keep exploring, in both the near and distant future. This is what makes Intuitive Machines' Nova-C lunar lander, scheduled to launch to the Moon on February 13 as the IM-1 mission, so vital to returning humans to the Moon. One of its NASA science payloads is the Navigation Doppler Lidar (NDL), a technology demonstration intended to help future landers navigate risky terrain and touch down safely.

Image of the Navigation Doppler Lidar, which will serve as a technology demonstration during the IM-1 mission. (Credit: NASA/David C. Bowman)

When NASA was landing robots on Mars in the 1990s and 2000s, its engineers found that radar and radio waves were insufficient for accurate landing measurements, so they had to devise a new approach to landing spacecraft on extraterrestrial worlds.

Scientists Train AI Using Headcam Footage From Human Toddler

Researchers have not only built an AI "child," but are now training AI using headcam footage from a human toddler as well.

In a press release, New York University announced that its data science researchers had strapped a camera to the head of a real, live, human toddler for 18 months to see how much an AI model could learn from it.

Most large language models (LLMs), like OpenAI’s GPT-4 and its competitors, are trained on “astronomical amounts of language input” that are many times larger than what infants receive when learning to speak a language during the first years of their lives.

AI Can Design Totally New Proteins From Scratch—It’s Time to Talk Biosecurity

Two decades ago, engineering designer proteins was a dream.

Now, thanks to AI, custom proteins are a dime a dozen. Made-to-order proteins often have specific shapes or components that give them abilities new to nature. From longer-lasting drugs and protein-based vaccines, to greener biofuels and plastic-eating proteins, the field is rapidly becoming a transformative technology.

Custom protein design depends on deep learning techniques. With large language models—the AI behind OpenAI’s blockbuster ChatGPT—dreaming up millions of structures beyond human imagination, the library of bioactive designer proteins is set to rapidly expand.

An AI Just Learned Language Through the Eyes and Ears of a Toddler

For a year and a half, a head-mounted camera worn by a toddler named Sam captured snippets of his life. He crawled around the family's pets, watched his parents cook, and cried on the front porch with grandma. All the while, the camera recorded everything he heard.

What sounds like a cute toddler home video is actually a daring concept: Can AI learn language like a child? The results could also reveal how children rapidly acquire language and concepts at an early age.

A new study in Science describes how researchers used Sam’s recordings to train an AI to understand language. With just a tiny portion of one child’s life experience over a year, the AI was able to grasp basic concepts—for example, a ball, a butterfly, or a bucket.
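The study's approach is reportedly contrastive: the model learns to associate video frames with the words heard at the same moment, with no labels beyond that co-occurrence. As a rough illustration only, and not the authors' actual code, a minimal PyTorch sketch of such a frame-utterance contrastive objective might look like the following; the encoder modules, embedding size, and temperature are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from torch import nn

class FrameTextContrastive(nn.Module):
    """Minimal CLIP-style sketch: align video frames with co-occurring utterances.

    The vision and text encoders here are hypothetical stand-ins, not the
    architecture used in the Science study.
    """

    def __init__(self, vision_encoder: nn.Module, text_encoder: nn.Module, dim: int = 256):
        super().__init__()
        self.vision_encoder = vision_encoder   # e.g. a small CNN over frames
        self.text_encoder = text_encoder       # e.g. word embeddings pooled per utterance
        self.vision_proj = nn.LazyLinear(dim)  # project both modalities into a shared space
        self.text_proj = nn.LazyLinear(dim)
        self.temperature = nn.Parameter(torch.tensor(0.07))

    def forward(self, frames, utterances):
        # Encode each modality and L2-normalise the embeddings.
        v = F.normalize(self.vision_proj(self.vision_encoder(frames)), dim=-1)
        t = F.normalize(self.text_proj(self.text_encoder(utterances)), dim=-1)

        # Similarity matrix: frame i should match utterance i, not the others in the batch.
        logits = (v @ t.T) / self.temperature
        targets = torch.arange(len(frames), device=frames.device)

        # Symmetric cross-entropy pulls matching pairs together and pushes mismatches apart.
        loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
        return loss
```

At test time, a word such as "ball" can be matched against candidate frames by comparing embeddings in the shared space, which is how a model of this kind can be probed for the basic concepts described above.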

Google’s Gemini AI Hints at the Next Great Leap for the Technology

Google has launched Gemini, a new artificial intelligence system that can seemingly understand and speak intelligently about almost any kind of prompt—pictures, text, speech, music, computer code, and much more.

This type of AI system is known as a multimodal model. It’s a step beyond just being able to handle text or images like previous algorithms. And it provides a strong hint of where AI may be going next: being able to analyze and respond to real-time information from the outside world.
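In broad strokes, a multimodal model encodes each input type into a shared representation and then reasons over all of them together. The sketch below is purely illustrative and is not Gemini's architecture, which Google has not released; the encoders, feature sizes, and fusion scheme are assumptions chosen only to show the idea of one backbone attending jointly over several modalities.

```python
import torch
from torch import nn

class TinyMultimodalModel(nn.Module):
    """Toy sketch of a multimodal model: separate encoders per modality,
    fused into one sequence for a shared transformer. Not Gemini's design."""

    def __init__(self, vocab_size: int = 32000, dim: int = 512):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, dim)
        # Assume image "patches" arrive as pre-extracted 768-dim feature vectors.
        self.image_proj = nn.Linear(768, dim)
        # Assume audio frames arrive as 128-dim spectrogram feature vectors.
        self.audio_proj = nn.Linear(128, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, text_tokens, image_patches, audio_frames):
        # Map every modality into the same embedding space, then concatenate
        # into a single sequence the transformer can attend over jointly.
        tokens = torch.cat([
            self.image_proj(image_patches),
            self.audio_proj(audio_frames),
            self.text_embed(text_tokens),
        ], dim=1)
        hidden = self.backbone(tokens)
        # Predict text tokens from the fused representation.
        return self.lm_head(hidden)
```

The design choice that matters here is the fusion step: once every modality lives in one token sequence, the same attention layers can relate a spoken phrase to a region of an image or a line of code, which is what makes real-time, mixed-input responses plausible.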

Although Gemini’s capabilities might not be quite as advanced as they seemed in a viral video, which was edited from carefully curated text and still-image prompts, it is clear that AI systems are rapidly advancing. They are heading towards the ability to handle more and more complex inputs and outputs.

A Meme’s Glimpse into the Pinnacle of Artificial Intelligence (AI) Progress in a Mamba Series: LLM Enlightenment

In the dynamic field of Artificial Intelligence (AI), the trajectory from one foundational model to another has represented a remarkable paradigm shift. The escalating series of models, including Mamba, Mamba MOE, MambaByte, and the latest approaches such as Cascade, Layer-Selective Rank Reduction (LASER), and Additive Quantization for Language Models (AQLM), has revealed new levels of cognitive power. The famous 'Big Brain' meme succinctly captures this progression, humorously illustrating the rise from ordinary competence to extraordinary brilliance as one delves into the intricacies of each language model.

Mamba

Mamba is a linear-time sequence model that stands out for its rapid inference capabilities. Foundation models are predominantly built on the Transformer architecture because of its effective attention mechanism, but Transformers run into efficiency problems on long sequences. In contrast to conventional attention-based Transformer architectures, Mamba uses structured state space models (SSMs) to address these processing inefficiencies on extended sequences.
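In rough terms, an SSM layer replaces attention with a recurrence: a hidden state is updated once per token, so compute grows linearly with sequence length rather than quadratically. The sketch below shows only the basic discretized state-space recurrence with made-up dimensions; it leaves out Mamba's selective, input-dependent parameterization and its hardware-aware parallel scan.

```python
import torch

def ssm_scan(x, A, B, C):
    """Minimal diagonal state-space recurrence: h_t = A*h_{t-1} + B*x_t, y_t = C*h_t.

    x: (batch, seq_len, dim) input sequence
    A, B, C: per-channel parameters of shape (dim, state_size)
    Runs in O(seq_len) time per channel, unlike O(seq_len^2) attention.
    """
    batch, seq_len, dim = x.shape
    state = torch.zeros(batch, dim, A.shape[-1])
    outputs = []
    for t in range(seq_len):
        xt = x[:, t, :].unsqueeze(-1)          # (batch, dim, 1)
        state = A * state + B * xt             # element-wise state update per channel
        outputs.append((state * C).sum(-1))    # read out one value per channel
    return torch.stack(outputs, dim=1)         # (batch, seq_len, dim)

# Toy usage with random parameters; Mamba additionally makes the parameters
# functions of the input, which is its "selective" mechanism.
x = torch.randn(2, 16, 8)
A = torch.rand(8, 4) * 0.9   # decay factors below 1 keep the recurrence stable
B = torch.randn(8, 4)
C = torch.randn(8, 4)
y = ssm_scan(x, A, B, C)
print(y.shape)  # torch.Size([2, 16, 8])
```

The per-token update is what lets this family of models stream arbitrarily long inputs with a fixed-size state, which is the efficiency argument the Mamba line of work builds on.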