
Luxeed, a young, premium EV brand developed jointly by Chery Automobile and Chinese tech giant Huawei, has shared new details of its flagship sedan, the S7. In the months since first teasing the Tesla Model S competitor and collecting an influx of pre-orders, Chery and Huawei have now shared trim variants, pricing, and, of course, range, which tops out at an impressive 855 km.

Luxeed is a new all-electric brand in China, and when we say “new,” we mean we didn’t even know the official name of the joint effort between China’s Chery and Huawei until about four months ago.

In that time, we’ve seen Huawei tease its first model, learned its core specs from a regulatory filing in China, and watched the S7 sedan’s first unveiling three weeks ago, ahead of the official launch that took place today.

Save this for later! From the Moon to the planets to the stars, read about the celestial events happening in December 2023!


Labroots recently shared key celestial events for November 2023, which offered skywatchers an opportunity to witness 16 days of events including numerous meteor showers, comets, and planetary encounters. Now, Labroots wishes to share 22 days of celestial events for December 2023 that will not disappoint astronomy fans. Each event is labeled in all caps with its type (e.g., MOON, PLANET, COMET) to give skywatchers extra incentive to catch these incredible events!

December 1 (MOON and STAR): The Earth’s Moon will pass within 1 to 2 degrees of the star Pollux, which is located just under 34 light-years from Earth and resides within the constellation Gemini.

December 2 (MOON, STARS, and METEOR SHOWER): The Earth’s Moon will pass within 3 to 4 degrees of the Beehive Cluster, an open cluster of stars located approximately 577 light-years from Earth and roughly 15 light-years in diameter. Additionally, the Phoenicids meteor shower will be at its peak, with a variable number of observable meteors per hour.

“We want the robot to ask for enough help such that we reach the level of success that the user wants. But meanwhile, we want to minimize the overall amount of help that the robot needs,” said Allen Ren.


A recent study presented at the 7th Annual Conference on Robot Learning examines a new method for teaching robots to ask for further instructions when carrying out tasks, with the goal of improving robotic safety and efficiency. The study was conducted by a team of engineers from Google and Princeton University and holds the potential to help design and build better-functioning robots that mirror human traits, such as humility. Engineers have recently begun using large language models, or LLMs (the technology that underpins ChatGPT), to make robots more human-like, but this can also come with drawbacks.

“Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don’t know,” said Dr. Anirudha Majumdar, who is an assistant professor of mechanical and aerospace engineering at Princeton University and a co-author on the study.

For the study, the researchers used this LLM-based method with robotic arms in laboratories in New York City and Mountain View, California. In the experiments, the robots were asked to perform a series of tasks, such as placing bowls in a microwave or rearranging items on a counter. The LLM assigned probabilities to the candidate actions based on the instructions, and the robot asked for help whenever its uncertainty crossed a set threshold. For example, a human would ask the robot to place one of two bowls in the microwave without saying which one; the resulting ambiguity would trigger the LLM, causing the robot to ask for additional help.
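The article doesn’t spell out the exact mechanism, but a minimal sketch of the threshold-triggered “ask for help” idea might look like the following Python. The function names, the threshold value, and the fake scoring logic are hypothetical illustrations, not the study’s actual implementation.

```python
# Minimal sketch of "act when confident, ask when unsure" (hypothetical names and values).
CONFIDENCE_THRESHOLD = 0.8  # assumed; tuned to the level of success the user wants


def score_options(instruction: str, options: list[str]) -> dict[str, float]:
    """Stand-in for an LLM call that assigns a probability to each candidate action."""
    # A real system would query a language model here; we fake an ambiguous case.
    if "one of" in instruction:
        return {option: 1.0 / len(options) for option in options}
    return {option: (0.9 if i == 0 else 0.1 / max(len(options) - 1, 1))
            for i, option in enumerate(options)}


def act_or_ask(instruction: str, options: list[str]) -> str:
    """Execute the most likely action, or ask the human when no option is confident enough."""
    probs = score_options(instruction, options)
    best_option, best_prob = max(probs.items(), key=lambda kv: kv[1])
    if best_prob >= CONFIDENCE_THRESHOLD:
        return f"Executing: {best_option}"
    # Uncertainty is too high, so ask for clarification instead of guessing.
    return "Which did you mean: " + " or ".join(probs) + "?"


print(act_or_ask(
    "place one of the bowls in the microwave",
    ["put the plastic bowl in the microwave", "put the metal bowl in the microwave"],
))
```

In this toy version, an unambiguous instruction is executed immediately, while the bowl example above leaves both options equally likely, falls below the confidence threshold, and produces a clarifying question, mirroring the behavior described in the study.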

LONDON, Nov 30 (Reuters) — The president of tech giant Microsoft (MSFT.O) said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away.

OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company’s board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders.

Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

Amid the tumultuous week of November 21 for OpenAI, a series of uncontrolled developments, each with its own significance, few would have predicted the eventual outcome: the reinstatement of Sam Altman as CEO of OpenAI, with a new board in tow, all in five days.

While the official grounds for Sam Altman’s lack of transparency with the board, and the ultimate distrust that led to his ousting, remain unclear, what was apparent was Microsoft’s complete backing of Altman and the ensuing lack of support for the original board and its decision. It now leaves everyone to question why a board that had control of the company was unable to effectively oust an executive despite its members’ legitimate safety concerns, and why a structure put in place to mitigate the risk of unilateral control over artificial general intelligence was usurped by an investor, the very entity the structure was designed to guard against.