
Analysts are trying to estimate the value of Tesla’s Supercharger network as the NACS connector becomes the North American standard, a shift that could widen Tesla’s charging lead.

One of the top Tesla analysts believes it could be worth more than $100 billion.

The Supercharger network is the only global EV fast-charging network, and in North America, it is by far the most extensive and reliable.

“If someone is a good bulls—er, they are likely quite smart,” says Martin Turpin, a graduate student at the Reasoning and Decision Making Lab at the University of Waterloo and co-lead on the study recently published in the scientific journal Evolutionary Psychology.

Turpin and his colleagues found that people who are better at producing believable explanations for concepts, even when those explanations aren’t based on fact, typically score better on intelligence tests than those who struggle to “bulls—,” as the study puts it. “However, it is not the case that those who are not good bulls—ers are less intelligent,” Turpin says.

Apple today introduced the first version of the visionOS software, debuting the visionOS 1.0 Developer Beta. The beta arrives alongside the launch of the visionOS software development kit (SDK), which will allow third-party developers to build apps for the Vision Pro headset.

The SDK can be accessed through Xcode 15 beta 2, and while developers do not yet have access to the Vision Pro headset itself, Apple will begin allowing testing next month.

As Microsoft eagerly adds ChatGPT to Bing, Word, Edge, and dozens of other products, I can’t help thinking of Clippy. The old assistant could be seen as a precursor to modern, collaborative AI. Clearly, someone else had the same thought, but they actually did something with it.

Roboticist David Packman created an animatronic Clippy that answers voice prompts using ChatGPT. Like Alexa or Siri, it listens for a wake word (“Hey, Clippy”) and responds accordingly. Voice prompts and Clippy’s responses are processed through Azure Speech Services, and the whole thing runs on a Raspberry Pi computer.
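For a sense of how such a pipeline fits together, here is a minimal Python sketch under stated assumptions: Azure’s Speech SDK transcribes the microphone, the OpenAI chat API generates a reply, and Azure synthesizes the answer back to the speaker. The wake phrase, the persona prompt, and the environment variable names are illustrative guesses; Packman’s actual build, including its wake-word handling and the animatronics, is certainly more involved.

```python
import os

import azure.cognitiveservices.speech as speechsdk
from openai import OpenAI

WAKE_PHRASE = "hey clippy"  # assumed trigger; the real build may use a trained keyword model

# Azure Speech handles both transcription (mic -> text) and synthesis (text -> speaker).
speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["AZURE_SPEECH_KEY"],
    region=os.environ["AZURE_SPEECH_REGION"],
)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def listen_once() -> str:
    """Capture one utterance from the default microphone and return its transcript."""
    result = recognizer.recognize_once()
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        return result.text
    return ""


def ask_chatgpt(question: str) -> str:
    """Forward the transcribed question to the chat completions API."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # The persona prompt is an illustrative guess, not Packman's actual prompt.
            {"role": "system", "content": "You are Clippy, a cheerful animatronic assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


while True:
    heard = listen_once().lower()
    # Simplification: scan the transcript for the wake phrase rather than using the
    # SDK's KeywordRecognizer, which needs a keyword model trained in Speech Studio.
    if WAKE_PHRASE in heard:
        question = heard.split(WAKE_PHRASE, 1)[1].strip() or listen_once()
        if question:
            synthesizer.speak_text_async(ask_chatgpt(question)).get()
```

Scanning the transcript for the wake phrase keeps the sketch short, but it means the microphone streams to the cloud continuously; a production build would spot the keyword on-device first, which is what the SDK’s dedicated keyword recognizer is for.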

Vladimir Putin vowed to deploy his hypersonic “Satan-2” nuclear-capable missiles in a chilling new threat to the West.

The Russian leader said that the new generation of Sarmat intercontinental ballistic missiles, thought to be the world’s most powerful, would soon be deployed for combat duty.

In a speech to newly graduated soldiers, Putin warned: “In the near future, the first launchers of the Sarmat complex with a new heavy missile will go on combat duty.”

Matching the sight and sound of speech—a face to a voice—in early infancy is an important foundation for later language development.

This ability, known as intersensory processing, is an essential pathway to learning new words. According to a recent study published in the journal Infancy, the degree of success at intersensory processing at only 6 months old can predict vocabulary and language outcomes at 18 months, 2 years and 3 years old.

“Adults are highly skilled at this, but infants must learn to relate what they see with what they hear. It’s a tremendous job and they do it very early in their development,” said lead author Elizabeth V. Edgar, who conducted the study as an FIU psychology doctoral student and is now a postdoctoral fellow at the Yale Child Study Center. “Our findings show that intersensory processing has its own independent contribution to language, over and above other established predictors, including parent language input and socioeconomic status.”