
Elon Musk Just Made a Big Announcement About Tesla’s Next-Gen Vehicle

Speaking at a Morgan Stanley conference, Tesla CEO Elon Musk reiterated the promise that the company’s next-generation vehicle will operate mostly in autonomous mode. The event focused primarily on Twitter, but Musk fielded questions about Tesla and SpaceX during the latter half. One question concerned the potential capabilities of the company’s upcoming vehicle, which was previously discussed at Tesla’s Investor Day event. The promise of a mostly autonomous vehicle is not new for Tesla, but it continues to generate interest and speculation from investors and the public alike.

Tesla’s upcoming vehicle, which will be manufactured at the company’s new factory in Mexico, is anticipated to operate in “almost entirely autonomous mode.” Tesla has been developing Full Self-Driving for years, and Musk has stated for at least four years that Teslas will be able to drive themselves. Despite impressive strides in this area, Tesla’s vehicles are still not capable of fully autonomous driving.

Well, after reading this, I came across one comment under a Teslarati article that said: “We need a cheap practical everyday driver that my mom can drive. Not another autonomous dream car. I really hope Tesla has a plan B for when the next gen car is ready and FSD is not.” Now, some of you may say: if you want a cheap EV, go get a Chevy Bolt. But here is the thing: if Tesla wants to sell 20 million Teslas per year, it is going to have to appeal to the masses.


This is Armen Hareyan from Torque News. Please follow us at https://twitter.com/torquenewsauto on Twitter and https://www.torquenews.com/ for daily automotive news.


The future is now: Elon Musk says Neuralink is ready for human testing

Elon Musk’s company Neuralink has developed a technology that can link human brains to computers, and according to Musk, it is now ready for human testing. This groundbreaking technology has the potential to revolutionize the way we communicate and interact with machines, and could pave the way for new treatments for neurological disorders. With the announcement that Neuralink is ready for human testing, the future of human-computer integration is closer than ever before.


When will a computer surpass the human brain?

This is a clip from Technocalyps, a three-part documentary about the exponential growth of technology and transhumanism, featuring roboticist Hans Moravec. The documentary first came out in 1998, and a new version was made in 2006. This is how the filmmakers themselves describe what the movie is about:

“The accelerating advances in genetics, brain research, artificial intelligence, bionics and nanotechnology seem to converge to one goal: to overcome human limits and create higher forms of intelligent life and to create transhuman life.”

You can watch the whole documentary here: https://www.youtube.com/watch?v=fKvyXBPXSbk. Or, if you’re more righteous than I am, you can order the DVD on technocalyps.com.

6 Cool Things You Can Do With Bing Chat AI

Microsoft revealed an AI-powered Bing chatbot in February, dubbing it “the new Bing.” But what can you actually do with the new Bing, and where does it fall short?

The new Bing is impressive for an automated tool, as it can not only answer questions in full sentences (or longer paragraphs), but it can also draw information from recent web results. The web features give it an edge over ChatGPT, which has limited knowledge of current events and facts, but it still has problems providing factual answers or helpful responses. That significantly affects its usefulness as a tool, though Bing’s ability to cite sources can help you double-check its responses.

Engineers use psychology, physics, and geometry to make robots more intelligent

Robots are all around us, from drones filming videos in the sky to machines serving food in restaurants and defusing bombs in emergencies. Slowly but surely, robots are improving the quality of human life by augmenting our abilities, freeing up time, and enhancing our personal safety and well-being. While existing robots are becoming more proficient at simple tasks, handling more complex requests will require further development in both mobility and intelligence.

Columbia Engineering and Toyota Research Institute computer scientists are delving into psychology, physics, and geometry to create algorithms so that robots can adapt to their surroundings and learn how to do things independently. This work is vital to enabling robots to address new challenges stemming from an aging society and provide better support, especially for seniors and people with disabilities.

A longstanding challenge in computer vision is object permanence, a well-known concept in psychology that involves understanding that the existence of an object is separate from whether it is visible at any moment. It is fundamental for robots to understand our ever-changing, dynamic world. But most applications in computer vision ignore occlusions entirely and tend to lose track of objects that become temporarily hidden from view.

Scientists can now read your MIND: AI turns people’s thoughts into images with 80% accuracy

Artificial intelligence can already create images from text prompts, but scientists have now unveiled a gallery of pictures the technology produced by reading brain activity. The new AI-powered algorithm reconstructed around 1,000 images, including a teddy bear and an airplane, from brain scans with 80 percent accuracy.

Quantum computing is the key to consciousness

With the rapid development of chatbots and other AI systems, questions about whether they will ever gain true understanding, become conscious, or even develop a sense of agency have become more pressing. When it comes to making sense of these qualities in humans, our capacity for counterfactual thinking is key. The existence of alternative worlds where things happen differently, however, is not just an exercise in imagination: it is a key prediction of quantum mechanics. Perhaps our brains are able to ponder how things could have been because, in essence, they are quantum computers accessing information from alternative worlds, argues Tim Palmer.

Ask a chatbot “How many prime numbers are there?” and it will surely tell you that there are infinitely many. Ask the chatbot “How do we know?” and it will reply that there are many ways to show this, the original going back to the mathematician Euclid of ancient Greece. Ask the chatbot to describe Euclid’s proof and it will answer correctly.

Of course, the chatbot got all this information from the internet. Additional software in the computer can check that each step in Euclid’s proof is valid and hence confirm that the proof is a good one. But the computer doesn’t understand the proof. Understanding is a kind of Aha! moment, when you see why the proof works, and why it wouldn’t work if a minor element in it were different (for example, Euclid’s proof doesn’t work if any number but 1 is added when creating the number Q). Chatbots don’t have Aha! moments, but we do. Why?
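The construction at issue here is easy to check mechanically. A small Python sketch (written for illustration, not taken from any chatbot transcript) builds Euclid’s number Q from a finite list of primes and shows why adding 1, specifically, keeps every listed prime from dividing Q:

```python
# Euclid's construction, checked mechanically: given any finite list of
# primes, Q = (product of the list) + 1 leaves remainder 1 when divided
# by every prime on the list, so Q's prime factors cannot all be on it.
from math import prod

def euclid_witness(primes, offset=1):
    """Return Q = prod(primes) + offset, plus Q's remainder mod each prime."""
    q = prod(primes) + offset
    return q, [q % p for p in primes]

q, remainders = euclid_witness([2, 3, 5, 7])
print(q, remainders)  # 211 [1, 1, 1, 1] -- no listed prime divides Q

# Why the "+1" matters: with an offset sharing a factor with the product,
# a listed prime divides Q and the argument collapses.
q2, remainders2 = euclid_witness([2, 3, 5, 7], offset=2)
print(q2, remainders2)  # 212 [0, 2, 2, 2] -- 2 divides Q, proof fails
```

Running the check is exactly the kind of step-by-step verification a computer can do; seeing why the remainder of 1 is the whole trick is the Aha! moment the passage is pointing at.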
