
Gonzalez thinks that Tesla taxis could help reinvigorate the city’s yellow-cab industry, which has taken a major hit from ride-hailing services like Uber, Via, and Lyft. He also predicts that the city could, for sustainability reasons, start mandating electric cabs, so he’s looking to get ahead of the curve, even if the commercial charging infrastructure isn’t quite there yet.


Drive Sally plans to bring hundreds of Teslas to New York’s streets in the near future, but for now, the company is still working out the kinks. Gonzalez suspects that the EVs may be better suited to for-hire “black cars” than yellow cabs, and he also said that the more-spacious Model Y would likely work better as a cab than the Model 3, though it’s still too expensive.

The year is coming to a close, and it’s safe to say Elon Musk’s prediction that his company would field one million “robotaxis” by the end of 2020 isn’t going to come true. In fact, so far, Tesla has managed to produce exactly zero self-driving vehicles. And we can probably call off the singularity too. GPT-3 has been impressive, but the closer machines get to aping human language, the easier it is to see just how far away from us they really are.

So where does that leave us, ultimately, when it comes to the future of AI? That depends on your outlook. Media hype and big tech’s advertising machine have set us up for heartbreak when we compare the reality of 2020 to our 2016-era dreams of fully autonomous flying cars and hyper-personalized digital assistants capable of managing the workload of our lives.

But, if you’re gauging the future of AI from a strictly financial, marketplace point of view, there’s an entirely different outlook to consider. American rock band Timbuk 3 put it best when they sang “the future’s so bright, I gotta wear shades.”

SAN FRANCISCO – L3Harris Technologies will help the U.S. Defense Department extract information and insight from satellite and airborne imagery under a three-year U.S. Army Research Laboratory contract.

L3Harris will develop and demonstrate an artificial intelligence and machine learning interface for Defense Department applications under the multimillion-dollar contract announced Oct. 26.

“L3Harris will assist the Department of Defense with the integration of artificial intelligence and machine learning capabilities and technologies,” Stacey Casella, general manager for L3Harris’ Geospatial Processing and Analytics business, told SpaceNews. L3Harris will help the Defense Department embed artificial intelligence and machine learning in its workflows “to ultimately accelerate our ability to extract usable intelligence from the pretty expansive set of remotely sensed data that we have available today from spaceborne and airborne assets,” she added.

What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.

Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”

Chiang’s soulful short stories have earned him kudos from the likes of The New Yorker, which has called him “one of the most influential science-fiction writers of his generation.” During this year’s pandemic-plagued summer, he joined the Museum of Pop Culture’s Science Fiction and Fantasy Hall of Fame. And this week, he’s receiving an award from the Arthur C. Clarke Foundation for employing imagination in service to society.

Circa 2019.



📝 The paper “Emergent Tool Use from Multi-Agent Interaction” is available here:

A group of five companies, including the Japanese unit of IBM Corp, is developing an artificial intelligence suitcase to assist visually impaired people in traveling independently, with a pilot test of a prototype conducted at an airport in Japan earlier this month.

The small navigation robot, which is able to plan an optimal route to a destination based on the user’s location and map data, uses multiple sensors to assess its surroundings and AI functionality to avoid bumping into obstacles, according to the companies.
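The companies haven’t published the planner’s internals, but route planning over map data of this kind is commonly framed as graph search. Below is a minimal, purely illustrative A* sketch on a toy occupancy grid; the grid, start, and goal are made-up placeholders, not anything from the AI suitcase itself.

import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (0 = free cell, 1 = obstacle).

    Purely illustrative; the AI suitcase's actual planner is not public.
    """
    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f-score, cost, node, path)
    visited = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(
                    open_set,
                    (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no route to the goal

# Toy map: plan a route from the top-left corner to the bottom-right corner.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 2)))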

During the pilot test held on Nov. 2, the AI suitcase successfully navigated to an All Nippon Airways departure counter after receiving a command from Chieko Asakawa, a visually impaired IBM Fellow who is overseeing the product’s development.

This article was published as a part of the Data Science Blogathon.

Introduction

Computer vision has moved beyond the emerging stage, and the results are incredibly useful in a wide range of applications. It is in our mobile phone cameras, which are able to recognize faces. It is in self-driving cars, which recognize traffic signals, signs, and pedestrians. It is also in industrial robots, which use it to monitor problems and navigate around human co-workers.
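As a small, concrete illustration of the face example above, here is a minimal sketch using OpenCV’s bundled Haar cascade detector; the image filename is a placeholder, and detecting a face is of course only the first step toward recognizing whose face it is.

import cv2

# Load OpenCV's pre-trained Haar cascade for frontal faces
# (it ships with the opencv-python package).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

# "photo.jpg" is a placeholder path; substitute any local image.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one.
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")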

You’ve probably heard us say this countless times: GPT-3, the gargantuan AI that spews uncannily human-like language, is a marvel. It’s also largely a mirage. You can tell with a simple trick: Ask it the color of sheep, and it will suggest “black” as often as “white”—reflecting the phrase “black sheep” in our vernacular.
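GPT-3 itself is only reachable through OpenAI’s paid API, but the same kind of probe can be sketched with an openly available masked language model from the Hugging Face transformers library. The model here is a stand-in, so its exact outputs will differ from the GPT-3 anecdote, but the point is the same: the ranking it produces reflects the statistics of text, not the look of real sheep.

from transformers import pipeline

# BERT as an openly available stand-in; GPT-3 is only accessible via API.
# The fill-mask probe shows which words the model considers most likely,
# which mirrors co-occurrence statistics in its training text rather than
# anything the model has ever "seen."
fill = pipeline("fill-mask", model="bert-base-uncased")

for result in fill("The sheep was [MASK]."):
    print(f"{result['token_str']:>12}  {result['score']:.3f}")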

That’s the problem with language models: because they’re only trained on text, they lack common sense. Now researchers from the University of North Carolina, Chapel Hill, have designed a new technique to change that. They call it “vokenization,” and it gives language models like GPT-3 the ability to “see.”

It’s not the first time people have sought to combine language models with computer vision. This is actually a rapidly growing area of AI research. The idea is that both types of AI have different strengths. Language models like GPT-3 are trained through unsupervised learning, which requires no manual data labeling, making them easy to scale. Image models like object recognition systems, by contrast, learn more directly from reality. In other words, their understanding doesn’t rely on the kind of abstraction of the world that text provides. They can “see” from pictures of sheep that they are in fact white.
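The UNC team’s full “vokenization” pipeline is more involved, but the core move, assigning each token a related image (a “voken”) by similarity in a shared embedding space so the language model can also be trained against a visual signal, can be sketched roughly as follows. Every model, dimension, and tensor below is an illustrative placeholder rather than the paper’s actual setup.

import torch

# Illustrative placeholders: in the real system these embeddings come from a
# trained token encoder and an image encoder mapped into a shared space.
vocab_size, num_images, dim = 1000, 500, 256
token_embeddings = torch.randn(vocab_size, dim)
image_embeddings = torch.randn(num_images, dim)

def vokenize(token_ids):
    """Assign each token the index of its most similar image (its 'voken')."""
    tok = torch.nn.functional.normalize(token_embeddings[token_ids], dim=-1)
    img = torch.nn.functional.normalize(image_embeddings, dim=-1)
    similarity = tok @ img.T          # cosine similarity, tokens x images
    return similarity.argmax(dim=-1)  # best-matching image index per token

token_ids = torch.tensor([3, 42, 7])  # a toy "sentence" of token ids
vokens = vokenize(token_ids)

# During pretraining, these voken indices act as extra labels: alongside the
# usual masked-word objective, the model also predicts each token's voken,
# tying its text representations to visual context.
print(vokens)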