It is no secret that Apple has been working on electric vehicle technology for five or six years now. Codenamed Project Titan, the project employs many ex-employees of renowned automobile brands such as Tesla, Land Rover, and Aston Martin. Recently, there were rumours of Apple linking up with TSMC (Taiwan Semiconductor Manufacturing Company) to produce self-driving chips for its planned vehicles.

It was unclear until now whether Apple would manufacture the vehicles on its own or act as a software provider for existing automobile brands. Now, however, there are reports that the tech company is in early talks with the Hyundai Motor Group, among others.

A Hyundai Motors representative confirmed yesterday that the South Korean automobile company is in discussions with Apple. Of course, ever since the tech company announced its intention to develop an electric vehicle, it has been in talks with a number of global manufacturers. However, Hyundai is one of the first major names to have come up.

Despite the inherent challenges that voice interaction may create, researchers at the Penn State College of Information Sciences and Technology recently found that deaf and hard-of-hearing users regularly use smart assistants like Amazon’s Alexa and Apple’s Siri in homes, in workplaces, and on mobile devices.

The work highlights a clear need for more inclusive design, and presents an opportunity for deaf and hard-of-hearing users to have a more active role in the research and development of new systems, according to Johnna Blair, an IST doctoral student and member of the research team.

“As smart assistants become more common, are preloaded on every smartphone, and continue to provide benefits to the user beyond just the ease of voice activation, it’s important to understand how deaf and hard-of-hearing users have made smart assistants work for them and the realistic challenges they continue to face,” said Blair.

Over the past few years, computer scientists have developed increasingly advanced and sophisticated artificial intelligence (AI) tools that can tackle a wide variety of tasks. These include generative adversarial networks (GANs), machine-learning models that can learn to generate new data, including text, audio files, or images. Some of these models can also be tailored for creative purposes, for instance, to create unique drawings, songs, or poems.
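Since this entry leans on GANs, a minimal sketch of the adversarial training loop may help. The PyTorch toy below learns a one-dimensional Gaussian rather than images or calligraphy, and every architecture and hyperparameter choice in it is illustrative only, not taken from the paper discussed here:

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator maps random noise to a sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: Gaussian around 3
    fake = G(torch.randn(64, latent_dim))   # generator's current guesses

    # Train the discriminator to label real samples 1 and fakes 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator into answering 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(G(torch.randn(1000, latent_dim)).mean().item())  # drifts toward 3.0
```

The two networks are trained against each other: the discriminator gets better at spotting fakes, which forces the generator to produce samples ever closer to the real distribution.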

Researchers at Tongji University in Shanghai, China and the University of Delaware in the US have recently created a GAN-based model that can generate abstract artworks inspired by Chinese calligraphy. The term Chinese calligraphy refers to the artistic form in which Chinese characters were traditionally written.

“In 2019, we collaborated with a restaurant based in Shanghai to showcase some AI technologies for better customer engagement and experience,” Professor Harry Jiannan Wang, one of the researchers who carried out the study, told TechXplore. “We then had the idea to use AI technologies to generate personalized abstract art based on the dishes customers order and present the artwork to entertain customers while they wait for their meals to be served.”

Circa 2019


MIT’s Biomimetic Robotics lab took a whole herd of its new ‘mini cheetah’ robots out for a group demonstration on campus recently, and the result is an adorable, impressive display of the current state of robotic technology in action.

The school’s students are seen coordinating the actions of nine of the dog-sized robots as they run through a range of activities, including coordinated movements, backflips, springing in slow motion from under piles of fall leaves, and even playing soccer.

The mini cheetah weighs just 20 lbs, and its design was revealed for the first time earlier this year by a team of robot developers at MIT’s Department of Mechanical Engineering. The mini cheetah is a shrunk-down version of the Cheetah 3, a much larger robot that is more expensive to produce, far less light on its feet, and not nearly as customizable.

Elon Musk’s Neuralink has a straightforward outlook on artificial intelligence: “If you can’t beat ’em, join ’em.” The company means that quite literally: it’s building a device that aims to connect our brains with electronics, which would, in theory, enable us to control computers with our thoughts.

But how? What material would companies like Neuralink use to connect electronics with human tissue?

One potential solution was recently revealed at the American Chemical Society’s Fall 2020 Virtual Meeting & Expo. A team of researchers from the University of Delaware presented a new biocompatible polymer coating that could help devices better fuse with the brain.

Deep learning has come a long way since the days when it could only recognize handwritten characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars.

Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err.

These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: adversarial examples can cause machine learning models to fail in unpredictable ways or leave them vulnerable to cyberattack.
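As one concrete illustration of how fragile these models can be, here is a hedged sketch of the fast gradient sign method (FGSM), one classic recipe for crafting adversarial examples. The tiny classifier and random “image” below are placeholders, not any system mentioned above:

```python
import torch
import torch.nn as nn

# Placeholder classifier and input; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image" in [0, 1]
y = torch.tensor([3])                             # its correct label

# The gradient of the loss with respect to the input reveals which pixel
# nudges increase the model's error the most.
loss_fn(model(x), y).backward()

epsilon = 0.05  # perturbation budget: tiny to humans, disruptive to the model
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print((x_adv - x).abs().max().item())  # every pixel moved by at most epsilon
```

The perturbed input looks essentially identical to a person, yet a small, carefully aimed shift of every pixel is often enough to flip the model’s prediction.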

Researchers at Carnegie Mellon developed an alternative method: an AI-based approach that mines patent and research databases for ideas that could be combined into interesting solutions to specific problems. Their system uses analogies to connect work from two seemingly distinct areas, which they believe makes innovation faster and much cheaper.
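The article doesn’t detail the CMU pipeline, but the general pattern (embed documents from two corpora in one vector space, then surface the closest cross-domain pairs as candidate analogies) can be sketched in a few lines. The toy corpora below are invented examples, and TF-IDF is a deliberately simple stand-in for whatever representations the actual system uses:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Two tiny invented corpora standing in for patent and research databases.
patents = [
    "a one-way valve that regulates fluid flow under varying pressure",
    "a hinge mechanism that folds flat panels into a compact stack",
]
papers = [
    "origami-inspired folding structures for deployable spacecraft panels",
    "modeling blood flow through heart valves under pulsatile pressure",
]

# Embed everything in one shared vector space, then compare across corpora.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(patents + papers)
similarity = cosine_similarity(vectors[: len(patents)], vectors[len(patents):])

# For each patent, surface the most similar paper as a candidate analogy.
for i, row in enumerate(similarity):
    j = row.argmax()
    print(f"patent {i} <-> paper {j}  (cosine similarity {row[j]:.2f})")
```

In this toy run, the valve patent pairs with the blood-flow paper and the hinge patent with the origami paper: exactly the kind of cross-domain match a human might then evaluate as a genuine analogy.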

Augmented creativity

What we’re witnessing is the emergence of something called “augmented creativity,” in which humans use AI to help them understand the deluge of data. Early prototypes highlight the important role humans can, and should, play in making sense of the suggestions proposed by the AI.

Meanwhile, a report from MIT’s task force on the future of work acknowledges that while fears of an imminent jobs apocalypse have been over-hyped, the way technology has been deployed over recent decades has polarized the economy, with growth in both white-collar work and low-paid service work at the expense of middle-tier occupations like receptionists, clerks, and assembly-line workers.

This is not an inevitable consequence of technological change, though, say the authors. The problem is that the spoils from technology-driven productivity gains have not been shared equally. The report notes that while US productivity has risen 66 percent since 1978, compensation for production workers and those in non-supervisory roles has risen only 10 percent.

“People understand that automation can make the country richer and make them poorer, and that they’re not sharing in those gains,” economist David Autor, a co-chair of the task force, said in a press release. “We need to restore the synergy between rising productivity and improvements in labor market opportunity.”