Archive for the ‘robotics/AI’ category: Page 1416

Jan 13, 2021

Flexible thermoelectric devices enable energy harvesting from human skin

Posted by in categories: energy, robotics/AI, wearables

A thermoelectric device is an energy-conversion device that uses the voltage generated by the temperature difference between the two ends of a material; it is capable of converting heat energy, such as waste heat from industrial sites, into electricity that can be used in daily life. Existing thermoelectric devices are rigid because they are composed of hard metal-based electrodes and semiconductors, which hinders full absorption of heat from uneven surfaces. Researchers have therefore focused recent studies on developing flexible thermoelectric devices capable of generating energy in close contact with heat sources such as human skin and hot water pipes.
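
For a rough sense of the numbers involved, the sketch below applies the open-circuit Seebeck relation V = S·ΔT and the matched-load power limit P = V²/(4R). The Seebeck coefficient, couple count, and internal resistance used here are illustrative placeholders, not measurements of the KIST device.

```python
# Back-of-envelope thermoelectric output, assuming the open-circuit Seebeck
# relation V = S * dT and maximum power transfer into a matched load.
# All values below are illustrative, not measurements of the KIST device.

def teg_matched_power(seebeck_uV_per_K, n_couples, delta_T_K, internal_ohm):
    """Return (open-circuit voltage in V, matched-load power in W)."""
    v_oc = seebeck_uV_per_K * 1e-6 * n_couples * delta_T_K  # open-circuit voltage
    p_max = v_oc ** 2 / (4.0 * internal_ohm)                # P = V^2 / (4R) when R_load = R_internal
    return v_oc, p_max

if __name__ == "__main__":
    # Example: a small skin-mounted module seeing a ~5 K temperature difference.
    v, p = teg_matched_power(seebeck_uV_per_K=200, n_couples=100, delta_T_K=5, internal_ohm=10)
    print(f"open-circuit voltage ~ {v*1e3:.1f} mV, matched-load power ~ {p*1e6:.1f} uW")
```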

The Korea Institute of Science and Technology (KIST) announced that a collaborative research team led by Dr. Seungjun Chung from the Soft Hybrid Materials Research Center and Professor Yongtaek Hong from the Department of Electrical and Computer Engineering at Seoul National University (SNU, President OH Se-Jung) developed flexible thermoelectric devices with high power-generation performance by maximizing flexibility and heat-transfer efficiency. The research team also presented a plan for mass production based on an automated process that includes a printing step.

The heat-transfer efficiency of the substrates previously used in research on flexible thermoelectric devices is low because of their very low thermal conductivity. Their heat-absorption efficiency is also poor: lacking flexibility, they form a heat-shielding layer (e.g., of air) when placed against a heat source. To address this issue, organic-material-based thermoelectric devices with high flexibility have been under development, but their application in wearables is difficult because their performance is significantly lower than that of existing inorganic-material-based rigid thermoelectric devices.

Jan 12, 2021

Google trained a trillion-parameter AI language model

Posted by in category: robotics/AI

Researchers at Google claim to have trained a natural language model containing over a trillion parameters.
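
As a rough sense of scale (a back-of-envelope sketch, not a description of Google's actual training setup), simply storing that many weights runs into terabytes:

```python
# Rough storage arithmetic for a model with 1e12 parameters; the bytes-per-parameter
# values are generic numeric precisions, not details of Google's model.
PARAMS = 1_000_000_000_000  # one trillion parameters

for name, bytes_per_param in [("float32", 4), ("float16/bfloat16", 2), ("int8", 1)]:
    terabytes = PARAMS * bytes_per_param / 1e12
    print(f"{name:>16}: ~{terabytes:.0f} TB just to hold the weights")
```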

Jan 12, 2021

GM shares hit record high as automaker reveals electric van and delves into flying cars

Posted by in categories: robotics/AI, sustainability, transportation

The potential foray into “personal air mobility” was announced as part of Cadillac’s portfolio of luxury vehicles and EVs. It included an autonomous shuttle and an electric vertical takeoff and landing (eVTOL) aircraft, more commonly known as a flying car or air taxi.

Michael Simcoe, vice president of GM global design, said each concept reflected “the needs and wants of the passengers at a particular moment in time and GM’s vision of the future of transportation.”

“This is a special moment for General Motors as we reimagine the future of personal transportation for the next five years and beyond,” Simcoe said.

Jan 12, 2021

Samsung’s Bot Handy is kind of like a first generation robot butler

Posted by in categories: habitats, robotics/AI

This robot will vacuum and serve you a martini, all with one hand… ignore the dust in your glass, please.


Two of the new robots are more futuristic, but one of Samsung’s new Bots will be available in the US this year — a robot vacuum that doubles as a home monitoring device.

Jan 12, 2021

Diffractive networks improve optical image classification accuracy

Posted by in categories: information science, robotics/AI

Recently, there has been a reemergence of interest in optical computing platforms for artificial-intelligence-related applications. Optics is ideally suited for realizing neural network models because of the high speed, large bandwidth, and high interconnectivity of optical information processing. Introduced by UCLA researchers, Diffractive Deep Neural Networks (D2NNs) constitute such an optical computing framework, comprising successive transmissive and/or reflective diffractive surfaces that process input information through light-matter interaction. These surfaces are designed in a computer using standard deep learning techniques and are then fabricated and assembled to build a physical optical network. Experiments performed at terahertz wavelengths demonstrated the capability of D2NNs to classify objects all-optically. Beyond object classification, the success of D2NNs in miscellaneous optical design and computation tasks, including spectral filtering, spectral information encoding, and optical pulse shaping, has also been demonstrated.
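
The core idea (stacked phase-modulating surfaces separated by free-space diffraction) can be sketched numerically. The toy below models each layer as a phase mask and propagates the field between layers with the angular-spectrum method; the grid size, layer spacing, and random (untrained) phases are illustrative assumptions, not the UCLA design or its training code.

```python
import numpy as np

# Toy diffractive layer stack: each layer is a phase mask, and light travels
# between layers via angular-spectrum free-space propagation.
N, pitch, wavelength, z = 64, 400e-6, 750e-6, 0.03  # grid, element pitch (m), ~0.4 THz wavelength (m), spacing (m)

fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 / wavelength**2 - FX**2 - FY**2
kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))      # evanescent terms clamped (toy simplification)
H = np.exp(1j * kz * z)                             # free-space transfer function for distance z

def propagate(field):
    """Angular-spectrum propagation of a complex field by distance z."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
phase_masks = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(3)]  # in a real D2NN these are learned

field = np.zeros((N, N), dtype=complex)
field[24:40, 24:40] = 1.0                           # a simple input aperture ("object")
for phi in phase_masks:
    field = propagate(field * np.exp(1j * phi))     # modulate, then diffract to the next layer

intensity = np.abs(field) ** 2                      # a detector reads intensity; class = brightest region
print("output intensity shape:", intensity.shape, "peak:", round(float(intensity.max()), 3))
```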

In their latest paper published in Light: Science & Applications, the UCLA team reports a leapfrog advance in D2NN-based image classification accuracy through ensemble learning. The key ingredient behind the success of their approach can be intuitively understood through an experiment by Sir Francis Galton (1822–1911), an English polymath and statistician who, while visiting a livestock fair, asked the participants to guess the weight of an ox. None of the hundreds of participants guessed the weight exactly, but to his astonishment Galton found that the median of all the guesses, 1,207 pounds, came within 1% of the true weight of 1,198 pounds. This experiment reveals the power of combining many predictions to obtain a much more accurate one. Ensemble learning manifests this idea in machine learning, where improved predictive performance is attained by combining multiple models.
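
A tiny simulation makes the effect concrete: many noisy, independent guesses, whose median lands far closer to the truth than a typical individual guess. The spread and number of guesses below are invented for illustration; only the true weight comes from Galton's account.

```python
import numpy as np

# "Wisdom of the crowd" toy: individual guesses are noisy, but their median
# is far closer to the truth. Simulated data, not Galton's actual records.
rng = np.random.default_rng(1)
true_weight = 1198                                     # pounds, from Galton's account
guesses = true_weight + rng.normal(0, 100, size=800)   # 800 noisy, unbiased guesses

median_error = abs(np.median(guesses) - true_weight)
typical_individual_error = np.median(np.abs(guesses - true_weight))
print(f"typical individual error: {typical_individual_error:.0f} lb, "
      f"error of the crowd's median: {median_error:.0f} lb")
```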

In their scheme, the UCLA researchers reported an ensemble formed by multiple D2NNs operating in parallel, each individually trained and diversified by optically filtering its input with a different filter. 1,252 D2NNs, uniquely designed in this manner, formed the initial pool of networks, which was then pruned using an iterative pruning algorithm so that the resulting physical ensemble is not prohibitively large. The final prediction is a weighted average of the decisions of all the constituent D2NNs in the ensemble. The researchers evaluated the resulting D2NN ensembles on the CIFAR-10 image dataset, which contains 60,000 natural images categorized into 10 classes and is extensively used for benchmarking machine learning algorithms. Simulations of their designed ensemble systems revealed that diffractive optical networks can significantly benefit from the ‘wisdom of the crowd’.
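
The two ensemble ingredients described here (a weighted average of per-network class scores, plus iterative pruning of the pool) can be sketched as follows. The scores are random stand-ins, and the greedy backward pruning is a generic illustration, not the actual UCLA pruning algorithm or its learned weights.

```python
import numpy as np

# Sketch: combine per-network class scores by weighted average, then greedily
# prune members whose removal does not hurt accuracy. Illustrative only.
rng = np.random.default_rng(2)
n_models, n_samples, n_classes = 12, 500, 10
labels = rng.integers(0, n_classes, n_samples)

# Fake per-model class scores, each weakly correlated with the true label.
scores = rng.random((n_models, n_samples, n_classes))
scores[:, np.arange(n_samples), labels] += rng.uniform(0.1, 0.6, (n_models, 1))

def ensemble_accuracy(members, weights=None):
    """Accuracy of a weighted-average ensemble over the chosen member indices."""
    w = np.ones(len(members)) if weights is None else np.asarray(weights, dtype=float)
    combined = np.tensordot(w / w.sum(), scores[members], axes=1)  # (n_samples, n_classes)
    return (combined.argmax(axis=1) == labels).mean()

# Greedy backward pruning: drop the member whose removal hurts accuracy least,
# as long as overall accuracy does not fall.
members = list(range(n_models))
while len(members) > 1:
    best = max((ensemble_accuracy([m for m in members if m != drop]), drop) for drop in members)
    if best[0] < ensemble_accuracy(members):
        break
    members.remove(best[1])

print(f"kept {len(members)} of {n_models} networks, "
      f"ensemble accuracy = {ensemble_accuracy(members):.3f}")
```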

Jan 12, 2021

Machine learning accelerates discovery of materials for use in industrial processes

Posted by in categories: materials, robotics/AI

New research led by scientists at the University of Toronto (U of T) and Northwestern University employs machine learning to select the best building blocks for assembling framework materials for use in a targeted application.

Jan 11, 2021

Tweaking AI software to function like a human brain improves computer’s learning ability

Posted by in category: robotics/AI

Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning.

Jan 11, 2021

DARPA Successfully Demonstrates, Transitions Advanced RF Networking Program

Posted by in categories: internet, military, robotics/AI

Field tests validate tech that automatically links diverse radio waveforms in contested environments.

Jan 11, 2021

DARPA Gremlins Project Completes Third Flight Test Deployment

Posted by in categories: military, robotics/AI

Next capture attempts scheduled to occur in spring of 2021

Jan 11, 2021

Samsung is making a robot that can pour wine and bring you a drink

Posted by in categories: food, robotics/AI

https://youtube.com/watch?v=DqXsTtW5VEo

The robot is still “in development.”


Samsung’s Bot Handy has a robotic arm that can pick up laundry, load the dishwasher, set the table, pour wine, and bring you a drink.