Archive for the ‘robotics/AI’ category: Page 1518

Jan 14, 2021

Hackers used 4 zero-days to infect Windows and Android devices

Posted by in category: robotics/AI

Booby-trapped websites are used by attackers to infect people who visit them.

Jan 14, 2021

After You Die, Microsoft Wants to Resurrect You as a Chatbot

Posted by in category: robotics/AI

No one knows where we go when we die. Microsoft might have some ideas.

Jan 13, 2021

A framework to assess the importance of variables for different predictive models

Posted by in categories: information science, robotics/AI

Two researchers at Duke University have recently devised a useful approach to examine how essential certain variables are for increasing the reliability/accuracy of predictive models. Their paper, published in Nature Machine Intelligence, could ultimately aid the development of more reliable and better performing machine-learning algorithms for a variety of applications.

“Most people pick a predictive machine-learning technique and examine which variables are important or relevant to its predictions afterwards,” Jiayun Dong, one of the researchers who carried out the study, told TechXplore. “What if there were two models that had similar performance but used wildly different variables? If that was the case, an analyst could make a mistake and think that one variable is important, when in fact, there is a different, equally good model for which a totally different set of variables is important.”

Dong and his colleague Cynthia Rudin introduced a method that researchers can use to examine the importance of variables across a variety of almost-optimal predictive models. This approach, which they refer to as "variable importance clouds," could be used to gain a better understanding of machine-learning models before selecting the most promising one for a given task.
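The idea can be sketched in a few lines: fit a pool of models, keep those whose error is within a small tolerance of the best (the set of near-optimal models), and compute each variable's permutation importance for every surviving model. The data, the ridge models, and the 5% tolerance below are all illustrative stand-ins, not details from the paper:

```python
# Sketch of the "variable importance cloud" idea: for every model whose error
# is within a small tolerance of the best, compute each variable's permutation
# importance and inspect the spread across models.
# (Illustrative only -- not the exact algorithm from the Dong & Rudin paper.)
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on two highly correlated features, so different models
# can trade one feature off against the other.
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)       # nearly a copy of x1
X = np.column_stack([x1, x2])
y = x1 + 0.05 * rng.normal(size=n)

def fit_ridge(X, y, alpha):
    """Closed-form ridge regression."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

def permutation_importance(w, j, repeats=20):
    """Average loss increase when feature j is shuffled."""
    base = mse(w)
    rises = []
    for _ in range(repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        rises.append(float(np.mean((Xp @ w - y) ** 2)) - base)
    return np.mean(rises)

# Build a pool of models and keep the near-optimal ones.
models = [fit_ridge(X, y, a) for a in np.logspace(-3, 3, 50)]
best = min(mse(w) for w in models)
near_optimal = [w for w in models if mse(w) <= 1.05 * best]

# The "cloud": importance of each variable across all near-optimal models.
cloud = np.array([[permutation_importance(w, j) for j in range(2)]
                  for w in near_optimal])
print("importance ranges per variable:",
      cloud.min(axis=0), "to", cloud.max(axis=0))
```

A wide range for a variable across the near-optimal set is exactly the warning sign the authors describe: its apparent importance depends on which equally good model you happened to pick.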

Jan 13, 2021

The New Techno-Fusion: The Merging Of Technologies Impacting Our Future

Posted by in categories: augmented reality, biotech/medical, economics, health, internet, media & arts, quantum physics, robotics/AI, virtual reality

The process of systems integration (SI) functionally links together infrastructure, computing systems, and applications. SI can allow for economies of scale, streamlined manufacturing, and better efficiency and innovation through combined research and development.

New to the systems integration toolbox is the emergence of transformative technologies and, especially, the growing capability to integrate functions due to exponential advances in computing, data analytics, and materials science. These new capabilities are already having a significant impact on shaping our future destinies.

The systems integration process has served us well and will continue to do so, but it needs augmenting. We are on the cusp of scientific discovery that often combines the physical with the digital: the Techno-Fusion, or merging, of technologies. Like Techno-Fusion in music, Techno-Fusion in technology is a trend that experiments with and transcends traditional ways of integration. Among many candidates, five grouping areas stand out as good examples of the changing paradigm: Smart Cities and the Internet of Things (IoT); Artificial Intelligence (AI), Machine Learning (ML), Quantum and Super Computing, and Robotics; Augmented Reality (AR) and Virtual Reality (VR) Technologies; Health, Medicine, and Life Sciences Technologies; and Advanced Imaging Science.

Jan 13, 2021

Concept whitening: A strategy to improve the interpretability of image recognition models

Posted by in categories: robotics/AI, space

Over the past decade or so, deep neural networks have achieved very promising results on a variety of tasks, including image recognition tasks. Despite their advantages, these networks are very complex and sophisticated, which makes interpreting what they learned and determining the processes behind their predictions difficult or sometimes impossible. This lack of interpretability makes deep neural networks somewhat untrustworthy and unreliable.

Researchers from the Prediction Analysis Lab at Duke University, led by Professor Cynthia Rudin, have recently devised a technique that could improve the interpretability of deep neural networks. This approach, called concept whitening (CW), was first introduced in a paper published in Nature Machine Intelligence.

Continue reading “Concept whitening: A strategy to improve the interpretability of image recognition models” »

Jan 13, 2021

Flexible thermoelectric devices enable energy harvesting from human skin

Posted by in categories: energy, robotics/AI, wearables

A thermoelectric device is an energy conversion device that uses the voltage generated by the temperature difference between the two ends of a material; it can convert heat energy, such as waste heat from industrial sites, into electricity for everyday use. Existing thermoelectric devices are rigid because they are composed of hard metal-based electrodes and semiconductors, which hinders the full absorption of heat from uneven surfaces. Researchers have therefore recently studied flexible thermoelectric devices capable of generating energy in close contact with heat sources such as human skin and hot water pipes.
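As a back-of-envelope illustration of the conversion principle, the open-circuit voltage of a thermoelectric generator follows V = n·S·ΔT, and the power delivered to a matched load is V²/4R. Every number below is an assumed, typical value, not a figure from the KIST device:

```python
# Back-of-envelope thermoelectric output using the Seebeck relation V = n*S*dT.
# All numbers are illustrative assumptions, not values from the KIST work.
n_couples = 100          # thermocouple legs in series
seebeck = 200e-6         # Seebeck coefficient per couple, V/K (typical Bi2Te3)
delta_t = 5.0            # skin-to-air temperature difference, K
r_internal = 10.0        # device internal resistance, ohms

voltage = n_couples * seebeck * delta_t    # open-circuit voltage
power = voltage**2 / (4 * r_internal)      # max power at a matched load

print(f"open-circuit voltage: {voltage*1e3:.1f} mV")
print(f"matched-load power:  {power*1e6:.1f} uW")
```

With these assumed values the device yields about 100 mV and a few hundred microwatts, which shows why body-heat harvesting targets low-power wearables rather than general-purpose electronics.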

The Korea Institute of Science and Technology (KIST) announced that a collaborative research team, led by Dr. Seungjun Chung from the Soft Hybrid Materials Research Center and Professor Yongtaek Hong from the Department of Electrical and Computer Engineering at Seoul National University (SNU, President OH Se-Jung), developed flexible thermoelectric devices with high power-generation performance by maximizing flexibility and transfer efficiency. The research team also presented a mass-production plan based on an automated process, including a printing process.

The transfer efficiency of the existing substrates used in research on flexible thermoelectric devices is low. Their heat absorption efficiency is also low due to a lack of flexibility, which leaves a heat-shielding layer (e.g., air) between the device and the heat source. To address this issue, organic-material-based thermoelectric devices with high flexibility have been under development, but their application in wearables is difficult because of their significantly lower performance compared to existing inorganic-material-based rigid thermoelectric devices.

Jan 12, 2021

Google trained a trillion-parameter AI language model

Posted by in category: robotics/AI

Researchers at Google claim to have trained a natural language model containing over a trillion parameters.
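To get a feel for that scale, here is a quick calculation of the memory needed just to store the weights at common numeric widths. The article gives only the parameter count; the byte widths, and the omission of optimizer state and activations, are assumptions of this sketch:

```python
# Rough memory footprint of a trillion-parameter model at different precisions.
# Byte sizes are standard numeric widths; totals ignore optimizer state
# and activations, so real training needs considerably more.
params = 1_000_000_000_000   # 1 trillion

for name, bytes_per in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    tb = params * bytes_per / 1024**4
    print(f"{name}: {tb:.1f} TiB just to store the weights")
```

Even at half precision the weights alone run to terabytes, far beyond a single accelerator's memory, which is why models at this scale are sharded across many devices.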

Jan 12, 2021

GM shares hit record high as automaker reveals electric van and delves into flying cars

Posted by in categories: robotics/AI, sustainability, transportation

The potential foray into "personal air mobility" was announced as part of Cadillac's portfolio of luxury and EV vehicles. It included an autonomous shuttle and an electric vertical takeoff and landing (eVTOL) aircraft, more commonly known as a flying car or air taxi.

Michael Simcoe, vice president of GM global design, said each concept reflected “the needs and wants of the passengers at a particular moment in time and GM’s vision of the future of transportation.”

“This is a special moment for General Motors as we reimagine the future of personal transportation for the next five years and beyond,” Simcoe said.

Jan 12, 2021

Samsung’s Bot Handy is kind of like a first generation robot butler

Posted by in categories: habitats, robotics/AI

This robot will vacuum and serve you a martini, all with one hand… ignore the dust in your glass, please.


Two of the new robots are more futuristic, but one of Samsung’s new Bots will be available in the US this year — a robot vacuum that doubles as a home monitoring device.

Jan 12, 2021

Diffractive networks improve optical image classification accuracy

Posted by in categories: information science, robotics/AI

Recently, there has been a reemergence of interest in optical computing platforms for artificial-intelligence applications. Optics is ideally suited for realizing neural network models because of the high speed, large bandwidth, and high interconnectivity of optical information processing. Introduced by UCLA researchers, Diffractive Deep Neural Networks (D2NNs) constitute such an optical computing framework, comprising successive transmissive and/or reflective diffractive surfaces that process input information through light-matter interaction. These surfaces are designed using standard deep learning techniques on a computer and are then fabricated and assembled to build a physical optical network. Experiments performed at terahertz wavelengths have demonstrated the capability of D2NNs to classify objects all-optically. Beyond object classification, D2NNs have also been shown to perform miscellaneous optical design and computation tasks, including spectral filtering, spectral information encoding, and optical pulse shaping.
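A toy forward pass conveys the idea: a complex optical field is multiplied by a phase mask at each layer and propagated through free space to the next one. The sketch below uses angular-spectrum propagation with random phase masks as stand-ins for trained ones; all dimensions are arbitrary toy values, not those of the UCLA hardware:

```python
# Conceptual forward pass of a diffractive network: a complex field passes
# through phase masks, with free-space propagation (angular spectrum method)
# between layers. Random masks stand in for trained ones; all numbers are toys.
import numpy as np

rng = np.random.default_rng(2)
N, wavelength, pixel, z = 64, 1e-3, 0.5e-3, 0.02  # grid, meters, layer spacing

def propagate(field, z):
    """Angular-spectrum free-space propagation over distance z."""
    fx = np.fft.fftfreq(N, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0                        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.ones((N, N), dtype=complex)    # plane-wave "input image"
for _ in range(3):                        # three diffractive layers
    phase_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))
    field = propagate(field * phase_mask, z)

intensity = np.abs(field)**2              # the detector reads intensity
print("output intensity pattern:", intensity.shape)
```

Training replaces the random phases with values optimized by backpropagation so that the intensity concentrates on the detector region assigned to the correct class.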

In their latest paper published in Light: Science & Applications, the UCLA team reports a leapfrog advance in D2NN-based image classification accuracy through ensemble learning. The key ingredient behind the success of their approach can be intuitively understood through an experiment by Sir Francis Galton (1822–1911), an English polymath and statistician, who, while visiting a livestock fair, asked participants to guess the weight of an ox. None of the hundreds of participants guessed the weight correctly. But to his astonishment, Galton found that the median of all the guesses, 1,207 pounds, came within 1% of the true weight of 1,198 pounds. This experiment reveals the power of combining many predictions to obtain a much more accurate one. Ensemble learning manifests this idea in machine learning, where improved predictive performance is attained by combining multiple models.
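Galton's observation is easy to reproduce numerically: individual noisy guesses can each be far off, yet their median lands close to the truth. The guess distribution below is a synthetic stand-in, not Galton's data:

```python
# The Galton effect in miniature: individual noisy guesses are far off,
# but their median lands close to the truth. The guesses are simulated.
import numpy as np

rng = np.random.default_rng(3)
true_weight = 1198                                    # pounds, per Galton's account
guesses = true_weight + rng.normal(0, 150, size=800)  # noisy individual guesses

median_guess = np.median(guesses)
worst_error = np.max(np.abs(guesses - true_weight))
print(f"median error: {abs(median_guess - true_weight):.0f} lb")
print(f"worst individual error: {worst_error:.0f} lb")
```

The median error shrinks roughly with the square root of the crowd size, which is the same statistical leverage an ensemble of models exploits.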

In their scheme, the UCLA researchers formed an ensemble of multiple D2NNs operating in parallel, each individually trained and diversified by optically filtering its input with a different filter. 1,252 D2NNs, uniquely designed in this manner, formed the initial pool of networks, which was then pruned with an iterative pruning algorithm so that the resulting physical ensemble is not prohibitively large. The final prediction comes from a weighted average of the decisions of all the constituent D2NNs in the ensemble. The researchers evaluated the resulting D2NN ensembles on the CIFAR-10 image dataset, which contains 60,000 natural images in 10 classes and is extensively used for benchmarking machine-learning algorithms. Simulations of the designed ensemble systems revealed that diffractive optical networks can significantly benefit from the "wisdom of the crowd."
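The ensemble-plus-pruning recipe can be sketched with stand-in models: many diversified weak classifiers average their class scores, and a greedy backward pass drops members whenever doing so does not hurt accuracy. The real system uses trained D2NNs and learned weights; this toy uses noisy linear scorers with equal weights:

```python
# Sketch of ensemble voting plus iterative pruning. Random noisy linear
# scorers stand in for the trained D2NNs, and members are equally weighted
# (the paper learns the weights); everything below is a toy illustration.
import numpy as np

rng = np.random.default_rng(4)
n_classes, n_samples, n_feats, pool_size = 10, 400, 20, 30

# Toy classification data with a linear ground truth.
W_true = rng.normal(size=(n_feats, n_classes))
X = rng.normal(size=(n_samples, n_feats))
y = np.argmax(X @ W_true, axis=1)

# Pool of diversified weak models: ground truth plus independent noise.
pool = [W_true + rng.normal(scale=2.0, size=W_true.shape)
        for _ in range(pool_size)]

def ensemble_acc(members):
    scores = sum(X @ W for W in members)   # equal-weight average of scores
    return np.mean(np.argmax(scores, axis=1) == y)

# Greedy backward pruning: remove a member while accuracy does not drop.
members = list(pool)
improved = True
while improved and len(members) > 1:
    improved = False
    for i in range(len(members)):
        trial = members[:i] + members[i + 1:]
        if ensemble_acc(trial) >= ensemble_acc(members):
            members = trial
            improved = True
            break

print(f"pruned {pool_size} -> {len(members)} members, "
      f"accuracy {ensemble_acc(members):.2f}")
```

Because each pruning step is accepted only when accuracy is preserved, the final ensemble is never worse than the full pool while being much cheaper to build physically.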