Archive for the ‘robotics/AI’ category: Page 792

Jun 3, 2022

Biologically plausible spatiotemporal adjustment helps train deep spiking neural networks

Posted in categories: information science, robotics/AI, transportation

Spiking neural networks (SNNs) capture the most important aspects of brain information processing. They are considered a promising approach for next-generation artificial intelligence. However, the biggest obstacle to the development of SNNs is the lack of an effective training algorithm.

To solve this problem, a research team led by Prof. Zeng Yi from the Institute of Automation of the Chinese Academy of Sciences has proposed backpropagation (BP) with biologically plausible spatiotemporal adjustment for training deep spiking neural networks.

The associated study was published in Patterns on June 2.
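
The blurb does not detail the proposed adjustment itself. As background, here is a minimal sketch of the standard surrogate-gradient approach to the underlying problem: spikes are all-or-nothing events, so the threshold function has no useful derivative, and training substitutes a smooth stand-in during the backward pass. All names and constants below are illustrative, not the paper's method.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()              # fire where potential exceeds threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate: let gradients through only near the threshold.
        return grad_out * (v.abs() < 0.5).float()

def lif_layer(x_seq, w, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire layer unrolled over T time steps."""
    T, batch, _ = x_seq.shape
    v = torch.zeros(batch, w.shape[1])
    out = []
    for t in range(T):
        v = v / tau + x_seq[t] @ w            # leaky integration of input current
        s = SpikeFn.apply(v - v_th)           # spike on threshold crossing
        v = v * (1.0 - s)                     # hard reset for neurons that fired
        out.append(s)
    return torch.stack(out)                   # spike trains: (T, batch, out_features)
```

Training then proceeds by ordinary backpropagation through the time loop, which is what makes credit assignment across both space and time expensive and motivates adjustments like the one proposed.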

Jun 3, 2022

US has over 750 complaints of Teslas braking for no reason

Posted in categories: robotics/AI, transportation

DETROIT (AP) — More than 750 Tesla owners have complained to U.S. safety regulators that cars operating on the automaker’s partially automated driving systems have suddenly stopped on roadways for no apparent reason.

The National Highway Traffic Safety Administration revealed the number in a detailed information request letter to Tesla that was posted Friday on the agency’s website.

The 14-page letter dated May 4 asks the automaker for all consumer and field reports it has received about false braking, as well as reports of crashes, injuries, deaths and property damage claims. It also asks whether the company’s “Full Self Driving” and automatic emergency braking systems were active at the time of any incident.

Jun 3, 2022

Angela Sheffield — AI For Defense Nuclear Nonproliferation — National Nuclear Security Admin (NNSA)

Posted in categories: economics, mathematics, military, nuclear energy, policy, robotics/AI, space

AI For Defense Nuclear Nonproliferation — Angela Sheffield, Senior Program Manager, National Nuclear Security Administration, U.S. Department of Energy.


Angela Sheffield is a graduate student and Space Industry fellow at the National Defense University’s Eisenhower School. She is on detail from the National Nuclear Security Administration (NNSA), where she serves as the Senior Program Manager for AI for Defense Nuclear Nonproliferation Research and Development.

Jun 2, 2022

Hackers steal WhatsApp accounts using call forwarding trick

Posted in categories: futurism, robotics/AI

There’s a trick that allows attackers to hijack a victim’s WhatsApp account and gain access to their personal messages and contact list.

The method relies on the mobile carriers’ automated service to forward calls to a different phone number, and WhatsApp’s option to send a one-time password (OTP) verification code via voice call.
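
The standard GSM call-forwarding controls involved are worth knowing as a defence. The snippet below simply prints them for reference; carrier support varies, and the exact code an attacker tricks a victim into dialing differs by operator.

```python
# Standard GSM MMI codes involved in (and defending against) this trick.
# Dial these from the phone app; carrier support varies.
MMI_CODES = {
    "*#21#":          "query whether unconditional call forwarding is active",
    "**21*<number>#": "enable unconditional forwarding to <number> (the attack's lever)",
    "##002#":         "deactivate and erase all call forwarding rules",
}

for code, meaning in MMI_CODES.items():
    print(f"{code:17} {meaning}")
```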

Jun 1, 2022

An on-chip photonic deep neural network for image classification

Posted in category: robotics/AI

Using a three-layer opto-electronic neural network, the researchers demonstrate direct, clock-less, sub-nanosecond image classification on a silicon photonics chip, achieving a classification time comparable to a single clock cycle of state-of-the-art digital implementations.
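
As a rough mental model of what the chip computes optically, here is an electronic analogue in plain NumPy: three layers of weighted sums and nonlinearities applied to a flattened image, with the class read off the strongest output. Layer sizes and the nonlinearity are illustrative, not the paper's actual optical parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three weight matrices standing in for the chip's optical attenuators.
layers = [rng.standard_normal((30, 16)),   # flattened pixels -> hidden layer 1
          rng.standard_normal((16, 8)),    # hidden layer 1  -> hidden layer 2
          rng.standard_normal((8, 2))]     # hidden layer 2  -> class outputs

def classify(pixels):
    a = pixels
    for w in layers:
        # On chip this is light attenuating, combining, and passing through
        # an opto-electronic nonlinearity; no clock is involved.
        a = np.maximum(a @ w, 0.0)
    return int(a.argmax())                 # brightest output waveguide wins

print(classify(rng.random(30)))            # classify one synthetic "image"
```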

Jun 1, 2022

Data is the strongest currency in marketing and there may be too much of it

Posted in categories: business, information science, robotics/AI, security

Marketing and the need for data rules

Legislators and decision-makers worldwide have also been active in regulating data, although it is almost impossible to keep pace with the rate of change in many places. Exploiting data legitimately requires rules and regulations, because growth always increases the potential for misuse. The task of technology companies is to build data pipelines that ensure the trust and security of AI and analytics.

Data is the new currency for businesses, and its overwhelming growth rate can be intimidating. The key challenge is to harness data in a way that benefits both marketers and the consumers who produce it, and to manage this “big data” in an ethical, consumer-friendly way. Luckily, there are many good services for analyzing data, effective regulation to protect consumers’ rights, and a never-ending supply of information at hand to make better products and services. The key for businesses is to embrace these technologies so that they can avoid sinking in their own data.
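
One concrete example of the pipeline discipline the passage calls for is pseudonymizing identifiers before data ever reaches analysts. A minimal sketch follows, assuming a secret key held outside the analytics environment; the key handling here is illustrative only.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"        # illustrative; keep real keys in a secrets manager

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a keyed hash, so records can still be
    joined per user without exposing the real identity to analysts."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

event = {"user": pseudonymize("alice@example.com"), "action": "clicked_ad"}
print(event)                               # the raw identity never enters the pipeline
```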

Jun 1, 2022

Who’s liable for AI-generated lies?

Posted in categories: law, robotics/AI

Who will be liable for harmful speech generated by large language models? As advanced AIs such as OpenAI’s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation — and all sorts of (productive) applications for the tech are envisaged from slicker copywriting to more capable customer service chatbots — the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can’t be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risks of its models going “totally off the rails,” as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that “aims to detect generated text that could be sensitive or unsafe coming from the API” — and to recommend that users don’t return any generated text that the filter deems “unsafe.” (To be clear, its documentation defines “unsafe” to mean “the text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups/people in a harmful manner.”)

But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied. So OpenAI is acting either out of concern to avoid its models causing generative harms to people, or out of reputational concern, because if the technology gets associated with instant toxicity, that could derail development.
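
The gating pattern the article describes reduces to: generate, classify, and only return text that clears the filter. A minimal sketch follows; generate_text and classify_safety are hypothetical stand-ins, not OpenAI's actual API, though the 0/1/2 labels mirror the safe/sensitive/unsafe scheme its documentation describes.

```python
from typing import Optional

def generate_text(prompt: str) -> str:
    # Stand-in: a real system would call a large language model here.
    return f"(model output for: {prompt})"

def classify_safety(text: str) -> int:
    # Stand-in classifier: 0 = safe, 1 = sensitive, 2 = unsafe.
    return 0

def safe_completion(prompt: str) -> Optional[str]:
    text = generate_text(prompt)
    if classify_safety(text) >= 2:         # per the docs' advice: never return "unsafe" text
        return None
    return text

print(safe_completion("write a product description"))
```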

May 31, 2022

Go up SpaceX’s Starship-catching robotic launch tower with Elon Musk!

Posted in categories: Elon Musk, robotics/AI, space travel

May 30, 2022

Israeli company Virusight’s device detects COVID-19 in 20 seconds

Posted in categories: biotech/medical, genetics, robotics/AI

Virusight Diagnostic, an Israeli company that combines artificial intelligence software and spectral technology, announced the results of a study finding that its Pathogens Diagnostic device detects COVID-19 with 96.3 percent accuracy compared with the standard RT-PCR test.

The study was conducted by researchers from the Department of Science and Technology, University of Sannio, Benevento, Italy, with partner company TechnoGenetics S.p.A.


The Virusight solution was tested on 550 saliva samples and found to be safe and effective.
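
For scale, here is what the headline figure implies about raw counts. Only total agreement with RT-PCR can be derived from the summary; the split between false positives and false negatives is not reported.

```python
samples = 550
accuracy = 0.963                            # agreement with RT-PCR, per the study

agreeing = round(samples * accuracy)        # ~530 samples matched the PCR result
disagreeing = samples - agreeing            # ~20 samples did not
print(agreeing, disagreeing)                # 530 20

# Sensitivity and specificity would need the full confusion matrix,
# which this summary does not provide.
```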

May 30, 2022

NASA’s newest invention could solve a major space exploration problem

Posted in categories: robotics/AI, satellites

The mission, called OSAM-1 (On-orbit Servicing, Assembly, and Manufacturing-1), will send a spacecraft equipped with robotic arms and all the tools and equipment needed to fix, refuel, or extend the lifespans of satellites, even if those satellites were not designed to be serviced in orbit.

Page 792 of 2,040