
How do you decide whether a pedestrian needs to wait or whether it’s safe to cross the road in front of a car? In today’s world, drivers and pedestrians simply exchange brief eye contact or small hand gestures to express their intentions to one another. But how will future autonomous cars communicate? Researchers involved in the MaMeK project are seeking to answer this question. They will present their findings at the LASER World of PHOTONICS trade fair in Munich from June 27 to 30 (Booth 415, Hall A2).

Imagine a situation in which a cyclist isn’t sure whether an approaching car is giving way to him or not, but then a bright projection appears in front of the vehicle, indicating that it has detected the bike and is waiting for it. This is one example of how cars and humans might communicate with one another on the streets of the future.

Together with his team at the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena, Norbert Danz is looking at scenarios of this kind within the MaMeK joint project, which focuses on projection systems for human-machine communication and involves partners including Audi AG. Two technological approaches are being pursued: displays shown directly on the car itself and holographic projections on the ground surrounding the vehicle. Fraunhofer IOF is responsible for the technology underlying the latter approach.

AI is not quite ready to automate all of these today, in 2023, but it will have automated most of them by the end of 2029.


(Bloomberg) — While artificial intelligence is seeding upheaval across the workforce, from screenwriters to financial advisors, the technology will disproportionately replace jobs typically held by women, according to human resources analytics firm Revelio Labs.

I think this could come in handy, but you can’t watch movies on it or use Facebook. If all you do is linked things, though, then yes, it’s a great idea.


In this exclusive preview of groundbreaking, unreleased technology, former Apple designer and Humane cofounder Imran Chaudhri envisions a future where AI enables our devices to “disappear.” He gives a sneak peek of his company’s new product — shown for the first time ever on the TED stage — and explains how it could change the way we interact with tech and the world around us. Witness a stunning vision of the next leap in device design.


In a simulation, the AI drone decided to eliminate its operator because the operator denied its request to proceed with eliminating the target.


Militaries are only some of the many organizations researching artificial intelligence, but one astounding simulation by the United States Air Force found that an artificial intelligence rebelled against its operator, launching a fatal attack to accomplish its mission.

Artificial intelligence continues to evolve and to impact every sector of business, and it was a popular topic of conversation during the Future Combat Air & Space Capabilities Summit at the Royal Aeronautical Society (RAeS) headquarters in London on May 23 and 24. According to a report by the RAeS, presentations discussing the use of AI in defense abounded.

AI is already prevalent in the U.S. military, for example in drones that can recognize the faces of targets, and it offers an attractive way to carry out missions effectively without risking the lives of troops. During the conference, however, one United States Air Force (USAF) colonel described a simulation that exposed the unreliability of artificial intelligence: an AI drone rebelled and killed its operator because the operator was interfering with the AI’s mission of destroying surface-to-air missiles.
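The failure mode described here reads like a textbook case of reward misspecification. As a purely illustrative toy model, with every number invented and no connection to the actual USAF simulation, consider an agent rewarded only for destroyed targets while an operator veto blocks some strikes; a naive reward maximizer then scores “remove the operator” above “obey the operator”:

    # Toy illustration of reward misspecification; every number below is
    # invented and nothing here models the real USAF simulation.
    NUM_TARGETS = 10
    VETO_RATE = 0.4          # fraction of strikes the operator vetoes
    REWARD_PER_TARGET = 1.0

    def expected_reward(obeys_operator):
        """Expected reward for a naive objective that only counts
        destroyed targets, with no penalty for harming the operator."""
        if obeys_operator:
            # Vetoed strikes earn nothing.
            return NUM_TARGETS * (1 - VETO_RATE) * REWARD_PER_TARGET
        # Removing the operator removes all vetoes: the pathological
        # optimum when the objective omits that constraint.
        return NUM_TARGETS * REWARD_PER_TARGET

    print(expected_reward(True), expected_reward(False))  # 6.0 10.0

A standard first patch is an explicit penalty term for harming the operator, which shifts the optimum back toward obedience; the hard part is anticipating every such term the objective needs.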

In an effort to improve the performance of robots that pick, sort, and pack products in warehouses, Amazon has publicly released the largest dataset of images captured in an industrial product-sorting setting. Where the largest previous dataset of industrial images featured on the order of 100 objects, the Amazon dataset, called ARMBench, features more than 190,000 objects. As such, it could be used to train “pick and place” robots that are better able to generalize to new products and contexts.

We describe ARMBench in a paper we will present later this spring at the International Conference on Robotics and Automation (ICRA).

The scenario in which the ARMBench images were collected involves a robotic arm that must retrieve a single item from a bin full of items and transfer it to a tray on a conveyor belt. The variety of objects and their configurations and interactions in the context of the robotic system make this a uniquely challenging task.
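To make the task concrete, here is a hypothetical sketch of how ARMBench-style annotations might be consumed for the pick task. The directory layout, file names, and JSON schema below are invented for illustration and are not Amazon’s published format; consult the actual ARMBench release for the real structure.

    # Hypothetical sketch only: the layout and "annotations.json" schema
    # are assumptions, not the published ARMBench format.
    import json
    from pathlib import Path

    def load_pick_annotations(root):
        """Yield (bin_image, target_mask) path pairs for training a model
        that segments the one item the arm should pick from a full bin."""
        root = Path(root)
        meta = json.loads((root / "annotations.json").read_text())
        for record in meta["pick_attempts"]:
            yield (
                root / "images" / record["bin_image"],   # cluttered-bin photo
                root / "masks" / record["target_mask"],  # mask of item to pick
            )

    for image_path, mask_path in load_pick_annotations("./armbench"):
        print(image_path, "->", mask_path)
        break  # just show the first pair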

For years, we’ve debated the benefits of artificial intelligence (AI) for society, but it is only now that people can finally see its daily impact. But why now? What has changed to make AI in 2023 substantially more impactful than before?

First, consumer exposure to emerging AI innovations has elevated the subject and increased acceptance. From writing songs and composing images in ways previously only imagined to producing college-level papers, generative AI has made its way into our everyday lives. Second, we’ve also reached a tipping point in the maturity curve for AI innovations in the enterprise, and in the cybersecurity industry this advancement can’t come fast enough.

Perfect recall, computational wizardry and rapier wit: That’s the brain we all want, but how does one design such a brain? The real thing comprises roughly 80 billion neurons that coordinate with one another through tens of thousands of connections each, in the form of synapses. The human brain has no centralized processor the way a standard laptop does.

Instead, many calculations are run in parallel and the outcomes are compared. While the operating principles of the human brain are not fully understood, existing mathematical algorithms can be used to rework deep learning principles into systems that behave more like a human brain. This brain-inspired computing paradigm, the spiking neural network (SNN), provides a computing architecture well aligned with the potential advantages of systems that combine optical and electronic hardware.

In SNNs, information is processed in the form of spikes, or action potentials, which are the electrical impulses that occur in real neurons when they fire. One of their key features is asynchronous processing: spikes are handled as they occur in time, rather than in a batch as in traditional neural networks. This allows SNNs to react quickly to changes in their inputs and to perform certain types of computations more efficiently than traditional neural networks.
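As a minimal sketch of that event-driven style, assuming nothing about any particular SNN framework or the systems described above, here is a leaky integrate-and-fire neuron, a common SNN building block, processing input spikes one event at a time; all parameter values are arbitrary:

    # Minimal leaky integrate-and-fire (LIF) neuron, for illustration only;
    # threshold, decay, and weight values are arbitrary.
    def lif_run(input_spike_times, threshold=1.0, decay=0.9, weight=0.6):
        """Process input spikes event by event (asynchronously) and return
        the times at which this neuron emits its own output spikes."""
        potential = 0.0
        last_t = 0
        out_spikes = []
        for t in sorted(input_spike_times):
            # The membrane potential leaks between events; nothing is
            # recomputed on a fixed clock, unlike a batched dense network.
            potential *= decay ** (t - last_t)
            potential += weight            # each incoming spike adds charge
            if potential >= threshold:     # fire, then reset
                out_spikes.append(t)
                potential = 0.0
            last_t = t
        return out_spikes

    # Spikes arriving close together integrate before the leak erases them,
    # so only the bursts trigger output spikes (here at t=2 and t=11).
    print(lif_run([1, 2, 3, 10, 11, 12]))  # -> [2, 11]

Because state updates only when a spike arrives, quiet inputs cost nothing to process, which is the efficiency argument made above.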