There’s a three-pronged approach to managing the evolving dual-use problem of AI: advance state-centric monitoring and regulation, promote intellectual exchange between the non-proliferation sector and the AI industry, and encourage contributions from the AI industry itself.
DeepMind cofounder Mustafa Suleyman wants to build a chatbot that does a whole lot more than chat. In a recent conversation I had with him, he told me that generative AI is just a phase. What’s next is interactive AI: bots that can carry out tasks you set for them by calling on other software and other people to get stuff done. He also calls for robust regulation—and doesn’t think that’ll be hard to achieve.
Suleyman is not the only one talking up a future filled with ever more autonomous software. But unlike most people he has a new billion-dollar company, Inflection, with a roster of top-tier talent plucked from DeepMind, Meta, and OpenAI, and—thanks to a deal with Nvidia—one of the biggest stockpiles of specialized AI hardware in the world. Suleyman has put his money—which he tells me he both isn’t interested in and wants to make more of—where his mouth is.
The US military and its contractors would be exempt.
Autonomous or semi-autonomous robots that carry weapons or offensive capabilities are often called armed robots. These robots can be employed in a variety of settings, including the military, law enforcement, industry, and security.
Today, many armed robots are controlled remotely by human operators who can keep a safe distance between themselves and the devices. This is particularly prevalent with military drones, where operators control the aircraft and its weaponry from afar, an arrangement that critics argue can make the machines even more dangerous to civilians.
There are continuous efforts to establish rules and laws controlling the deployment of armed robots in order to reduce the risks involved. Now, one US state is trying to outlaw them altogether.
Some Japanese researchers feel that AI systems trained on foreign languages cannot grasp the intricacies of Japanese language and culture.
Capturing blur-free images of fast movements like falling water droplets or molecular interactions requires expensive ultrafast cameras that acquire millions of images per second. In a new paper, researchers report a camera that could offer a much less expensive way to achieve ultrafast imaging for a wide range of applications such as real-time monitoring of drug delivery or high-speed lidar systems for autonomous driving.
“Our camera uses a completely new method to achieve high-speed imaging,” said Jinyang Liang from the Institut national de la recherche scientifique (INRS) in Canada. “It has an imaging speed and spatial resolution similar to commercial high-speed cameras but uses off-the-shelf components that would likely cost less than a tenth of today’s ultrafast cameras, which can start at close to $100,000.”
In a paper titled “Diffraction-gated real-time ultrahigh-speed mapping photography,” published in Optica, Liang and collaborators from Concordia University in Canada and Meta Platforms Inc. show that their new diffraction-gated real-time ultrahigh-speed mapping (DRUM) camera can capture a dynamic event in a single exposure at 4.8 million frames per second. They demonstrate this capability by imaging the fast dynamics of femtosecond laser pulses interacting with liquid and laser ablation in biological samples.
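To put the reported 4.8 million frames per second in perspective, a quick back-of-the-envelope calculation shows the inter-frame interval and how little a fast-moving object travels between frames. The frame rate is from the article; the droplet speed below is an assumed example value, not from the paper.

```python
# Back-of-the-envelope numbers for the DRUM camera's reported speed.
frame_rate = 4.8e6                 # frames per second (reported in the article)
frame_interval = 1 / frame_rate    # seconds between consecutive frames

print(f"inter-frame interval: {frame_interval * 1e9:.0f} ns")  # ~208 ns

# How far does a falling water droplet travel between consecutive frames?
# (1 m/s is an assumed illustrative speed, not a figure from the paper.)
droplet_speed = 1.0                # m/s (assumed)
motion_per_frame = droplet_speed * frame_interval
print(f"droplet motion per frame: {motion_per_frame * 1e6:.2f} um")  # ~0.21 um
```

At roughly 208 ns per frame, even fast everyday motion shifts by well under a micrometer between exposures, which is why such frame rates can freeze droplet dynamics without blur.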
Predicting smells is more difficult. While we know that many sulfur-containing molecules tend to fall somewhere in the ‘rotten egg’ or ‘skunky’ category, predicting other aromas based solely on a chemical structure is hard. Molecules with a similar chemical structure may smell quite different—while two molecules with very different chemical structures can smell the same.
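The phrase “similar chemical structure” is usually made quantitative with fingerprint similarity measures such as the Tanimoto (Jaccard) coefficient. The sketch below is a toy illustration of that idea; the feature sets are invented for the example and are not real chemical data.

```python
# Toy illustration: quantifying "structural similarity" with the
# Tanimoto (Jaccard) coefficient over feature sets. The molecules and
# their features here are hypothetical, for illustration only.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity: |intersection| / |union| of two feature sets."""
    return len(a & b) / len(a | b)

# Hypothetical structural feature sets for two molecules
mol_x = {"benzene_ring", "hydroxyl", "methyl"}
mol_y = {"benzene_ring", "hydroxyl", "ethyl"}

sim = tanimoto(mol_x, mol_y)
print(f"structural similarity: {sim:.2f}")  # 0.50
```

A score like this captures shared substructure, but as the passage notes, high structural similarity by itself says nothing reliable about whether two molecules smell alike, which is what makes odor prediction hard.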
New AI voice and video tools can look and sound like you. But can they fool your family—or bank?
WSJ’s Joanna Stern replaced herself with her AI twin for the day and put “her” through a series of challenges, including creating a TikTok, making video calls and testing her bank’s voice biometric system.
0:00 How to make an AI video and voice clone.
2:29 Challenge 1: Phone calls.
3:36 Challenge 2: Create a TikTok.
4:47 Challenge 3: Bank Biometrics.
6:05 Challenge 4: Video calls.
6:45 AI vs. Humans.
Tech Things With Joanna Stern.
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools — including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher — and demos some exciting new features for their educational chatbot, Khanmigo.
Highlights from the latest Google I/O keynote presentation, featuring competitors to OpenAI’s GPT-4 and ChatGPT, and reveals across the entire suite of Google products and services, including Google PaLM, Google Gemini, Google Bard, and more.
Here’s my new article for Aporia magazine, the final futurist story in my 4-part series for them!
Written by Zoltan Istvan.
I met my wife on Match.com 15 years ago. She didn’t have a picture on her profile, but she had written a strong description of herself. It was enough to warrant a first date, and we got married a year later.
But what if ordinary dating sites allowed users to see their potential date naked using advanced AI that could “virtually undress” that person? Let’s take it a step further. What if they gave users the option to have virtual sex with their potential date using deepfake technology, before they ever met them in person? Some of this technology is already here. And it’s prompting a lot of thorny questions – not just for dating sites but for anyone who uses the web.