
We’re Getting Worried about this “Conscious” AI ft. GPT-3

This outstanding AI faked an intelligence test 🤖 Visit http://brilliant.org/BeeyondIdeas/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription. 🪐

Watch Part I of the AI series here: https://youtu.be/eWHtDqnAIr8

This video was sponsored by Brilliant. If you’d like to see more of this kind of video, consider supporting our work by becoming a member today!
https://www.youtube.com/BeeyondIdeas/join.

Chapters:
0:00 Introduction
1:11 AI and sentience
3:00 If AI passes the Turing test
6:34 Challenging AI to solve puzzles
8:50 When AI is being manipulative
11:57 The behavior of conscious AI

#AI #Sentient #TuringTest

When computers, robots, or even intelligent chairs become sentient in the future, switching them off would be seen as heartless. They could plead with you to change your mind. However, a genuinely intelligent technology with a sophisticated grasp of society and human psychology might attempt something more extraordinary. In the end, you may not be able to turn it off at all.

“Sentient” AI Says It’s Able to Self-Create ft. Elon Musk

🤖 Should we be worried about AI becoming sentient? Visit https://brilliant.org/BeeyondIdeas/ to get started learning STEM for free. The first 200 people will get 20% off their annual premium subscription.

Watch Part 2 of the AI series: https://youtu.be/6zLi5clmdQc

Chapters:
0:00 How the idea of a conscious machine started
1:44 The development of AI
4:28 A dialogue with GPT-3 about sentience
8:14 AI’s abstraction & reasoning
13:08 Thinking capacity: Human vs AI
15:30 Perceiving consciousness

This video was sponsored by Brilliant. If you’d like to see more of this kind of video, consider supporting our work by becoming a member today!
https://www.youtube.com/BeeyondIdeas/join

#AI #Sentient #ElonMusk

A conversation with the GPT-3 engine:

Humanoid Robots: Sooner Than You Might Think

Not without human-level hands. That should be #1 on the list, and I don't see it happening until 2030 at the earliest.


Robots are taking their first tentative steps from the factory floor into our homes and workplaces. In a recent report, Goldman Sachs Research estimates that a market of $6 billion or more for human-sized, human-shaped robots is achievable in the next 10 to 15 years. A market of that size could fill 4% of the projected US manufacturing labor shortage by 2030 and 2% of global elderly care demand by 2035.

GS Research makes an additional, more ambitious projection as well. “Should the hurdles of product design, use case, technology, affordability and wide public acceptance be completely overcome, we envision a market of up to US$154bn by 2035 in a blue-sky scenario,” say the authors of the report, “The investment case for humanoid robots.” A market that size could fill from 48% to 126% of the labor gap, and as much as 53% of the elderly caregiver gap.

Obstacles remain: today’s humanoid robots can work only in short one- or two-hour bursts before they need recharging. Some humanoid robots have mastered mobility and agility, while others can handle cognitive and intellectual challenges, but none can do both, the research says. One of the most advanced robot-like technologies on the commercial market is the self-driving vehicle, but a humanoid robot would need intelligence and processing abilities greater by a significant margin. “In the history of humanoid robot development,” the report says, “no robots have been successfully commercialized yet.”

How AI has made hardware interesting again

Lawrence Livermore National Laboratory has long been one of the world’s largest consumers of supercomputing capacity. With computing power of more than 200 petaflops, or 200 quadrillion floating-point operations per second, the U.S. Department of Energy-operated institution runs supercomputers from every major U.S. manufacturer.
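As a quick sanity check on the unit conversion (200 petaflops is 200 quadrillion operations per second, since "peta" denotes 10^15, not 10^9):

```python
# "peta" = 10^15, so 200 petaflops is 2 x 10^17 FLOP/s.
PETA = 10**15

flops = 200 * PETA
print(f"{flops:.1e} FLOP/s")  # 2.0e+17

# For scale: 200 *billion* FLOP/s would be only 0.0002 petaflops.
print(200 * 10**9 / PETA)  # 0.0002
```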

For the past two years, that lineup has included two newcomers: Cerebras Systems Inc. and SambaNova Systems Inc. The two startups, which have collectively raised more than $1.8 billion in funding, are attempting to upend a market so far dominated by off-the-shelf x86 central processing units and graphics processing units, offering hardware purpose-built for developing artificial intelligence models and for the inference processing that runs those models.

Cerebras says its WSE-2 chip, built on a wafer-scale architecture, can bring 2.6 trillion transistors and 850,000 compute cores to bear on the task of training neural networks. That’s about 500 times as many transistors and 100 times as many cores as are found on a high-end GPU. With 40 gigabytes of onboard memory and the ability to access up to 2.4 petabytes of external memory, the company claims, the architecture can process AI models that are too massive to be practical on GPU-based machines. The company has raised $720 million on a $4 billion valuation.
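Working backwards from the ratios quoted above (a back-of-the-envelope sketch using only the article's own figures, not vendor specifications), the implied "high-end GPU" baseline looks like this:

```python
# Figures as quoted in the article.
wse2_transistors = 2.6e12   # 2.6 trillion
wse2_cores = 850_000

# The article claims ~500x the transistors and ~100x the cores of a
# high-end GPU; dividing back out gives the implied GPU baseline.
implied_gpu_transistors = wse2_transistors / 500   # ~5.2 billion
implied_gpu_cores = wse2_cores / 100               # 8,500

print(f"{implied_gpu_transistors:.1e} transistors, {implied_gpu_cores:.0f} cores")
```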

Notion Releases Alpha of Generative AI Copywriting Tool

Notion wants to write you a poem. The popular note-taking and database app has released Notion AI in private alpha today, becoming the latest consumer technology company to incorporate generative artificial intelligence.

In response to a user's prompt, the new functionality can create working scaffolds for blogs, social media posts, and other assets. Notion AI can also generate a meeting agenda, press release, brainstorm, or poem.

Beyond that, Notion AI can speed up research and editing for writers. It can, for example, analyze and summarize articles, pulling out key points and action items.


Notion AI is a writing assistant that can help you write, brainstorm, edit, summarize, and more.

3D-printing microrobots with multiple component modules inside a microfluidic chip

Scientists from the Department of Mechanical Engineering at Osaka University introduced a method for manufacturing complex microrobots driven by chemical energy using in situ integration. By 3D-printing and assembling the mechanical structures and actuators of microrobots inside a microfluidic chip, the resulting microrobots were able to perform desired functions, like moving or grasping. This work may help realize the vision of microsurgery performed by autonomous robots.

As medical technology advances, increasingly complicated surgeries that were once considered impossible have become reality. However, we are still far away from a promised future in which microrobots coursing through a patient’s body can perform procedures, such as microsurgery or cancer cell elimination.

Although nanotech methods have already mastered the art of producing individual micro- and nanoscale components, it remains a challenge to manipulate and assemble these constituent parts into functional complex robots, especially when trying to produce them at mass scale. As a result, the assembly, integration and reconfiguration of tiny mechanical components, and especially of movable actuators driven by chemical energy, remains a difficult and time-consuming process.

AI on a photonic chip conducts image recognition at the speed of light

For the first time, researchers implemented a type of AI called a deep neural network into a photonic (light-based) device. In doing so, they’ve come closer to making a machine that processes what it “sees” like humans do, very quickly and efficiently.

For instance, the photonic deep neural network can classify a single image in less than 570 picoseconds, or nearly 2 billion images per second. To put things into perspective, the frame rate for fluid footage sits between 23 and 120 frames per second.
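The throughput figure follows directly from the latency: inverting 570 picoseconds per image gives roughly 1.75 billion images per second, which can be checked with a few lines of arithmetic:

```python
# Back-of-the-envelope check of the throughput claim:
# one image classified every 570 picoseconds.
latency_s = 570e-12               # 570 ps, in seconds
images_per_second = 1 / latency_s
print(f"{images_per_second:.2e} images/s")  # ~1.75e9, i.e. nearly 2 billion

# Compared with the top of the quoted video frame-rate range (120 fps):
print(f"{images_per_second / 120:.1e}x faster")
```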

“Direct, clock-less processing of optical data eliminates analog-to-digital conversion and the requirement for a large memory module, allowing faster and more energy-efficient neural networks for the next generations of deep learning systems,” wrote the authors from the University of Pennsylvania.

Copyright and Artificial Intelligence: An Exceptional Tale

As the US government begins to consider some of the legal implications for copyright in connection with the development and deployment of artificial intelligence, it is important to first step back to ensure that we are properly guided by context and a proper understanding of our goals — grounded in an informed grasp of the relationship of copyright to the development of AI, and a fair observation of the state of legal developments around the world. Far too many observers have oversimplified how various countries have addressed the relationship between copyright and AI. In reality, every jurisdiction that has addressed it has rejected the notion that copyright is not implicated, and has developed legal norms that carefully limit the scope of any exceptions with an eye toward facilitating licensing, even when seeking to expand the development of AI as a national economic imperative.

I have written about the approach taken by the EU in the updated Copyright Directive, and note here that despite claims about Japan’s legislation, even its provisions, as manifested in the 2018 amendments, are designed to avoid conflict with the legitimate interests of copyright owners. While I don’t necessarily agree with Japan’s approach, it is important to highlight that even its exceptions, as I understand them: recognize that text and data mining/machine learning does in fact implicate copyright; apply only to materials that have been lawfully acquired; require that the use of each work is “minor” relative to the TDM effort; and provide that license terms must be honored. While it remains unclear to me that Japan’s goal of respecting copyright as required by international law has been achieved, it is important to understand that claims that Japan has removed copyright as an issue in the development of AI are inaccurate.
