I'm trying to recall a sci-fi short story that I once read, about a spacecraft that's attempting to travel farther from Earth than anyone ever has before. As it gets farther away, the crew start to experience unexplained psychological and neurological symptoms. One by one, they eventually become catatonic and need to be cared for in the ship's infirmary, while their crewmates desperately try to determine the cause.
The protagonist is the last person to be affected, and just as they are starting to succumb, they come up with a theory: human consciousness is not just an individual phenomenon, but is somehow dependent on the collective effect of all the other human minds on Earth. So as the ship leaves Earth's "sphere of influence", its passengers lose their consciousness and intelligence. Having realized this, the protagonist is barely able to program the autopilot to turn around, and the narration describes their descent into insanity and subsequent return to consciousness.
The title might have contained a reference to "closeness", "distance", "solitude", "togetherness", or something along those lines. I have a vague sense that the theme and style reminded me of David Brin's work, but having looked through his bibliography, I don't think it's one of his stories.
You are on the PRO Robots channel, and in this video we present the March 2022 news digest: Expo 2022 in Dubai, the largest technology exhibition; artificial intelligence that could replace programmers; new arms for the Atlas robot; an emotional android; and the opening of Gigafactory Berlin by Elon Musk. All the most interesting news from the world of high-tech in one issue!
Standard image sensors, like the billion or so already installed in practically every smartphone in use today, capture light intensity and color. Relying on common, off-the-shelf sensor technology, known as CMOS, these cameras have grown smaller and more powerful by the year and now offer tens-of-megapixels resolution. But they have still seen in only two dimensions, capturing images that are flat, like a drawing. Until now.
Researchers at Stanford University have created a new approach that allows standard image sensors to see light in three dimensions. That is, these common cameras could soon be used to measure the distance to objects.
The engineering possibilities are dramatic. Measuring distance between objects with light is currently possible only with specialized and expensive lidar (short for "light detection and ranging") systems. If you've seen a self-driving car tooling around, you can spot it right off by the hunchback of technology mounted to the roof. Most of that gear is the car's lidar crash-avoidance system, which uses lasers to determine distances between objects.
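The ranging principle behind such systems is easy to state: a lidar measures how long a laser pulse takes to bounce back, then converts that round-trip time into a distance. Here is a minimal Python sketch of that conversion. This is the generic time-of-flight calculation, not the Stanford technique, whose internals the article does not detail:

```python
# Toy time-of-flight ranging: convert a laser pulse's round-trip
# delay into a distance estimate.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to a target, given the pulse's round-trip time."""
    # The light travels out and back, so halve the total path length.
    return C * t_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds hit something ~10 m away.
print(round(distance_from_round_trip(66.7e-9), 2))
```

The nanosecond scale of those delays is exactly why depth sensing normally demands specialized hardware: an ordinary sensor's frame timing is far too coarse to resolve them directly.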
I subtitled this post "Why we're all in denial about the robot apocalypse". I say that because I believe that society at large is completely, utterly, and woefully unprepared for the advent of sentient, living artificial general intelligence. I think the singularity is coming much sooner than most people expect, and I think it's going to cause a great deal of upset when it arrives, for better and for worse.
Take for instance the common religious belief that people possess some unmeasurable, undefinable soul, and that this soul is what separates us from inanimate objects and non-sentient animals. Furthermore, some people believe that these souls come from a deity. I have spoken with friends who believe that AGI is impossible because "robots can't have souls, humans aren't God". For these people, as Caleb says in Ex Machina (paraphrasing), removing the line between man and machine also removes the line between god and man.
Now, this isn't to say that AGI will destroy religion or anything; it may even be used to strengthen some sects (as taken to the extreme in HBO's Raised By Wolves). Religion has been around for millennia, and I'm sure it will continue to be around for many more. I'm simply predicting that a subset of religious people are going to experience a lot of cognitive dissonance when the first AGI arrives.
AI will completely take over game development by the early 2030s, to the point where there will be almost no human developers: just people telling an AI what they want to play, and it builds it in real time.
Over the past few years we've seen massive improvements in AI technology, from GPT-3 and AI image generation to self-driving cars and drug discovery. But can machine learning progress change games?
Note: AI has many subsets; in this article, when I say AI I'm referring to machine learning algorithms.
The first important question to ask is: will AI even change anything? Why use machine learning when you can just hardcode movement and dialogue? The answer lies in replayability and immersive gameplay.
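To make the hardcoding contrast concrete, here is a deliberately naive Python sketch. All names and dialogue lines are invented for illustration: a hardcoded NPC returns the same canned line on every playthrough, while even a trivial sampling stand-in for a learned model varies between runs, which is the replayability gap in question.

```python
import random

# The traditional approach: a fixed lookup table, so every
# playthrough sees exactly the same lines.
HARDCODED_DIALOGUE = {
    "greeting": "Welcome, traveler.",
    "quest": "Fetch me ten wolf pelts.",
}

def hardcoded_npc(topic: str) -> str:
    return HARDCODED_DIALOGUE.get(topic, "...")

# A crude stand-in for a generative model: sampling from templates.
# A real ML system would produce novel lines, but even this varies
# run to run, hinting at why generated content aids replayability.
TEMPLATES = {
    "greeting": ["Welcome, traveler.", "Back again?", "Stay a while."],
    "quest": ["Fetch me ten wolf pelts.", "Clear the mine of spiders."],
}

def sampled_npc(topic: str, rng: random.Random) -> str:
    return rng.choice(TEMPLATES.get(topic, ["..."]))

print(hardcoded_npc("greeting"))  # identical on every run
print(sampled_npc("greeting", random.Random()))  # varies between runs
```

The hardcoded table is predictable and cheap, which is exactly why studios use it today; the bet in this article is that learned generation will eventually beat it on variety without losing coherence.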
If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China.
Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and the Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores with 14.5 trillion parameters (presumably at FP32 single precision), and with the capability to scale to 174 trillion parameters, approaching what is called "brain scale", where the number of parameters starts to approach the number of synapses in the human brain. But, as it turns out, some of these architectural details were hinted at in three of the six nominations for the Gordon Bell Prize last fall, which we covered here. To our chagrin and embarrassment, we did not dive into the details of the architecture at the time (we had not seen that they had been revealed), and the BaGuaLu paper gives us a chance to circle back.
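A quick back-of-the-envelope check on those parameter counts, assuming the FP32 storage (4 bytes per parameter) that the paper's figures suggest:

```python
# Rough weight storage for the parameter counts quoted above,
# assuming FP32, i.e. 4 bytes per parameter.
BYTES_PER_FP32 = 4

def weights_terabytes(n_params: float) -> float:
    """Raw weight storage in terabytes (1 TB = 1e12 bytes)."""
    return n_params * BYTES_PER_FP32 / 1e12

print(weights_terabytes(14.5e12))  # the trained 14.5T-parameter model
print(weights_terabytes(174e12))   # the largest claimed configuration
```

That works out to roughly 58 TB of raw weights at 14.5 trillion parameters, before any optimizer state or activations, and about 696 TB at the 174-trillion-parameter configuration; numbers that size are one reason the model has to be sharded across tens of millions of cores rather than held on any single node.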
Before this slew of papers was announced with details on the new Sunway many-core processor, we did take a stab at figuring out how the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC) might build an exascale system, scaling up from the SW26010 processor used in the Sunway "TaihuLight" machine that took the world by storm back in June 2016. The 260-core SW26010 processor was etched by Chinese foundry Semiconductor Manufacturing International Corporation using 28 nanometer processes, not exactly cutting edge. And the SW26010-Pro processor, etched using 14 nanometer processes, is not on an advanced node either, but China is perfectly happy to burn a lot of coal to power and cool the OceanLight kicker system based on it. (OceanLight is also known as the Sunway exascale system or the New Generation Sunway supercomputer.)
Tl;dr: It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.
Well, let's be frank here. MIRI didn't solve AGI alignment and at least knows that it didn't. Paul Christiano's incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world. Chris Olah's transparency work, at current rates of progress, will at best let somebody at DeepMind give a highly speculative warning about how the current set of enormous inscrutable tensors, inside a system that was recompiled three weeks ago and has now been training by gradient descent for 20 days, might possibly be planning to start trying to deceive its operators.
Management will then ask what they're supposed to do about that.