
I’m trying to recall a sci-fi short story that I once read, about a spacecraft that’s attempting to travel farther from Earth than anyone ever has before. As it gets farther away, the crew start to experience unexplained psychological and neurological symptoms. One by one, they eventually become catatonic and need to be cared for in the ship’s infirmary, while their crewmates desperately try to determine the cause.

The protagonist is the last person to be affected, and just as they are starting to succumb, they come up with a theory: human consciousness is not just an individual phenomenon, but is somehow dependent on the collective effect of all the other human minds on Earth. So as the ship leaves Earth’s “sphere of influence”, its passengers lose their consciousness and intelligence. Having realized this, the protagonist is barely able to program the autopilot to turn around, and the narration describes their descent into insanity and subsequent return to consciousness.

The title might have contained a reference to “closeness”, “distance”, “solitude”, “togetherness”, or something along those lines. I have a vague sense that the theme and style reminded me of David Brin’s work, but having looked through his bibliography, I don’t think it’s one of his stories.


You are on the PRO Robots channel, and in this video we present the March 2022 news digest: Expo 2022 in Dubai, the largest technology exhibition; artificial intelligence that will replace programmers; new arms for the Atlas robot; an emotional android; and the opening of Gigafactory Berlin by Elon Musk. All the most interesting news from the world of high tech in one issue!


PRO Robots is not just a channel about robots and future technologies: we are interested in science, technology, and robotics in all their manifestations, and we follow science and technology news so that we can keep expanding the topics of future releases. Our vlog explains complex things simply, follows the tech news, and reviews exhibitions, conferences, and events where the main characters are the best robots in the world! Subscribe to the channel, like the video, and join us!

Standard image sensors, like the billion or so already installed in practically every smartphone in use today, capture light intensity and color. Relying on common, off-the-shelf sensor technology—known as CMOS—these cameras have grown smaller and more powerful by the year and now offer tens-of-megapixels resolution. But they’ve still seen in only two dimensions, capturing images that are flat, like a drawing—until now.

Researchers at Stanford University have created a new approach that allows standard image sensors to see in three dimensions. That is, these common cameras could soon be used to measure the distance to objects.

The engineering possibilities are dramatic. Measuring distance between objects with light is currently possible only with specialized and expensive lidar—short for “light detection and ranging”—systems. If you’ve seen a self-driving car tooling around, you can spot it right off by the hunchback of technology mounted to the roof. Most of that gear is the car’s lidar crash-avoidance system, which uses lasers to determine distances between objects.
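The article doesn’t spell out how the Stanford approach recovers depth, but the time-of-flight principle that conventional lidar relies on is simple enough to sketch: a pulse of light travels to the target and back, and the round-trip time gives the distance. A minimal illustration in Python (the ten-metre target is a made-up number, not a value from the article):

```python
C = 299_792_458.0  # speed of light, metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to a target, given the round-trip time of a light pulse."""
    return C * t_seconds / 2.0

# Hypothetical example: a target 10 metres away returns the pulse
# after roughly 66.7 nanoseconds.
round_trip = 2 * 10.0 / C
print(distance_from_round_trip(round_trip))  # -> 10.0
```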

I subtitled this post “Why we’re all in denial about the robot apocalypse”. I say that because I believe that society at large is completely, utterly, and woefully unprepared for the advent of sentient, living artificial general intelligence. I think the singularity is coming much sooner than most people expect, and I think it’s going to cause a great deal of upset when it arrives — for better and for worse.

Take, for instance, the common religious belief that people possess some unmeasurable, undefinable soul, and that this soul is what separates us from inanimate objects and non-sentient animals. Furthermore, some people believe that these souls come from a deity. I have spoken with friends who believe that AGI is impossible because “robots can’t have souls, humans aren’t God”. For these people, as Caleb says in Ex Machina (paraphrasing), removing the line between man and machine also removes the line between god and man.

Now, this isn’t to say that AGI will destroy religion or anything — it may even be used to strengthen some sects (as taken to the extreme in HBO’s Raised By Wolves). No, religion has been around for millennia and I’m sure it will continue to be around for many more millennia. I’m simply predicting that a subset of religious people are going to experience lots of cognitive dissonance when the first AGI arrives.

AI will completely take over game development by the early 2030s. To the point where there will be almost no human developers. Just people telling AI what they want to play, and it builds it in real time.


Over the past few years we’ve seen massive improvements in AI technology, from GPT-3 and AI image generation to self-driving cars and drug discovery. But can machine learning progress change games?

Note: AI has many subsets; in this article, when I say AI, I’m referring to machine learning algorithms.

The first important question to ask is: will AI even change anything? Why use machine learning when you can just hardcode movement and dialogue? The answer to this question can be found in replayability and immersive gameplay.
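To make the replayability point concrete, here is a toy sketch (my own illustration, not from the article): a hardcoded NPC returns the same line on every playthrough, while a “generative” stand-in, here just templates plus randomness where a real game might call a learned model, produces different dialogue each run.

```python
import random

# Hardcoded dialogue: the same line on every playthrough.
SCRIPTED_LINES = {"greeting": "Welcome to the village, traveller."}

def scripted_npc(topic: str) -> str:
    return SCRIPTED_LINES[topic]

# Stand-in for a learned model: templates plus randomness, so the
# reply to the same prompt changes from run to run.
MOODS = ["wary", "cheerful", "exhausted"]
EVENTS = ["the storm last night", "bandits on the north road", "the harvest"]

def generative_npc(topic: str) -> str:
    return (f"A {random.choice(MOODS)} villager answers your {topic} "
            f"and mentions {random.choice(EVENTS)}.")

print(scripted_npc("greeting"))    # identical every run
print(generative_npc("greeting"))  # varies from run to run -> replayability
```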

If you need any proof that it doesn’t take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway “OceanLight” system housed at the National Supercomputing Center in Wuxi, China.

Some of the architectural details of the OceanLight supercomputer came to our attention in a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and the Beijing Academy of Artificial Intelligence, which describes a pretrained machine learning model called BaGuaLu running across more than 37 million cores with 14.5 trillion parameters (presumably in FP32 single precision), and with the capability to scale to 174 trillion parameters, approaching what is called “brain scale,” where the number of parameters approaches the number of synapses in the human brain. But, as it turns out, some of these architectural details were hinted at in three of the six nominations for the Gordon Bell Prize last fall, which we covered here. To our chagrin and embarrassment, we did not dive into the details of the architecture at the time (we had not seen that they had been revealed), and the BaGuaLu paper gives us a chance to circle back.
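As a rough sanity check on those figures (our own back-of-the-envelope arithmetic, assuming 4 bytes per parameter for FP32 and counting nothing for optimizer state or activations), the parameter storage alone is substantial:

```python
def params_to_terabytes(num_params: float, bytes_per_param: int = 4) -> float:
    """Memory needed just to hold the parameters, in terabytes (10^12 bytes)."""
    return num_params * bytes_per_param / 1e12

print(params_to_terabytes(14.5e12))  # ~58 TB for the 14.5-trillion-parameter run
print(params_to_terabytes(174e12))   # ~696 TB at the 174-trillion-parameter scale
```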

Before this slew of papers was announced with details on the new Sunway many-core processor, we took a stab at figuring out how the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC) might build an exascale system by scaling up from the SW26010 processor used in the Sunway “TaihuLight” machine that took the world by storm back in June 2016. The 260-core SW26010 processor was etched by Chinese foundry Semiconductor Manufacturing International Corporation using 28 nanometer processes – not exactly cutting edge. And the SW26010-Pro processor, etched using 14 nanometer processes, is not on an advanced node either, but China is perfectly happy to burn a lot of coal to power and cool the OceanLight kicker system based on it. (OceanLight is also known as the Sunway exascale system or the New Generation Sunway supercomputer.)
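For a sense of scale, a back-of-the-envelope division (the 390-core count for the SW26010-Pro, six core groups of 64 compute elements plus one management core each, is an assumption on our part, not something stated above):

```python
# Assumption: 390 cores per SW26010-Pro (6 core groups x (64 CPEs + 1 MPE)).
CORES_PER_SW26010_PRO = 6 * (64 + 1)       # = 390
TOTAL_CORES = 37_000_000                   # ">37 million cores" from the BaGuaLu paper

nodes = TOTAL_CORES / CORES_PER_SW26010_PRO
print(f"roughly {nodes:,.0f} processors")  # ~94,872
```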

Tl;dr: It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.

Well, let’s be frank here. MIRI didn’t solve AGI alignment and at least knows that it didn’t. Paul Christiano’s incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world. Chris Olah’s transparency work, at current rates of progress, will at best let somebody at DeepMind give a highly speculative warning about how the current set of enormous inscrutable tensors, inside a system that was recompiled three weeks ago and has now been training by gradient descent for 20 days, might possibly be planning to start trying to deceive its operators.

Management will then ask what they’re supposed to do about that.