
Good Morning, 2033 — A Sci-Fi Short Film.

What will your average morning look like in 2033? And who hacked us?

This sci-fi short film explores a number of futurist predictions for the 2030s.

Sleep with a brain-sensing sleep mask that determines when to wake you, and wake up with gentle stimulation. Drink enhanced water with the nutrients, vitamins, and supplements you need. Slide on the smart glasses you wear all day. Do yoga and stretching on a smart scale that senses you, and get tips from a virtual trainer. Help yourself wake up with a 99 CRI, 500,000-lumen light. Go for a walk while your glasses scan your brain, and live neurofeedback helps you meditate. Your kitchen uses biodata to figure out the ideal healthy meal, and a kitchen robot makes it for you. You work in VR, AR, MR, and XR realities in the metaverse. You communicate with the world through your AI assistant and AI avatar. You enter the high-tech bathroom, where UV lights and robotics clean your body for you. Ubers arrive as flying cars: eVTOL aircraft that travel at 300 km/h. Cities become a single color as every inch of road and building is covered in photovoltaic materials.

A grand vision in robot learning, going back to the SHRDLU experiments of the late 1960s, is that of helpful robots that inhabit human spaces and follow a wide variety of natural-language commands. Over the last few years, there have been significant advances in applying machine learning (ML) to instruction following, both in simulation and in real-world systems. Recent PaLM-SayCan work has produced robots that leverage language models to plan long-horizon behaviors and reason about abstract goals. Code as Policies has shown that code-generating language models combined with pre-trained perception systems can produce language-conditioned policies for zero-shot robot manipulation. Despite this progress, an important missing property of current “language in, actions out” robot learning systems is real-time interaction with humans.
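To make the Code as Policies idea concrete, here is a minimal, self-contained sketch of the pattern. The perception and control primitives (`detect_objects`, `move_gripper_to`, `grasp`) and the canned "LLM" are hypothetical stand-ins for illustration, not the actual system's API:

```python
# Toy illustration of the "code as policies" pattern: a language model
# writes short policy code against a small API of perception and control
# primitives. All names here are invented stand-ins, and the "LLM" is a
# canned lookup rather than a real model.

def detect_objects(name):
    """Stand-in for a pretrained open-vocabulary detector; returns 2D positions."""
    scene = {"red block": [(0.40, 0.12)], "bowl": [(0.55, -0.08)]}
    return scene.get(name, [])

def move_gripper_to(xy):
    print(f"moving gripper to {xy}")

def grasp():
    print("closing gripper")

def fake_llm(instruction):
    """Stand-in for a code-generating language model."""
    return (
        "block = detect_objects('red block')[0]\n"
        "move_gripper_to(block)\n"
        "grasp()\n"
        "bowl = detect_objects('bowl')[0]\n"
        "move_gripper_to(bowl)\n"
    )

policy_code = fake_llm("put the red block in the bowl")
exec(policy_code)  # zero-shot: no task-specific training, just generated code
```

The point of the pattern is that the policy is ordinary code composed from a small library of primitives, so a new instruction needs no task-specific training, only newly generated code.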

Ideally, robots of the future would react in real time to any relevant task a user could describe in natural language. Particularly in open human environments, it may be important for end users to customize robot behavior as it is happening, offering quick corrections (“stop, move your arm up a bit”) or specifying constraints (“nudge that slowly to the right”). Furthermore, real-time language could make it easier for people and robots to collaborate on complex, long-horizon tasks, with people iteratively and interactively guiding robot manipulation with occasional language feedback.
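A rough sketch of what that interaction loop could look like, assuming a hypothetical `parse_command` that maps text to a goal and a listener thread standing in for live speech recognition; real systems run learned policies at a fixed control rate and let new utterances preempt the current one:

```python
# Sketch of real-time language feedback: a control loop keeps executing
# the current command while a listener thread swaps in corrections as
# they arrive. The ~5 Hz rate and parse_command are illustrative
# assumptions, not details of any published system.
import queue, threading, time

commands = queue.Queue()

def listener():
    for utterance in ["pick up the towel", "stop, move your arm up a bit"]:
        time.sleep(0.6)           # stand-in for live speech recognition
        commands.put(utterance)

def parse_command(text):
    return {"goal": text}         # a real system would map text to a policy

threading.Thread(target=listener, daemon=True).start()

goal = None
for step in range(10):            # control loop at ~5 Hz
    try:
        goal = parse_command(commands.get_nowait())  # correction preempts mid-task
    except queue.Empty:
        pass
    print(f"step {step}: executing {goal}")
    time.sleep(0.2)
```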

How far away could an artificial brain be? Perhaps a very long way off still, but a working analogue to the essential element of the brain’s networks, the synapse, appears closer at hand now.

That’s because a device that draws inspiration from batteries now appears surprisingly well suited to run artificial neural networks. Called electrochemical RAM (ECRAM), it is giving traditional transistor-based AI an unexpected run for its money—and is quickly moving toward the head of the pack in the race to develop the perfect artificial synapse. Researchers recently reported a string of advances at this week’s IEEE International Electron Device Meeting (IEDM 2022) and elsewhere, including ECRAM devices that use less energy, hold memory longer, and take up less space.

The artificial neural networks that power today’s machine-learning algorithms are software that models a large collection of electronics-based “neurons,” along with their many connections, or synapses. Instead of representing neural networks in software, researchers think that faster, more energy-efficient AI would result from representing the components, especially the synapses, with real devices. This concept, called analog AI, requires a memory cell that combines a whole slew of difficult-to-obtain properties: it needs to hold a large enough range of analog values, switch between different values reliably and quickly, hold its value for a long time, and be amenable to manufacturing at scale.
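As a toy illustration of those requirements (not a model of any reported ECRAM device), the sketch below treats a synapse as a bounded analog conductance nudged by programming pulses, with write noise standing in for unreliable switching and a slow drift standing in for imperfect retention; all constants are invented for illustration:

```python
# Toy model of an analog synapse of the kind analog AI needs: a bounded
# analog conductance, incremented in small steps with some write noise,
# and slowly drifting (imperfect retention). All numbers are illustrative,
# not measured ECRAM characteristics.
import random

class AnalogSynapse:
    def __init__(self, g_min=0.0, g_max=1.0, step=0.01, write_noise=0.002):
        self.g_min, self.g_max = g_min, g_max
        self.step, self.write_noise = step, write_noise
        self.g = 0.5 * (g_min + g_max)

    def pulse(self, sign):
        """One potentiation (+1) or depression (-1) programming pulse."""
        dg = sign * self.step + random.gauss(0.0, self.write_noise)
        self.g = min(self.g_max, max(self.g_min, self.g + dg))

    def relax(self, leak=1e-4):
        """Retention loss: conductance drifts back toward its midpoint."""
        self.g += leak * (0.5 * (self.g_min + self.g_max) - self.g)

syn = AnalogSynapse()
for _ in range(30):
    syn.pulse(+1)                  # programming toward a target weight
print(f"conductance after 30 pulses: {syn.g:.3f}")
```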

In 1916, Einstein finished his Theory of General Relativity, which describes gravity as the curvature of spacetime by matter and energy. Among other things, this theory predicted that the Universe is expanding, which was confirmed by the observations of Edwin Hubble in 1929. Since then, astronomers have looked farther into space (and hence, back in time) to measure how fast the Universe is expanding, a rate known as the Hubble Constant. These measurements have become increasingly accurate thanks to the discovery of the Cosmic Microwave Background (CMB) and observatories like the Hubble Space Telescope.

Astronomers have traditionally done this in two ways: directly measuring it locally (using variable stars and supernovae) and indirectly based on redshift measurements of the CMB and cosmological models. Unfortunately, these two methods have produced different values over the past decade. As a result, astronomers have been looking for a possible solution to this problem, known as the “Hubble Tension.” According to a new paper by a team of astrophysicists, the existence of “Early Dark Energy” may be the solution cosmologists have been looking for.
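To see the tension in numbers, Hubble's law v = H0 × d turns the two commonly quoted values of the constant (roughly 67.4 km/s/Mpc from CMB-based fits and roughly 73 km/s/Mpc from local distance-ladder measurements) into noticeably different predictions for the same galaxy:

```python
# Hubble's law, v = H0 * d: a galaxy's recession velocity grows linearly
# with distance. The early-universe (CMB-fit) and late-universe (distance-
# ladder) values of H0 give different answers; that gap is the Hubble Tension.
H0_cmb = 67.4    # km/s/Mpc, Planck CMB-based value
H0_local = 73.0  # km/s/Mpc, Cepheid/supernova distance-ladder value

d = 100.0  # distance in megaparsecs
for label, H0 in [("CMB-based", H0_cmb), ("local", H0_local)]:
    print(f"{label}: v = {H0 * d:,.0f} km/s at {d:.0f} Mpc")
# an ~8% disagreement, far larger than the quoted uncertainties of either method
```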

The study was conducted by Marc Kamionkowski, the William R. Kenan Jr. Professor of Physics and Astronomy at Johns Hopkins University (JHU), and Adam G. Riess, an astrophysicist and Bloomberg Distinguished Professor at JHU and the Space Telescope Science Institute (STScI). Their paper, titled “The Hubble Tension and Early Dark Energy,” is being reviewed for publication in the Annual Review of Nuclear and Particle Science (ARNP). As they explain in their paper, there are two methods for measuring cosmic expansion.

The Big Bang is one of the most fascinating topics you can bring up in conversation with scientists and astronomers, because the theory describes how the whole Universe started in the first place. However, what event led up to the Big Bang is still argued over among scientists today.

For this reason, the James Webb Space Telescope (JWST) was called in to make observations bearing on the Big Bang. The JWST did find something, but it wasn't something scientists had prepared their minds for. What did the James Webb Telescope discover, and in what way would it affect the Big Bang Theory?

Join us as we explore the James Webb Telescope's terrifying discovery from the era of the Big Bang.

It all began when an astronomer spotted a discovery in data from the James Webb Telescope. Astronomer Rohan Naidu was at home with his girlfriend when he noticed the galaxy that almost broke cosmology. He was inspecting some of the images the James Webb Telescope had sent back earlier when one of them caught his attention. The telescope had identified an object that Naidu recognized as mysteriously huge. It dates back to within a few hundred million years of the Big Bang, making it appear earlier in cosmic history than any galaxy previously known to science. It was a shocking discovery for him, and he called his girlfriend over to observe the most distant starlight too. He was praised for the discovery by his team, and then they got to work. A few days later, Naidu and his team published a paper on the galaxy, dubbed “GLASS-z13.” It was a discovery that brought the whole world of science to a standstill, as no one expected such a find to be made by the James Webb Telescope so soon.

Horgan: Okay, maybe I have been too critical. But can you tell readers, briefly, why they should take panpsychism seriously?

Mørch: Physicalism and dualism are the two main alternatives to panpsychism.

Physicalism implies that consciousness doesn’t exist. Physical science cannot capture what it’s like for someone to have conscious experiences, such as seeing red or being in pain, or any of the qualitative or subjective features of consciousness. So if consciousness is physical, it’s not qualitative or subjective, and therefore not really consciousness after all.