
DeepMind has created an AI capable of writing code to solve arbitrary problems posed to it, as proven by participating in a coding challenge and placing — well, somewhere in the middle. It won’t be taking any software engineers’ jobs just yet, but it’s promising and may help automate basic tasks.

The team at DeepMind, a subsidiary of Alphabet, is aiming to create intelligence in as many forms as it can, and of course these days the task to which many of our great minds are bent is coding. Code is a fusion of language, logic and problem-solving that is both a natural fit for a computer’s capabilities and a tough one to crack.

Of course it isn’t the first to attempt something like this: OpenAI has its own Codex natural-language coding project, and it powers both GitHub Copilot and a test from Microsoft to let GPT-3 finish your lines.
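To make the task concrete, here is a toy sketch of what natural-language-to-code generation looks like in practice. The problem, the docstring prompt, and the solution below are invented for illustration; they are not AlphaCode’s or Codex’s actual output.

```python
# Toy illustration (invented, not AlphaCode's or Codex's actual output):
# systems like Codex take a natural-language prompt, such as the docstring
# below, and emit a working function body.

def two_sum(nums, target):
    """Return indices i < j such that nums[i] + nums[j] == target, or None."""
    seen = {}                        # value -> index of earlier occurrence
    for j, x in enumerate(nums):
        if target - x in seen:       # is the needed complement already seen?
            return seen[target - x], j
        seen[x] = j
    return None

print(two_sum([2, 7, 11, 15], 9))    # -> (0, 1)
```

Competition problems like those AlphaCode faced are far harder than this, but the input-output shape is the same: a prose specification in, runnable code out.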

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people’s personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said.

“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies,” said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. “We can and should expect better and demand better from our technologies.”

In this episode we explore a User Interface Theory of reality. Since the invention of the computer, virtual reality theories have been gaining in popularity, often to explain some difficulties around the hard problem of consciousness (see Episode #1 with Sue Blackmore for a full analysis of the problem of how subjective experiences might emerge out of our brain neurology), but also to explain other non-local anomalies coming out of physics and psychology, like ‘quantum entanglement’ or ‘out-of-body experiences’. Do check the episodes devoted to those two phenomena, #4 and #28 respectively, for a full breakdown.
As you will hear today, the vast majority of cognitive scientists believe consciousness is an emergent phenomenon of matter, and that virtual reality theories are science fiction, ‘woo-woo’ or new age. One of this podcast’s jobs is to look at some of these ‘woo-woo’ claims and separate the wheat from the chaff, so the open-minded among us can find the threshold beyond which evidence-based thinking, no matter how contrary to the consensus, can be considered and separated from wishful thinking.
So you can imagine my joy when a hugely respected cognitive scientist and User Interface theorist, who can cut through the polemic and orthodoxy with calm, respectful, evidence-based argumentation, agreed to come on the show: the one and only Donald D. Hoffman.

Hoffman is a full professor of cognitive science at the University of California, Irvine, where he studies consciousness, visual perception and evolutionary psychology using mathematical models and psychophysical experiments. His research subjects include facial attractiveness, the recognition of shape, the perception of motion and colour, the evolution of perception, and the mind-body problem. So he is perfectly placed to comment on how we interpret reality.

Hoffman has received a Distinguished Scientific Award of the American Psychological Association for early career research into visual perception, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. So his recognition in the field is clear.

He is also the author of ‘The Case Against Reality’, the content of which we’ll be focusing on today; ‘Visual Intelligence’, and the co-author with Bruce Bennett and Chetan Prakash of ‘Observer Mechanics’.

What we discuss:
00:00 Intro.
05:30 Belief VS questioning.
11:20 Seeing the world for survival VS for knowing reality as it truly is.
13:30 Competing strategies to maximise ‘fitness’ in the evolutionary sense.
15:22 Fitness payoffs can be calculated as mathematical functions, based on different organisms, states and actions.
17:00 Evolutionary Game Theory computer simulations at UC Irvine (see the sketch after this list).
21:30 The payoff functions that govern evolution do not contain information about the structure of the world.
25:00 The world is NOT as it seems VS The world is NOTHING like it seems.
29:30 Space-time cannot be fundamental.
32:30 Local and non-contextual realism have been proved false.
37:45 A User-Interface network of conscious agents.
41:30 A virtual reality computer analogy.
43:30 Space and time and physical objects are merely a user interface.
49:30 Reductionism is false.
53:30 User Interface theory VS Simulation theory.
56:30 Panpsychists are fundamentally physicalists.
57:30 Making mathematical predictions about conscious agents.
59:30 Like space and time, maths is an invented metric; so we must start with metrics of consciousness.
01:03:30 Experiences lead to actions, which affect other agents’ conscious experiences.
01:08:00 The notion of truth is deeper than the notion of proof and theory.
01:10:00 Consciousness projects space-time so it can explore infinite possibilities.
01:13:00 ‘Not that which the eye can see, but that whereby the eye can see’, Kena Upanishad.
01:17:30 Is nature written in the language of Maths?
01:27:00 Consciousness is like the living being, and maths is like the bones.
01:34:50 Don Hoffman on Max Tegmark’s ‘Everything that is mathematically possible is real’.
01:48:00 Different analogies for different eras.
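
For the simulations discussed from 15:22 to 21:30, here is a minimal sketch in the spirit of Hoffman’s ‘fitness beats truth’ argument. It is my own construction, not the actual UC Irvine code: the payoff curve, the quantities, and the foraging rule are all assumptions chosen for illustration.

```python
# A minimal sketch (my construction, not Hoffman's actual UC Irvine code):
# payoffs are a non-monotonic function of a true world state, and an agent
# tuned to fitness outcompetes one tuned to the true quantity.
import random

QUANTITIES = range(1, 11)   # true resource quantities in the world

def payoff(q):
    """Fitness peaks at intermediate quantities (e.g. water: too little or
    too much is bad), so payoff is NOT monotonic in the true quantity."""
    return 10 - (q - 5.5) ** 2

def forage(strategy, trials=100_000):
    """Average payoff when repeatedly choosing between two random resources."""
    total = 0.0
    for _ in range(trials):
        a, b = random.choice(QUANTITIES), random.choice(QUANTITIES)
        if strategy == "truth":          # sees true quantity, takes the larger
            pick = max(a, b)
        else:                            # "fitness": sees only the payoff
            pick = max(a, b, key=payoff)
        total += payoff(pick)
    return total / trials

print("truth  :", round(forage("truth"), 2))
print("fitness:", round(forage("fitness"), 2))  # reliably higher
```

Because the payoff is non-monotonic in the true quantity, the agent that perceives only fitness reliably out-forages the agent that sees the world as it is, which is the core of the claim discussed at 21:30.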

Our Next Energy Inc., an electric-car battery startup involving several former leaders of Apple’s secretive car project, is planning to invest $1.6 billion in a factory in Michigan to make enough battery cells for about 200,000 EVs annually.

The state of Michigan on Wednesday approved a $200 million grant for the project that promises to create 2,112 new jobs once the facility in Van Buren Township, about 10 miles west of the Detroit airport, is fully operational by the end of 2027. The company must create and maintain the jobs or face a clawback of the funds.

Chipmaker Micron Technology on Tuesday revealed ambitious plans to develop a $100 billion computer chip factory complex in upstate New York, in a bid to boost domestic chip manufacturing and help address a worrying chip shortage. The money will be invested over a 20-year period, according to Reuters.

The world’s largest semiconductor fabrication facility

Micron claims the project will be the world’s largest semiconductor fabrication facility and will create nearly 50,000 jobs in New York alone. Currently, the largest semiconductor manufacturers in the world are: Intel Corp., Samsung, Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC), SK Hynix, Micron Technology Inc., Qualcomm, Broadcom Inc., and Nvidia.

Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom and bust cycles over the last 66 years. Is the current boom different?

The most exciting advance in the field since 2017 has been the development of “Large Language Models,” giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven’t yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches.

Large Language Models do far better at routine tasks involving language processing than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem to be capable of a lot more, including possibly passing the Turing Test: computing pioneer Alan Turing’s thought experiment posits that when an AI in a chat cannot reliably be distinguished from a human, it has achieved general intelligence.
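
As a minimal, hedged illustration of the underlying mechanic, the sketch below uses the open-source Hugging Face transformers library with GPT-2, a small, older model chosen only because it runs locally; production Large Language Models are vastly larger, but they are trained on the same predict-the-continuation objective.

```python
# Minimal sketch of the core trick behind Large Language Models: given text,
# predict a plausible continuation. GPT-2 is used only because it is small
# and freely downloadable; modern LLMs are far larger but work the same way.
from transformers import pipeline  # pip install transformers torch

generator = pipeline("text-generation", model="gpt2")
prompt = "The Turing Test asks whether a machine can"
out = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```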

But can Large Language Models really understand anything, or are they just mimicking the superficial “form” of language? What can we say about our progress toward creating real intelligence in a machine? What do “intelligence” and “understanding” even mean? Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion.

Since the beginning of human storytelling, enhancing oneself to a “better version” has been of vital interest to humans. A twenty-first-century philosophical movement called transhumanism dedicated itself to the topic of enhancement. It unites discussions from several disciplines, e.g. philosophy, social science, and neuroscience, and aims to form human beings in desirable ways with the help of science and technology (Bostrom, 2005; Loh, 2018; More, 2013). Enhancement is the employment of methods to enhance human cognition in healthy individuals (Colzato et al., 2021), thereby extending individual performance above already existing abilities. It should thus be distinguished from therapy, which is the application of methods to help individuals with illnesses or dysfunctions in restoring their abilities (Viertbauer & Kögerler, 2019). Although enhancement methods bear psychological implications, there is hardly any psychological research on them. However, as the use of enhancement methods has increased (Leon et al., 2019; McCabe et al., 2014), and with it the demand for official guidelines (Jwa, 2019), it is necessary to examine who would use these methods in the first place, especially because these technologies can easily be misused. Investigating the personality traits and values of individuals who want to enhance themselves could not only support suppliers and manufacturers of enhancement technologies in creating guidelines for using enhancement, but also raise more general awareness of which individuals might be in favour of enhancement.

In previous studies investigating the intersection between enhancement and personality traits or values, vignettes were used to describe enhancement methods and to measure their acceptance among participants (e.g. Laakasuo et al., 2018, 2021). Thus, subjects were asked to read scenarios involving the use of a certain enhancement method and then—as a measure of acceptance—judge aspects (e.g. the morality) of the action undertaken in the corresponding scenario (e.g. Laakasuo et al., 2018, 2021). In the present study, we followed a similar vignette-based approach with a variety of different enhancement methods to investigate the link between the acceptance of enhancement (i.e., the willingness to use enhancement methods, hereinafter termed AoE), personality traits, and values. More specifically, we examined the acceptance of the most discussed cognitive enhancement methods: pharmacological enhancement, brain stimulation with transcranial electrical stimulation and deep brain stimulation, genetic enhancement, and mind upload (Bostrom, 2003; Dijkstra & Schuijff, 2016; Dresler et al., 2019; Gaspar et al., 2019; Loh, 2018).
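
As a hedged sketch of the kind of analysis such a vignette design implies (the variable names and numbers below are hypothetical, not the study’s actual data), one could regress per-participant AoE scores on personality traits:

```python
# Hypothetical sketch of a vignette-study analysis: each participant's mean
# acceptance rating across enhancement vignettes (AoE) is regressed on
# personality traits. Column names and values are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "aoe":       [5.2, 3.1, 4.4, 2.8, 6.0, 3.9],  # mean acceptance, 1-7 scale
    "openness":  [4.5, 2.9, 3.8, 2.5, 4.9, 3.2],  # hypothetical trait scores
    "conscient": [3.1, 4.2, 3.9, 4.5, 2.8, 3.6],
})

model = smf.ols("aoe ~ openness + conscient", data=df).fit()
print(model.summary())  # coefficients show which traits predict AoE
```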

Pharmacological enhancement has received much attention in the media and literature (Daubner et al., 2021; Schelle et al., 2014) and is defined as the application of prescription substances that are intended to ameliorate specific cognitive functions beyond medical indications (Schermer et al., 2009). The best-known drugs for cognitive enhancement are methylphenidate (Ritalin®), dextroamphetamine (Adderall®), and modafinil (Provigil®), which are usually prescribed for the treatment of clinical conditions (de Jongh et al., 2008; Mohamed, 2014; Schermer et al., 2009).

Australia is the driest inhabited continent on planet Earth, and is home to the Great Australian desert, the fourth-largest desert in the world after the Antarctic, the Arctic and the Sahara.
Australia is comparable in size to the United States; however, its population is significantly smaller: the whole of Australia has about the same number of people living in it as the state of Texas. Despite the low population, Australia is one of the worst of the developed countries for broad-scale deforestation, wiping out endangered forests and woodlands. In fact, it has cleared nearly half of all forest cover in the last 200 years!

It began around the early 1800s, when the British colonized Australia in search of land and fortunes. At that time Britain had already been stripped of trees by centuries of intensive agriculture and war; even today the United Kingdom has one of the lowest percentages of forest cover in Europe. British timber companies were granted free access to vast areas of virgin forest in Australia, and trees were felled for agriculture and for railway tracks, which were constructed alongside other transit infrastructure such as roads, bridges and jetties.

By the 1880s concerns about stripping the forests were being raised, but no steps towards conservation were taken, and Australia has since become the worst-offending country in the world for mammal extinctions: 55 wildlife species plus 37 plant species have gone extinct. The widespread deforestation has resulted in 55% of all Australian land area being used for agricultural purposes, and around 72% of all agricultural output is exported. Meat and live animals have been the fastest-growing export segment, growing 33% in value; however, agriculture accounted for only 1.9% of value added (GDP) and 2.5% of employment in 2020–21.

The widespread land degradation has resulted in man-made desertification after centuries of tilling, and the introduction of non-native grazing grasses has taken its toll on the landscape. However, some regions in Australia are starting to turn this around, transforming large areas of degraded land back into biodiverse ecosystems by restoring millions of trees, improving the lives of rural farming communities, and capturing over a million tons of carbon to benefit the planet as a whole. This can be considered a major accomplishment for any country, particularly one with a low average rainfall of 16 inches per year. In this video we will show you how a 200 km-long green corridor will connect 12 nature reserves across a 10,000 km² area.

Make sure to check out Carbon Neutral for more info!

Earlier this summer, a piece generated by an AI text-to-image application won a prize in a state fair art competition, prying open a Pandora’s Box of issues about the encroachment of technology into the domain of human creativity and the nature of art itself.


Art professionals are increasingly concerned that text-to-image platforms will render hundreds of thousands of well-paid creative jobs obsolete.