
New Zealand-based agritech company Robotics Plus has launched an autonomous, multi-use, modular vehicle for agriculture that could revolutionize the industry by alleviating ongoing labor shortages and simplifying agricultural tasks, according to a press release the firm published on Thursday.

Optimizing tasks

The robot can be supervised as part of a fleet of vehicles by a single human operator, and it uses a combination of vision systems and other sensing technologies to perceive its environment. This allows it to optimize tasks and to apply inputs such as sprays in an intelligent, targeted way. It is suitable for a variety of jobs, including spraying, weed control, mulching, mowing and crop analysis.

Occupation-related stress and work characteristics are possible determinants of social inequalities in epigenetic aging but have been little investigated. Here, we investigate the association of several work characteristics with epigenetic age acceleration (AA) biomarkers.

The study population included employed and unemployed men and women (n = 631) from the UK Understanding Society study. We evaluated the association of employment and work characteristics related to job type; job stability; job schedule; autonomy and influence at work; occupational physical activity; and feelings regarding the job with four epigenetic age acceleration biomarkers (Hannum, Horvath, PhenoAge, GrimAge) and pace of aging (DunedinPoAm, DunedinPACE).

We fitted linear regression models, unadjusted and adjusted for established risk factors, and found the following associations for unemployment (in years of acceleration): HorvathAA (1.51, 95% CI 0.08, 2.95), GrimAgeAA (1.53, 95% CI 0.16, 2.90) and PhenoAA (3.21, 95% CI 0.89, 5.33). Job insecurity increased PhenoAA (1.83, 95% CI 0.003, 3.67), while working at night was associated with an increase of 2.12 years in GrimAgeAA (95% CI 0.69, 3.55). We found the effects of unemployment to be stronger in men and the effects of night shift work to be stronger in women.
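As a rough sketch of the modeling step described above (this is not the authors' code; the file name, variable names and covariates below are hypothetical placeholders), an unadjusted and an adjusted linear regression of one age-acceleration biomarker on employment status could be fitted in Python with statsmodels roughly as follows:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names -- the real Understanding Society
# variables and the authors' full covariate set will differ.
df = pd.read_csv("epigenetic_aging.csv")

# Unadjusted model: GrimAge acceleration regressed on employment status.
unadjusted = smf.ols("grimage_aa ~ C(employment)", data=df).fit()

# Adjusted model: add illustrative established risk factors as covariates.
adjusted = smf.ols(
    "grimage_aa ~ C(employment) + age + C(sex) + C(smoking) + bmi",
    data=df,
).fit()

# The coefficient for the unemployed group is the estimated years of
# age acceleration; conf_int() gives its 95% confidence interval.
term = "C(employment)[T.unemployed]"
print(adjusted.params[term], adjusted.conf_int().loc[term].tolist())

The same pattern would be repeated for each biomarker and pace-of-aging measure, and for sex-stratified models to probe the differences between men and women reported above.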

DeepMind has created an AI capable of writing code to solve arbitrary problems posed to it, as proven by participating in a coding challenge and placing — well, somewhere in the middle. It won’t be taking any software engineers’ jobs just yet, but it’s promising and may help automate basic tasks.

The team at DeepMind, a subsidiary of Alphabet, is aiming to create intelligence in as many forms as it can, and of course these days the task to which many of our great minds are bent is coding. Code is a fusion of language, logic and problem-solving that is both a natural fit for a computer’s capabilities and a tough one to crack.

Of course, it isn't the first to attempt something like this: OpenAI has its own Codex natural-language coding project, which powers both GitHub Copilot and a Microsoft test that lets GPT-3 finish your lines.

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people’s personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said.

“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies,” said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. “We can and should expect better and demand better from our technologies.”

In this episode we explore a User Interface Theory of reality. Since the invention of the computer, virtual reality theories have been gaining in popularity, often to explain some difficulties around the hard problem of consciousness (see Episode #1 with Sue Blackmore for a full analysis of the problem of how subjective experiences might emerge out of our brain neurology), but also to explain other non-local anomalies coming out of physics and psychology, like 'quantum entanglement' or 'out-of-body experiences'. Do check the dedicated episodes #4 and #28 respectively on those two phenomena for a full breakdown.
As you will hear today, the vast majority of cognitive scientists believe consciousness is an emergent phenomenon of matter, and that virtual reality theories are science fiction or 'Woowoo' and new age. One of this podcast's jobs is to look at some of these 'Woowoo' claims and separate the wheat from the chaff, so the open-minded among us can find the threshold beyond which evidence-based thinking, no matter how contrary to the consensus, can be considered and separated from wishful thinking.
So you can imagine my joy when a hugely respected cognitive scientist and User Interface theorist, who can cut through the polemic and orthodoxy with calm, respectful, evidence-based argumentation, agreed to come on the show: the one and only Donald D. Hoffman.

Hoffman is a full professor of cognitive science at the University of California, Irvine, where he studies consciousness, visual perception and evolutionary psychology using mathematical models and psychophysical experiments. His research subjects include facial attractiveness, the recognition of shape, the perception of motion and colour, the evolution of perception, and the mind-body problem. So he is perfectly placed to comment on how we interpret reality.

Hoffman has received a Distinguished Scientific Award of the American Psychological Association for early career research into visual perception, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. So his recognition in the field is clear.

He is also the author of 'The Case Against Reality', whose content we'll be focusing on today, and 'Visual Intelligence', and the co-author, with Bruce Bennett and Chetan Prakash, of 'Observer Mechanics'.

Our Next Energy Inc., an electric-car battery startup involving several former leaders of Apple's secretive car project, is planning to invest $1.6 billion in a factory in Michigan to make enough battery cells for about 200,000 EVs annually.

The state of Michigan on Wednesday approved a $200 million grant for the project that promises to create 2,112 new jobs once the facility in Van Buren Township, about 10 miles west of the Detroit airport, is fully operational by the end of 2027. The company must create and maintain the jobs or face a clawback of the funds.

Chipmaker Micron Technology on Tuesday revealed ambitious plans to develop a $100-billion computer chip factory complex in upstate New York, in a bid to boost domestic chip manufacturing and possibly address a worrying chip shortage. The money will be invested over a 20-year period, according to Reuters.

The world’s largest semiconductor fabrication facility

Micron claims the project will be the world’s largest semiconductor fabrication facility and will create nearly 50,000 jobs in New York alone. Currently, the largest semiconductor manufacturers in the world are: Intel Corp., Samsung, Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC), SK Hynix, Micron Technology Inc., Qualcomm, Broadcom Inc., and Nvidia.

Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom and bust cycles over the last 66 years. Is the current boom different?

The most exciting advance in the field since 2017 has been the development of “Large Language Models,” giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven’t yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches.

Large Language Models do far better at routine tasks involving language processing than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem to be capable of a lot more — including possibly passing the Turing Test, named after computing pioneer Alan Turing's thought experiment, which posits that when an AI in a chat can't be reliably distinguished from a human, it will have achieved general intelligence.
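To make the contrast with scripted assistants concrete, here is a minimal sketch of open-ended text generation with a publicly available causal language model, using the Hugging Face transformers library. GPT-2 is chosen only because it is small and freely downloadable; it is not one of the large systems discussed here, and the prompt is purely illustrative.

# Illustrative only: GPT-2 is a small, older model, but the interaction
# pattern -- feed in a prompt, sample a continuation -- is the same.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Can a machine really understand language?"
inputs = tokenizer(prompt, return_tensors="pt")

# The model has no task-specific script; it simply predicts likely next tokens.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Nothing in this call constrains the task; that open-endedness is exactly what separates these models from the earlier, more scripted approaches behind today's voice assistants.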

But can Large Language Models really understand anything, or are they just mimicking the superficial “form” of language? What can we say about our progress toward creating real intelligence in a machine? What do “intelligence” and “understanding” even mean? Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion.