
BEIJING, Aug. 24, 2023 /PRNewswire/ — On August 22, the 2023 World Robot Conference (WRC), held in Beijing under the theme of “Spurring Innovation for the Future,” drew to a successful close. The 2023 World Robotics Expo and the 2023 World Robot Contest took place alongside it, bringing together about 160 robotics companies and scientific research institutions from around the globe and showcasing close to 600 advanced technologies and products. More than 320 representatives from international organizations, academicians, renowned experts, and entrepreneurs from home and abroad were invited to attend.

As a general robotics company, Dreame Technology took center stage for the first time at the World Robot Conference. It unveiled a wide range of robots, including general-purpose humanoid robots, consumer-grade bionic quadruped robots, industrial-grade quadruped robots, wireless robotic pool cleaners, commercial food-delivery robots, and floor-cleaning robots. The lineup highlighted Dreame’s broad competitiveness across robotic ecosystem and technology R&D, supply chains, production and manufacturing, talent development, and commercialization.

Dive into the cosmic intersection of human cognition and machine intelligence as we explore the paradigm-shifting rise of Artificial General Intelligence (AGI) and its potential evolution into Artificial Superintelligence (ASI). Using astrophysicist Neil deGrasse Tyson’s hypothesis of an alien encounter, we unpack the profound cognitive chasm between beings. How does a Bonobo’s linguistic prowess compare to a human intellectual titan? And as we’ve witnessed the evolution of ChatGPT from its first iteration to ChatGPT-4, are we brushing the fringes of true AGI? Philosophers like Bostrom speculate on a potential “intelligence explosion” when AI begins to improve itself. As we stand at the dawn of a new era, where machines might eclipse human intellect, we ponder our place in the vast intelligence tapestry. Beyond the philosophical, the practical implications are vast: from power dynamics to potential harm if AI goals misalign with ours. Yet, amidst these uncertainties, there’s optimism. This journey offers a profound insight into the most consequential technological evolution in our history and the pivotal choices we must make.


Organoid intelligence is an emerging field in computing and artificial intelligence.

Earlier this year, the Australian startup Cortical Labs developed a cybernetic system made from human brain cells. They called it DishBrain and taught it to play Pong.

The roots of this exciting technology go back 60 years. Though still novel today, it is already superior to conventional deep-learning AI in several respects. However, it also poses new ethical and existential risks to humanity.

Watch the video to explore how Organoid Intelligence might evolve over the next few decades and how it could fit into our lives.

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. Learn more, read the summary and find the full transcript on the 80,000 Hours website: https://80000hours.org/podcast/episodes/jan-leike-superalignment.

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today’s guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, “…the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. … Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem — it’s also hiring dozens of scientists and engineers to build out the Superalignment team.

These 120 people (91 pictured due to size restrictions) have dedicated their lives, their ideas, and often a lot of capital to bringing these remarkable ideas into practice. Their language is passionate, and their ideas range from the big and bold to the extremely technical and nuanced. Imagine trying to take these vast ideas, spanning so many dimensions and hundreds of thousands of words of conversation, and find patterns or signals in them. These interviews form the foundation of the next book I am working on, titled Envisage: 100 ideas about the world of ten years from now.

Two years ago, maybe even one year ago, this would have required either a very manual, forensic examination by a team of people with expertise in these areas or building a database. Days, weeks, and even months would go by, with a lot of revisions.

In NoDar’s own tests, its proprietary camera system outperformed LiDAR sensors in multiple conditions.

After getting his Ph.D. from the Massachusetts Institute of Technology (MIT), Leaf Jiang spent more than a decade building laser-ranging systems for the military for various 3D-sensing applications. In that time, he found that laser-based detection systems were too expensive to deploy on the autonomous vehicles of the future, and that is how NoDar was born.

Light detection and ranging (LiDAR) systems use laser beams to scan their surroundings and create 3D images from the data obtained when surfaces reflect the light. As companies look to make autonomous driving more mainstream,… More.
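Both sensing approaches come down to simple geometry. The snippet below is a purely illustrative sketch (not NoDar’s pipeline, and all numbers are hypothetical) of the two underlying range equations: LiDAR infers distance from the round-trip time of a laser pulse, while a stereo camera pair infers depth from the pixel disparity between two views.

```python
# Illustrative only: the two range equations behind LiDAR and stereo cameras.
# All numbers are hypothetical and not any vendor's specifications.

C = 299_792_458.0  # speed of light, m/s


def lidar_range(round_trip_time_s: float) -> float:
    """Distance from the round-trip time of a reflected laser pulse."""
    return C * round_trip_time_s / 2.0


def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from the pixel disparity between two cameras a known baseline apart."""
    return focal_length_px * baseline_m / disparity_px


if __name__ == "__main__":
    # A pulse returning after ~667 nanoseconds corresponds to roughly 100 m.
    print(f"LiDAR range:  {lidar_range(667e-9):.1f} m")
    # A 1 m baseline, 1400 px focal length, and 14 px disparity also give ~100 m.
    print(f"Stereo depth: {stereo_depth(1400.0, 1.0, 14.0):.1f} m")
```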



A fine-tuned version of GPT-3.5 Turbo can outperform GPT-4, said OpenAI.

US-based AI company OpenAI just released the fine-tuning API for GPT-3.5 Turbo, giving developers more flexibility to customize models that perform better for their use cases. In the company’s own tests, fine-tuned versions of GPT-3.5 Turbo could surpass GPT-4’s base capabilities on certain tasks.
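As a rough illustration of the workflow the new API enables, here is a minimal sketch using OpenAI’s Python SDK (shown with the post-1.0 client interface; the file name and training data are placeholders, and the exact SDK surface may differ by version):

```python
# Minimal sketch: uploading chat-formatted training data and starting a
# fine-tuning job for gpt-3.5-turbo. Paths and example data are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is a JSONL file of chat transcripts, one example per line, e.g.:
# {"messages": [{"role": "system", "content": "You are a terse assistant."},
#               {"role": "user", "content": "What is the capital of France?"},
#               {"role": "assistant", "content": "Paris."}]}
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tuning job; training runs asynchronously, and the resulting
# model ID can then be used like any other chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```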

OpenAI released gpt-3.5-turbo in March this year as part of the ChatGPT model family, suitable for various non-chat uses as well. It’s priced at $0.002 per 1K tokens, which the AI company… More.




ElevenLabs, a year-old startup that is leveraging the power of machine learning for voice cloning and synthesis, today announced the expansion of its platform with a new text-to-speech model that supports 30 languages.

The expansion marks the platform’s official exit from the beta phase, making it ready for enterprises and individuals looking to customize their content for audiences worldwide. It comes more than a month after ElevenLabs’ $19 million Series A round, which valued the company at nearly $100 million.
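For context on how developers consume the service, the sketch below shows one way to request multilingual speech over ElevenLabs’ public REST API. The API key and voice ID are placeholders, and the endpoint, header name, and model identifier follow the public documentation at the time of writing and should be treated as assumptions here.

```python
# Minimal sketch: requesting multilingual speech from ElevenLabs' REST API.
# The API key and voice ID are placeholders; endpoint and model ID may change.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder
VOICE_ID = "YOUR_VOICE_ID"            # placeholder: any voice from your library

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
response = requests.post(
    url,
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Hola, bienvenidos a nuestro podcast.",  # any supported language
        "model_id": "eleven_multilingual_v2",            # the 30-language model
    },
    timeout=60,
)
response.raise_for_status()

# The API returns raw audio bytes (MP3 by default).
with open("speech.mp3", "wb") as f:
    f.write(response.content)
```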


Microsegmentation is table stakes for CISOs looking to gain the speed, scale and time-to-market advantages that multicloud tech stacks provide to digital-first business initiatives.

Gartner predicts that through 2023, at least 99% of cloud security failures will be the user’s fault. Getting microsegmentation right in multicloud configurations can make or break any zero-trust initiative. Ninety percent of enterprises migrating to the cloud are adopting zero trust, but just 22% are confident their organization will capitalize on its many benefits and transform their business. Zscaler’s The State of Zero Trust Transformation 2023 Report says secure cloud transformation is impossible with legacy network security infrastructure such as firewalls and VPNs.

Artificial General Intelligence (AGI) is a term for Artificial Intelligence systems that meet or exceed human performance on the broad range of tasks that humans are capable of performing. There are benefits and downsides to AGI. On the upside, AGIs could do most of the labor that consumes a vast amount of humanity’s time and energy. AGI could herald a utopia where no one has wants that cannot be fulfilled. AGI could also result in an unbalanced situation where one company (or a few) dominates the economy, exacerbating the existing dichotomy between the top 1% and the rest of humankind. Beyond that, the argument goes, a super-intelligent AGI could find it beneficial to enslave humans for its own purposes, or to exterminate humans so as not to compete for resources. One hypothetical scenario is that an AGI that is smarter than humans could simply design a better AGI, which could, in turn, design an even better AGI, leading to something called hard take-off and the singularity.
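The hard take-off intuition can be made concrete with a toy model. The sketch below is purely illustrative (my own numbers and gain functions, not anything from the literature): when each generation’s improvement scales with current capability, growth accelerates without bound, whereas diminishing returns of the kind physical limits would impose keep growth modest.

```python
# Toy model of recursive self-improvement -- illustrative only, not a forecast.
# Each generation multiplies capability by (1 + gain(current capability)).


def simulate(gain, generations=10, capability=1.0):
    """Run a few generations of self-improvement under a given gain function."""
    history = [capability]
    for _ in range(generations):
        capability *= 1.0 + gain(capability)
        history.append(capability)
    return history


# Compounding returns: each unit of capability makes the next improvement larger.
accelerating = simulate(lambda c: 0.1 * c)
# Diminishing returns: improvements get harder as capability grows, so each
# generation adds roughly a constant amount instead of compounding.
modest = simulate(lambda c: 0.5 / c)

print([round(x, 2) for x in accelerating])  # growth speeds up every generation
print([round(x, 2) for x in modest])        # adds about 0.5 per generation
```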

I do not know of any theory that claims that AGI or the singularity is impossible. However, I am generally skeptical of arguments that Large Language Models such as the GPT series (GPT-2, GPT-3, GPT-4, GPT-X) are on the pathway to AGI. This article will attempt to explain why I believe that to be the case, and what I think is missing should humanity (or members of the human race) choose to try to achieve AGI. I will also try to convey a sense of why it is easy to talk about the so-called “recipe for AGI” in the abstract, but why physics itself will prevent any sudden and unexpected leap from where we are now to AGI or super-AGI.

To achieve AGI, it seems likely we will need one or more of the following: