The AInstein robot can respond to inquiries from pupils and even illustrate concepts from Albert Einstein’s theory of relativity, such as time dilation, using a pendulum.
High school students in Cyprus have developed an artificial intelligence (AI) robot that uses ChatGPT to enhance classroom learning.
The three PASCAL schools’ creation, AInstein, can hold conversations, produce written content, and crack jokes, according to an article published on Thursday by Voice of America (VOA).
Eric Schmidt, the former CEO of Google, has spoken out against the six-month pause on AI development that some tech celebrities and business executives recently demanded. Schmidt thinks that if the AI sector doesn’t create safeguards, politicians will have to step in.
“I’m not in favor of a six-month pause, because it will simply benefit China,” Schmidt said.
The halt, supported by tech leaders like Elon Musk and Steve Wozniak, would “simply benefit China,” the former Google CEO told the Australian Financial Review on Thursday.
The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted 300 million jobs would be affected, while the likes of Steve Wozniak and Elon Musk asked for AI development to be paused (although pointedly not the development of autonomous driving).
Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, with the sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who recently said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.
As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?
Poor, poor, Horizon Worlds. According to Facebook-turned-Meta CTO Andrew Bosworth, the company’s metaverse of dead-eyed avatars has been all but abandoned by Meta CEO Mark Zuckerberg — who, in an added blow, is instead said to be spending the bulk of his time chasing the investor-appeasing Silicon Valley squirrel that is generative AI.
“We’ve been investing in artificial intelligence for over a decade, and have one of the leading research institutes in the world,” Bosworth told Nikkei Asia in an interview on Wednesday. “We certainly have a large research organization, hundreds of people.”
“We just created a new team, the generative AI team, a couple of months ago; they are very busy,” he added. “It’s probably the area that I’m spending the most time [in], as well as Mark Zuckerberg and [Chief Product Officer] Chris Cox.”
Models are scientific models, theories, hypotheses, formulas, equations, naïve models based on personal experiences, superstitions (!), and traditional computer programs. In a Reductionist paradigm, these Models are created by humans, ostensibly by scientists, and are then used, ostensibly by engineers, to solve real-world problems. Model creation and Model use both require that these humans Understand the problem domain, the problem at hand, the previously known shared Models available, and how to design and use Models. A Ph.D. degree could be seen as a formal license to create new Models[2]. Mathematics can be seen as a discipline for Model manipulation.
But now — by avoiding the use of human-made Models and switching to Holistic Methods — data scientists, programmers, and others do not themselves have to Understand the problems they are given. They are no longer asked to provide a computer program or to otherwise solve a problem in a traditional Reductionist or scientific way. Holistic Systems like DNNs can provide solutions to many problems by first learning about the domain from data and solved examples, and then, in production, matching new situations to this gathered experience. These matches are guesses, but with sufficient learning the results can be highly reliable.
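The match-to-experience idea above can be sketched with a toy nearest-neighbor “Artificial Understander.” This is only an illustration of the principle, not how a DNN actually works internally; the data, labels, and function names are all hypothetical:

```python
# A minimal sketch of the Holistic approach: instead of writing rules
# that Understand the domain, we store solved examples and answer new
# situations by matching them to the closest past experience.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def learn(examples):
    """'Learning' here is simply gathering experience: keep the solved examples."""
    return list(examples)

def guess(model, situation):
    """Answer a new situation by matching it to the nearest solved example.

    The answer is a guess, not a derivation -- but with enough
    experience, guesses like this can be highly reliable.
    """
    features, label = min(model, key=lambda ex: distance(ex[0], situation))
    return label

# Hypothetical solved examples: (sensor readings, correct action) pairs.
experience = learn([
    ((0.9, 0.1), "brake"),
    ((0.1, 0.9), "accelerate"),
    ((0.5, 0.5), "coast"),
])

print(guess(experience, (0.8, 0.2)))  # → brake
```

Note that nothing in `guess` encodes why braking is correct; the “knowledge” lives entirely in the accumulated examples, which is the essence of the Holistic shift the essay describes.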
We will initially use computer-based Holistic Methods to solve individual and specific problems, such as self-driving cars. Over time, increasing numbers of Artificial Understanders will be able to provide immediate answers — guesses — to wider and wider ranges of problems. We can expect to see cellphone apps with such good command of language that it feels like talking to a competent co-worker. Voice will become the preferred way to interact with our personal AIs.
Origami robots are autonomous machines that are constructed by folding two-dimensional materials into complex, functional three-dimensional structures. These robots are highly versatile. They can be designed to perform a wide range of tasks, from manipulating small objects to navigating difficult terrain. Their compact size and flexibility allow them to move in ways that traditional robots cannot, making them ideal for use in environments that are hard to reach.
Another notable feature of origami-based robots is their low cost. Because they are constructed using simple materials and techniques, they can be produced relatively inexpensively. This makes them an attractive option for many researchers and companies looking to develop new robotics applications.
There are many potential applications for origami robots. They could be used in search and rescue missions, where their small size and flexibility would allow them to navigate through rubble and debris. They could also be used in manufacturing settings, where their ability to manipulate small objects could be put to use in assembly lines.
As chatbots like ChatGPT bring his work to widespread attention, we spoke to Hinton about the past, present and future of AI.
CBS Saturday Morning’s Brook Silva-Braga interviewed him at the Vector Institute in Toronto on March 1, 2023.
That’s interesting. Some will be scared, and that will cause contention. I’m curious whether it will be available in the Google Play store.
Just a few weeks ago, OpenAI announced and launched GPT-4, the latest version of its Large Language Model (LLM). Now there’s a possibility that the next major iteration, GPT-5, might be released by the end of 2023. This information comes from a BGR report based on tweets by developer Siqi Chen.
However, OpenAI has not yet discussed GPT-5 publicly, so we don’t know what changes and improvements to expect. Chen’s initial tweet suggested that OpenAI expects GPT-5 to achieve Artificial General Intelligence (AGI). If true, that would mean the chatbot reaches human-like understanding and intelligence.