Jibo, a personal robot engineered by MIT Professor and Dean for Digital Learning Cynthia Breazeal, founder of the MIT Media Lab's Personal Robots Group (PRG), can do something few humans can: explain emotions. It does so in several languages, including English and Arabic.
In 2022, leaders in the U.S. military technology and cybersecurity community said they considered 2023 the “reset year” for quantum computing. They estimated that the time needed to make systems quantum-safe will roughly match the time until the first security-threatening quantum computers become available: about four to six years in each case. It is vital that industry leaders quickly come to grips with the security issues around quantum computing and act to resolve them before this powerful technology arrives.
Quantum computing is a cutting-edge technology that presents a unique set of challenges and promises unprecedented computational power. Unlike traditional computing, which operates on binary logic (0s and 1s) and sequential calculations, quantum computing works with quantum bits, or qubits, which can exist in a superposition of states rather than a single definite 0 or 1. This allows quantum computers to explore an enormous number of possibilities simultaneously, exploiting the probabilistic nature of quantum mechanics.
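The qubit idea above can be sketched in a few lines of plain Python: a single qubit is a pair of amplitudes, and measurement yields 0 or 1 with probabilities given by the squared amplitudes (a toy illustration, not a real quantum SDK).

```python
import math

# A single qubit as a 2-component state vector (alpha, beta):
#   |psi> = alpha|0> + beta|1>,  with alpha^2 + beta^2 = 1.
# Unlike a classical bit, the state carries both amplitudes at once;
# a measurement collapses it to 0 or 1 with these probabilities.

alpha = 1 / math.sqrt(2)   # equal superposition (e.g. Hadamard applied to |0>)
beta = 1 / math.sqrt(2)

p0 = alpha ** 2            # probability of measuring 0
p1 = beta ** 2             # probability of measuring 1

print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

With n qubits the state vector has 2**n amplitudes, which is the source of the "enormous number of calculations at once" intuition, though only one n-bit outcome is observed per measurement.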
Apptronik, an Austin-based robotics start-up, has revealed its latest humanoid robot, Apollo. Standing at 5 feet 8 inches tall and weighing 160 pounds, Apollo is designed for mass production and safe human-robot collaboration. Unlike traditional robots, Apollo uses electricity instead of hydraulics, making it both safer and more efficient.
Apollo runs on a swappable four-hour battery pack, enabling continuous operation of up to 22 hours, so it can take on physically demanding and dangerous tasks, improving supply chains and reducing human risk.
To ensure that Apollo is accessible and friendly, Austin-based company Argodesign has equipped the robot with features such as digital panels on its chest for clear communication, intentional movements like head rotation, and a friendly face.
Tesla’s (TSLA) stock is rising in pre-market trading on an optimistic new report about the automaker’s Dojo supercomputer coming from Morgan Stanley.
The firm sharply raised its price target on Tesla’s stock as a result.
Dojo is Tesla’s own custom supercomputer platform built from the ground up for AI machine learning and, more specifically, for video training using the video data coming from its fleet of vehicles.
Similarly, allowing the MyoLegs to flail around for a while in a seemingly aimless fashion gave them better performance with locomotion tasks, as the researchers described in another paper presented at the recent Robotics Science and Systems meeting. Vittorio Caggiano, a Meta researcher on the project who has a background in both AI and neuroscience, says that scientists in the fields of neuroscience and biomechanics are learning from the MyoSuite work. “This fundamental knowledge [of how motor control works] is very generalizable to other systems,” he says. “Once they understand the fundamental mechanics, then they can apply those principles to other areas.”
This year, MyoChallenge 2023 (which will also culminate at the NeurIPS meeting in December) requires teams to use the MyoArm to pick up, manipulate, and accurately place common household objects and to use the MyoLegs to either pursue or evade an opponent in a game of tag.
Emo Todorov, an associate professor of computer science and engineering at the University of Washington, has worked on similar biomechanical models as part of the popular Mujoco physics simulator. (Todorov was not involved with the current Meta research but did oversee Kumar’s doctoral work some years back.) He says that MyoSuite’s focus on learning general representations means that control strategies can be useful for “a whole family of tasks.” He notes that their generalized control strategies are analogous to the neuroscience principle of muscle synergies, in which the nervous system activates groups of muscles at once to build up to larger gestures, thus reducing the computational burden of movement. “MyoSuite is able to construct such representations from first principles,” Todorov says.
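Todorov's point about muscle synergies has a simple computational reading: many muscle signals can be generated from a handful of shared patterns, so the controller chooses far fewer signals than there are muscles. A toy NumPy sketch with made-up numbers (not taken from MyoSuite itself):

```python
import numpy as np

# Muscle synergies as a low-rank factorization (illustrative only):
# the activity of n_muscles over n_steps is produced by combining
# n_synergies fixed muscle groupings, each driven by one control signal.

n_muscles, n_synergies, n_steps = 8, 2, 5

rng = np.random.default_rng(0)
W = rng.random((n_muscles, n_synergies))   # synergy vectors: which muscles fire together
H = rng.random((n_synergies, n_steps))     # per-step activation of each synergy

muscle_activity = W @ H                    # full 8-muscle pattern at every step

# The nervous system (or controller) only picks 2 numbers per step,
# not 8 -- that is the reduction in computational burden.
print(muscle_activity.shape)  # (8, 5)
```

MyoSuite's learned representations play an analogous role: a compact control space that drives many muscles at once across a family of tasks.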
The large language models that enable generative artificial intelligence (AI) are driving an increase in investment and an acceleration of competition in the field of silicon photonics, a technology that combines silicon-based integrated circuits (ICs) and optical components to process and transmit massive amounts of data more efficiently.
Top-rank designers and manufacturers of ICs, AI systems and telecommunications equipment have all joined the race, including NVIDIA, TSMC, Intel, IBM, Cisco Systems, Huawei, NTT and imec, the Interuniversity Microelectronics Centre headquartered in Belgium.
These and other organizations have been working on silicon photonics for many years, some of them (including Intel and NTT) for nearly two decades.
The US Congress is heading back into session, and they are hitting the ground running on AI. We’re going to be hearing a lot about various plans and positions on AI regulation in the coming weeks, kicking off with Senate Majority Leader Chuck Schumer’s first AI Insight Forum on Wednesday. This and planned future forums will bring together some of the top people in AI to discuss the risks and opportunities posed by advances in this technology and how Congress might write legislation to address them.
This newsletter will break down what exactly these forums are and aren’t, and what might come…
Meta is reportedly planning to train a new model that it hopes will be as powerful as OpenAI’s latest and greatest chatbot.
Meta has been snapping up AI training chips and building out data centers in order to create a more powerful new chatbot it hopes will be as sophisticated as OpenAI’s GPT-4, according to The Wall Street Journal.
The Journal writes that Meta has been buying more Nvidia H100 AI-training chips and beefing up its infrastructure so that, this time around, it won’t need to rely on Microsoft’s Azure cloud platform to train the new chatbot. The company reportedly assembled a group earlier this year to build the model, with the goal of speeding up the creation of AI tools that can emulate human expressions, and aims to release the new model next year.
Though artificial intelligence has been making inroads into the enterprise, the rise of generative AI is accelerating the pace of adoption. It’s time for enterprise CXOs to consider building systems of intelligence that complement systems of record and systems of engagement.
In the last two decades, enterprises have invested in building solid foundations for managing data and information. Relational databases such as Oracle and Microsoft SQL Server became the cornerstone of information systems. Built on this foundation were customer relationship management, human resources management, supply chain management and other line-of-business applications that quickly became the digital backbone of…
This context, when combined with advanced prompt engineering, helps enterprises build intelligent AI-based assistants along the lines of Microsoft Copilot or Google Duet AI.
The foundation models become the core of systems of intelligence. The contextual information generated via semantic search is fed to these generative AI models, which deliver rich insights and accurate information to users. The use cases aligned with SOI go beyond typical chatbots. Different teams within an organization will use them to handle a range of scenarios, from marketing to sales forecasting.
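The retrieval step described above (semantic search producing context that is fed to a generative model) can be sketched with a toy bag-of-words "embedding" and cosine similarity. All document names and strings here are made up for illustration; a real system of intelligence would use a vector database and learned embeddings.

```python
import math

# Hypothetical enterprise documents (the "system of record" content).
docs = {
    "returns": "Customers may return items within 30 days.",
    "shipping": "Standard shipping takes 5 business days.",
}

def embed(text):
    # Toy embedding: lowercase word counts (stand-in for a learned model).
    counts = {}
    for w in text.lower().split():
        counts[w] = counts.get(w, 0) + 1
    return counts

def cosine(a, b):
    dot = sum(a.get(k, 0) * v for k, v in b.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    # Semantic search: return the document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(docs[d])))

question = "how long does shipping take"
best = retrieve(question)

# The retrieved context is prepended to the user's question and sent
# to the foundation model, grounding its answer in enterprise data.
prompt = f"Context: {docs[best]}\n\nQuestion: {question}"
print(best)  # shipping
```

The same loop scales from chatbots to the broader scenarios mentioned above: swap in marketing collateral or sales history as the document store, and the foundation model answers against that context.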
The discourse around Artificial Intelligence (AI) often hinges on the paradoxical duality of its nature. While it mirrors human cognition to an extraordinary extent, its capacity to transcend our limitations is both awe-inspiring and unsettling. At the heart of this growing field lie the algorithms and the people who control these powerful computational tools.
This brings us to TIME’s recent endeavor—the TIME100 Most Influential People in AI. This meticulously curated list casts light on the people pushing AI’s boundaries and shaping its ethical framework. So when TIME magazine drops a list…
Source: TIME