
If you remember how sampling changed music, watch what this guy does to make AI music. A long time ago, when people said AI would replace musicians, I replied that AI is just a sampler. If someone uses a Tupac voice on a song, as this guy did, they just pay royalties. Then, with samplers, artists made royalty-free sample disks: they make money when you buy the disk. The same goes for AI; you just upload your sample disk into your AI, whether the music AI is from Meta or Google. And yes, Meta has a music AI, and you can see it used here.


Welcome to a showcase of sounds sampled through the power of artificial intelligence. Gone are the days of vinyl digging; now, we embrace prompt digging…


Colab Notebook for Meta AudioGen & MusicGen:


In 2022, leaders in the U.S. military technology and cybersecurity community said that they considered 2023 to be the “reset year” for quantum computing. They estimated that the time it will take to make systems quantum-safe will roughly match the time until the first quantum computers that threaten their security become available: both around four to six years. It is vital that industry leaders quickly start to understand the security issues around quantum computing and take action to resolve the issues that will arise when this powerful technology surfaces.

Quantum computing is a cutting-edge technology that presents a unique set of challenges and promises unprecedented computational power. Unlike traditional computing, which operates using binary logic (0s and 1s) and sequential calculations, quantum computing works with quantum bits, or qubits, which can exist in a superposition of 0 and 1 rather than a single definite value. This allows quantum computers to explore an enormous number of possibilities simultaneously, exploiting the probabilistic nature of quantum mechanics.
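The superposition idea above can be made concrete with a minimal simulation: a single qubit is just a unit vector of two complex amplitudes, and measuring it yields 0 or 1 with probability equal to each amplitude's squared magnitude. The following is an illustrative NumPy sketch (not a real quantum-computing API; the `measure` helper and variable names are invented for this example):

```python
import numpy as np

def measure(state: np.ndarray, shots: int = 10000, seed: int = 0) -> dict:
    """Simulate repeated measurement of a qubit state (alpha, beta).

    Outcome 0 occurs with probability |alpha|^2, outcome 1 with |beta|^2.
    """
    probs = np.abs(state) ** 2
    rng = np.random.default_rng(seed)
    outcomes = rng.choice([0, 1], size=shots, p=probs)
    return {0: int(np.sum(outcomes == 0)), 1: int(np.sum(outcomes == 1))}

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
superposition = H @ ket0  # amplitudes (1/sqrt(2), 1/sqrt(2))

counts = measure(superposition)
# Roughly half the shots collapse to 0 and half to 1.
```

Until it is measured, the state carries both amplitudes at once; measurement collapses it probabilistically, which is the behavior the article's "enormous number of calculations simultaneously" phrasing gestures at.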

Apptronik, an Austin-based robotics start-up, has revealed its latest humanoid robot, Apollo. Standing at 5 feet 8 inches tall and weighing 160 pounds, Apollo is designed for mass production and safe human-robot collaboration. Unlike traditional robots, Apollo uses electricity instead of hydraulics, making it both safer and more efficient.

Apollo runs on a four-hour battery that can be easily swapped for continuous operation of up to 22 hours, allowing it to perform physically demanding and dangerous tasks, improving supply chains and reducing human risk.

To ensure that Apollo is accessible and friendly, Austin-based company Argodesign has equipped the robot with features such as digital panels on its chest for clear communication, intentional movements like head rotation, and a friendly face.

Tesla’s (TSLA) stock is rising in pre-market trading on an optimistic new report about the automaker’s Dojo supercomputer coming from Morgan Stanley.

The firm massively increased its price target on Tesla’s stock because of it.

Dojo is Tesla’s own custom supercomputer platform built from the ground up for AI machine learning and, more specifically, for video training using the video data coming from its fleet of vehicles.

Similarly, allowing the MyoLegs to flail around for a while in a seemingly aimless fashion gave them better performance with locomotion tasks, as the researchers described in another paper presented at the recent Robotics Science and Systems meeting. Vittorio Caggiano, a Meta researcher on the project who has a background in both AI and neuroscience, says that scientists in the fields of neuroscience and biomechanics are learning from the MyoSuite work. “This fundamental knowledge [of how motor control works] is very generalizable to other systems,” he says. “Once they understand the fundamental mechanics, then they can apply those principles to other areas.”

This year, MyoChallenge 2023 (which will also culminate at the NeurIPS meeting in December) requires teams to use the MyoArm to pick up, manipulate, and accurately place common household objects and to use the MyoLegs to either pursue or evade an opponent in a game of tag.

Emo Todorov, an associate professor of computer science and engineering at the University of Washington, has worked on similar biomechanical models as part of the popular Mujoco physics simulator. (Todorov was not involved with the current Meta research but did oversee Kumar’s doctoral work some years back.) He says that MyoSuite’s focus on learning general representations means that control strategies can be useful for “a whole family of tasks.” He notes that their generalized control strategies are analogous to the neuroscience principle of muscle synergies, in which the nervous system activates groups of muscles at once to build up to larger gestures, thus reducing the computational burden of movement. “MyoSuite is able to construct such representations from first principles,” Todorov says.
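The muscle-synergy principle Todorov describes can be illustrated with a toy example (this is synthetic NumPy code, not MyoSuite or Mujoco; all names and dimensions here are made up for illustration). If every observed activation pattern across many muscles is really a mix of a few underlying synergies, the activation data matrix is low-rank, and a singular value decomposition recovers how few "control knobs" the nervous system actually needs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_muscles, n_synergies, n_samples = 20, 3, 500

# Synthetic data under the synergy hypothesis: each time step's activation
# of 20 muscles is a nonnegative mix of just 3 fixed synergy patterns.
synergies = rng.random((n_synergies, n_muscles))  # fixed muscle groupings
weights = rng.random((n_samples, n_synergies))    # time-varying drive signals
activations = weights @ synergies                 # observed (500, 20) data

# SVD exposes the low-dimensional structure: only ~3 singular values are
# significant, so 3 coordinated commands explain all 20 muscle channels.
s = np.linalg.svd(activations, compute_uv=False)
effective_rank = int(np.sum(s > 1e-8 * s[0]))
```

The dimensionality reduction is the computational saving: a controller that plans in synergy space searches 3 dimensions instead of 20, which mirrors the "reducing the computational burden of movement" point above.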

The large language models that enable generative artificial intelligence (AI) are driving an increase in investment and an acceleration of competition in the field of silicon photonics, a technology that combines silicon-based integrated circuits (ICs) and optical components to process and transmit massive amounts of data more efficiently.

Top-rank designers and manufacturers of ICs, AI systems and telecommunications equipment have all joined the race, including NVIDIA, TSMC, Intel, IBM, Cisco Systems, Huawei, NTT and imec, the Interuniversity Microelectronics Centre headquartered in Belgium.

These and other organizations have been working on silicon photonics for many years, some of them (including Intel and NTT) for nearly two decades.


The US Congress is heading back into session, and they are hitting the ground running on AI. We’re going to be hearing a lot about various plans and positions on AI regulation in the coming weeks, kicking off with Senate Majority Leader Chuck Schumer’s first AI Insight Forum on Wednesday. This and planned future forums will bring together some of the top people in AI to discuss the risks and opportunities posed by advances in this technology and how Congress might write legislation to address them.

This newsletter will break down what exactly these forums are and aren’t, and what might come…

Meta is reportedly planning to train a new model that it hopes will be as powerful as OpenAI’s latest and greatest chatbot.

Meta has been snapping up AI training chips and building out data centers in order to create a more powerful new chatbot it hopes will be as sophisticated as OpenAI’s GPT-4, according to The Wall Street Journal.

The Journal writes that Meta has been buying more Nvidia H100 AI-training chips and is beefing up its infrastructure so that, this time around, it won’t need to rely on Microsoft’s Azure cloud platform to train the new chatbot. The company reportedly assembled a group earlier this year to build the model, with the goal of speeding up the creation of AI tools that can emulate human expressions. The company aims to release its new model next year.