
DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program has successfully completed one experiment and is now moving on to even more difficult off-road landscapes at Camp Roberts, California, for trials set for September 15–27, according to a press release the agency published last week.

Giving driverless combat vehicles off-road autonomy

The program has stated that its aim is “to give driverless combat vehicles off-road autonomy while traveling at speeds that keep pace with those driven by people in realistic situations.”

The AI artist DALL-E 2 has now designed the Apple Car.

A hypothetical “AI-generated Apple Car” that ingeniously makes use of artificial intelligence technology was created by DALL-E 2 in response to a text prompt from San Francisco-based industrial designer John Mauriello.

Mauriello focuses on advancing his one-of-a-kind craft by utilizing cutting-edge technologies. Using DALL-E 2, an artificial intelligence system that can create realistic visuals and art from a text description, he asked for a minimalist sports car inspired by a MacBook and a Magic Mouse, made of metal and glass. He also instructed the AI to style the design in the manner of Jony Ive, Apple's former head of design.
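For context, a request like Mauriello's can be issued programmatically. The sketch below is only an illustration: it assumes the pre-1.0 "openai" Python client and an API key in the OPENAI_API_KEY environment variable, and the prompt merely paraphrases the description above.

```python
# Minimal sketch (hypothetical setup, not Mauriello's actual workflow) of
# requesting a DALL-E 2 image from a text description.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt=("minimalist sports car made of metal and glass, "
            "inspired by a MacBook and a Magic Mouse, styled like Jony Ive"),
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # link to the generated image
```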



On Wednesday, OpenAI released a new open source AI model called Whisper that recognizes and translates audio at a level that approaches human recognition ability. It can transcribe interviews, podcasts, conversations, and more.

OpenAI trained Whisper on 680,000 hours of audio data and matching transcripts in 98 languages collected from the web. According to OpenAI, this open-collection approach has led to “improved robustness to accents, background noise, and technical language.” It can also detect the spoken language and translate it to English.
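For readers who want to try it, here is a minimal sketch using the open source openai-whisper Python package (the package choice and the file name interview.mp3 are assumptions, not details from the announcement) showing transcription, language detection, and translation to English.

```python
# Minimal Whisper sketch: transcribe an audio file, then translate it to English.
import whisper

model = whisper.load_model("base")  # larger checkpoints trade speed for accuracy

# Transcribe in the original language; Whisper detects the spoken language itself.
result = model.transcribe("interview.mp3")
print(result["language"], result["text"])

# Translate non-English speech directly into English text.
translated = model.transcribe("interview.mp3", task="translate")
print(translated["text"])
```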

As many as 350,000 open source projects are believed to be potentially vulnerable to exploitation as a result of a security flaw in a Python module that has remained unpatched for 15 years.

The open source repositories span a number of industry verticals, such as software development, artificial intelligence/machine learning, web development, media, security, and IT management.

The shortcoming, tracked as CVE-2007-4559 (CVSS score: 6.8), is rooted in the tarfile module, successful exploitation of which could lead to code execution from an arbitrary file write.
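To illustrate the class of bug, here is a hedged Python sketch (the safe_extract helper and file names are hypothetical, not from the advisory): an archive member whose name contains ".." components can escape the destination directory when passed to tarfile.extractall(), so the member paths are checked before extraction.

```python
# Guarded tar extraction illustrating the CVE-2007-4559 path-traversal issue.
import os
import tarfile

def safe_extract(tar_path: str, dest_dir: str) -> None:
    """Extract a tar archive, rejecting members that would escape dest_dir."""
    dest_dir = os.path.abspath(dest_dir)
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            target = os.path.abspath(os.path.join(dest_dir, member.name))
            if not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked path traversal attempt: {member.name}")
        tar.extractall(dest_dir)

# Usage (hypothetical paths):
# safe_extract("upload.tar.gz", "/tmp/extracted")
```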

https://youtube.com/watch?v=R0NP5eMY7Q8&feature=share

Quantum algorithms: An algorithm is a sequence of steps that leads to the solution of a problem. In order to execute these steps on a device, one must use the specific instruction sets the device is designed to execute.

Quantum computing introduces different instruction sets that are based on a completely different idea of execution when compared with classical computing. The aim of quantum algorithms is to use quantum effects like superposition and entanglement to get the solution faster.
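As a concrete illustration, here is a minimal sketch (assuming the Qiskit library, which is not mentioned in the original item) of a two-qubit circuit whose instructions create superposition and entanglement, the effects quantum algorithms exploit.

```python
# Bell-state sketch: a Hadamard gate creates superposition, a CNOT entangles.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

bell = QuantumCircuit(2)
bell.h(0)      # superposition: |0> -> (|0> + |1>)/sqrt(2)
bell.cx(0, 1)  # entanglement: correlate qubit 1 with qubit 0

state = Statevector.from_instruction(bell)
print(state)   # amplitudes of the entangled state (|00> + |11>)/sqrt(2)
```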

Artificial Intelligence vs Artificial General Intelligence: Eric Schmidt Explains the Difference.

https://youtu.be/VFuElWbRuHM


Tool use has long been a hallmark of human intelligence, as well as a practical problem to solve for a vast array of robotic applications. But machines are still wonky at exerting just the right amount of force to control tools that aren’t rigidly attached to their hands.

To manipulate said tools more robustly, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with the Toyota Research Institute (TRI), have designed a system that can grasp tools and apply the appropriate amount of force for a given task, like squeegeeing up liquid or writing out a word with a pen.

The system, dubbed Series Elastic End Effectors, or SEED, uses soft bubble grippers and embedded cameras to map how the grippers deform over a six-dimensional space (think of an airbag inflating and deflating) and apply force to a tool. With six degrees of freedom, the object can be moved left and right, up and down, and back and forth, as well as rolled, pitched, and yawed. The closed-loop controller (a self-regulating system that maintains a desired state without human intervention) uses SEED and visuotactile feedback to adjust the position of the robot arm in order to apply the desired force.
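As a purely illustrative aside (this is not the CSAIL/TRI implementation), the closed-loop idea can be sketched as a loop that compares a measured contact force against a target and nudges the tool position until the error shrinks. All names and numbers below are made up for the example.

```python
# Hypothetical proportional force controller: press harder if force is too low,
# back off if it is too high.
def force_control_step(measured_force: float, target_force: float,
                       position: float, gain: float = 0.01) -> float:
    error = target_force - measured_force
    return position + gain * error

# Fake "sensor": contact force grows linearly as the tool presses into a surface.
position, stiffness, target = 0.0, 50.0, 5.0   # arbitrary units
for _ in range(100):
    measured = stiffness * max(position, 0.0)  # stand-in for visuotactile feedback
    position = force_control_step(measured, target, position)
print(round(stiffness * position, 2))          # converges near the 5.0 target force
```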

Those who are venturing into the architecture of the metaverse have already asked themselves this question. A playful environment where all formal dreams are possible, where determining aspects for architecture such as solar orientation, ventilation, and climate will no longer be necessary, where, to Louis Kahn's despair, there is no longer a dynamic of light and shadow, just an open and infinite field. The metaverse is an extension of various technologies, or, as some call it, a combination of several powerful technologies: augmented reality, virtual reality, mixed reality, artificial intelligence, blockchain, and a 3D world.

This technology is still under development. However, the metaverse seems poised to make a significant difference in the education domain, and its ability to connect students across the world on a single platform may bring positive change. But the metaverse is not only about remote learning. It is much more than that.

Architecture emerged on the construction site, at a time when there was no drawing, only experimentation. Over time, thanks to Brunelleschi and the Florence dome in the 15th century, we witnessed the first detachment from masonry, a social division of labor from which liberal art and mechanical art emerged. This detachment generated different challenges and placed architecture on an oneiric plane, tied to paper. In other words, we no longer build structures; we design them. Now, six centuries later, it looks like we are getting ready to take another step away from the construction site, abruptly distancing ourselves from engineering and construction.

Engineered living materials promise to aid efforts in human health, energy and environmental remediation. Now they can be built big and customized with less effort.

Bioscientists at Rice University have introduced centimeter-scale, slime-like colonies of engineered bacteria that self-assemble from the bottom up. They can be programmed to soak up contaminants from the environment or to catalyze biological reactions, among many possible applications.

The creation of autonomous engineered living materials, or ELMs, has been a goal of bioscientist Caroline Ajo-Franklin since long before she joined Rice in 2019.

Following the success of the inaugural competition in 2021, Amazon is officially launching the Alexa Prize TaskBot Challenge 2. Starting today, university teams across the globe can apply to compete in developing multimodal conversational agents that assist customers in completing tasks requiring multiple steps and decisions. The first-place team will take home a prize of $500,000.

The TaskBot Challenge 2, which will begin in January 2023, addresses one of the hardest problems in conversational AI: creating next-generation conversational AI experiences that delight customers by addressing their changing needs as they complete complex tasks. It builds upon the Alexa Prize's foundation of providing universities a unique opportunity to test cutting-edge machine learning models with actual customers at scale.