New research reveals how basic psychological needs influence attitudes towards artificial intelligence

New research published in Telematics and Informatics provides evidence that the fulfillment of basic psychological needs through technology use is linked to changes in attitudes towards artificial intelligence over time. The findings indicate that self-determination, particularly feelings of competence and relatedness, plays a crucial role in shaping both negative and positive attitudes towards this emerging technology.

“We live in a world where artificial intelligence (AI) is becoming more common and accessible than ever. People’s attitudes towards AI will most certainly have a huge effect on how fast and widely AI can spread in society and how the development of AI will turn out,” said study author Jenna Bergdahl, a researcher at the Emerging Technologies Lab at Tampere University.

“As a researcher, I work in the Emerging Technologies Lab at Tampere University, where we are particularly interested in the new technological forms of life that constantly challenge and transform human and post-human living. Two projects from the Emerging Technologies Lab, called UrbanAI and Self & Technology, are focusing especially on artificial intelligence in society and conducting cross-national social psychological research on human-technology interaction.”

A.I. is making some common side hustles more lucrative—these can pay up to $100 per hour

Artificial intelligence still has a long way to go before completely taking over most human jobs. But it can already make some side hustles easier and more lucrative, primarily by saving people time.

“Automation, I think, is the key to reducing your workload,” Sean Audet, a food photographer who uses generative AI tools like ChatGPT to write emails and business plans, told CNBC Make It earlier this month. “When a client first reaches out to me, I need to be able to quickly deliver a bunch of information about services and costs … in a nice, succinct and personalized way.”

3D printed robotic gripper doesn’t need electronics to function

A new soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work. The device was developed by a team of roboticists at the University of California San Diego, in collaboration with researchers at the BASF corporation, who detailed their work in Science Robotics.

The researchers wanted to design a soft gripper that would be ready to use right as it comes off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects. No such gripper existed before this work.

“We designed functions so that a series of valves would allow the gripper to both grip on contact and release at the right time,” said Yichen Zhai, a postdoctoral researcher in the Bioinspired Robotics and Design Lab at the University of California San Diego and the lead author of the paper. “It’s the first time such a gripper can both grip and release. All you have to do is turn the gripper horizontally. This triggers a change in the airflow in the valves, making the two fingers of the gripper release.”
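The gripper itself is purely fluidic, with no electronics or software on board, but the valve behaviour Zhai describes can be summarised as a simple state machine. Below is a minimal, purely illustrative Python sketch of that logic; the class, method names, and states are assumptions for illustration, not part of the published design.

```python
# Illustrative state machine mirroring the described gripper behaviour:
# grip on contact, release when the gripper is turned horizontal.
# The real device is pneumatic and contains no software at all.

class GripperModel:
    def __init__(self):
        self.holding = False

    def on_contact(self):
        # Touch-triggered valve: contact with an object closes the fingers.
        self.holding = True
        return self.holding

    def on_orientation_change(self, horizontal: bool):
        # Gravity-driven valve: turning the gripper horizontal redirects
        # the airflow and opens the two fingers, releasing the object.
        if horizontal and self.holding:
            self.holding = False
        return self.holding


if __name__ == "__main__":
    gripper = GripperModel()
    gripper.on_contact()                 # object touches the fingers -> grip
    print("Holding:", gripper.holding)   # Holding: True
    gripper.on_orientation_change(True)  # gripper turned horizontal -> release
    print("Holding:", gripper.holding)   # Holding: False
```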

Tesla Commences Production of Dojo Supercomputer for Autonomous Vehicle Training

In its second-quarter earnings report for 2023, Tesla revealed its ambitious plan to address vehicle autonomy at scale with four key technology pillars: an extensive real-world dataset, neural net training, vehicle hardware, and vehicle software. Notably, the electric vehicle manufacturer asserted its commitment to developing each of these pillars in-house. A significant milestone in this endeavor was announced as Tesla started the production of its custom-built Dojo training computer, a critical component in achieving faster and more cost-effective neural net training.

While Tesla already possesses one of the world’s most potent Nvidia GPU-based supercomputers, the Dojo supercomputer takes a different approach by utilizing chips specifically designed by Tesla. Back in 2019, Tesla CEO Elon Musk christened the project “Dojo,” envisioning it as an exceptionally powerful training computer. He claimed that Dojo would be capable of performing an exaflop, or one quintillion (10¹⁸) floating-point operations per second, an astounding level of computational power. To put that into perspective, performing one calculation per second, it would take over 31 billion years to match what a one-exaFLOP system does in a single second, as reported by Network World.
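As a rough sanity check of that figure (a back-of-the-envelope calculation, not taken from the article):

```python
# How long would 10**18 operations take at one operation per second?
operations = 10**18                      # one exaFLOP-second of work
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 seconds in a year
years = operations / seconds_per_year
print(f"{years:.2e} years")              # ~3.17e+10, i.e. over 31 billion years
```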

The development of Dojo has been a continuous process. At Tesla’s AI Day in 2021, the automaker showcased its initial chip and training tiles, which would eventually form a complete Dojo cluster, also known as an “exapod.” Tesla’s plan involves combining two sets of three tiles in a tray, and then placing two trays in a computer cabinet to achieve over 100 petaflops per cabinet. With a 10-cabinet system, Tesla’s Dojo exapod will exceed the exaflop barrier of compute power.
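Using only the figures quoted above, the cabinet and exapod math works out as a simple multiplication; per-tile performance is not stated here, so this sketch starts from the quoted per-cabinet number.

```python
# Sanity check of the Dojo exapod figures quoted above.
tiles_per_tray = 2 * 3           # two sets of three tiles per tray
trays_per_cabinet = 2            # two trays per cabinet
petaflops_per_cabinet = 100      # "over 100 petaflops per cabinet" (lower bound)
cabinets_per_exapod = 10

tiles_per_cabinet = tiles_per_tray * trays_per_cabinet
exapod_petaflops = petaflops_per_cabinet * cabinets_per_exapod

print(tiles_per_cabinet)   # 12 tiles per cabinet
print(exapod_petaflops)    # 1000 PFLOPS = 1 exaFLOP, crossing the "exaflop barrier"
```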

GitHub CEO says Copilot will write 80% of code “sooner than later”

By simply pressing the tab key, a developer using Copilot can finish a line, generate blocks of code, or even write entire programs. According to GitHub, over 10,000 organizations, ranging from Coca-Cola to Airbnb, have signed up for Copilot’s enterprise version, and more than 30,000 employees at Microsoft itself now regularly code with assistance from Copilot.
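As a purely illustrative example of that workflow (hand-written here, not actual Copilot output), a developer might type only a comment and a function signature and accept the suggested body with the tab key:

```python
# The developer writes the comment and signature; an assistant such as
# Copilot proposes the body, which is accepted by pressing Tab.

# Return the n-th Fibonacci number iteratively.
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


print(fibonacci(10))  # 55
```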

“Sooner than later, 80% of the code is going to be written by Copilot. And that doesn’t mean the developer is going to be replaced.”

Recently, Freethink spoke with Thomas Dohmke, GitHub’s CEO, to learn more about how Copilot promises to refashion programming as a profession, and the questions AI-powered development raises about the future of innovation itself. We also talked about why coding with Copilot is so much fun, how AI is going to change the way we learn, and whether Copilot can fix banks that are still running COBOL on mainframes.

RT-2: New model translates vision and language into action

Robotic Transformer 2 (RT-2) is a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control.

High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.

In our paper, we introduce RT-2, which learns from both web and robotics data and translates this knowledge into generalised instructions for robotic control while retaining web-scale capabilities.
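RT-2's actual architecture and interfaces are described in the paper; as a loose conceptual sketch only, a vision-language-action model can be thought of as mapping an image plus a natural-language instruction to a sequence of action tokens that are then decoded into robot commands. The class and function names below are hypothetical and are not part of any released RT-2 code.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical types for illustration only; RT-2's real implementation differs.

@dataclass
class Observation:
    image: bytes          # camera frame from the robot
    instruction: str      # e.g. "pick up the object closest to the edge"

@dataclass
class Action:
    delta_position: List[float]   # end-effector translation (x, y, z)
    delta_rotation: List[float]   # end-effector rotation
    gripper_closed: bool

class VisionLanguageActionModel:
    """Conceptual stand-in for a VLA model such as RT-2."""

    def predict_action_tokens(self, obs: Observation) -> List[int]:
        # A real model would run a large vision-language backbone here and
        # emit discrete tokens encoding the next robot action.
        raise NotImplementedError

    def decode(self, tokens: List[int]) -> Action:
        # Tokens are mapped back to continuous robot commands.
        raise NotImplementedError

def control_step(model: VisionLanguageActionModel, obs: Observation) -> Action:
    """One closed-loop control step: observation in, robot action out."""
    tokens = model.predict_action_tokens(obs)
    return model.decode(tokens)
```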

Improved AI model boosts GitHub Copilot’s code generation capabilities

GitHub Copilot is getting an upgrade with an improved AI model and enhanced contextual filtering, resulting in faster and more tailored code suggestions for developers.

The new AI model delivers a 13% improvement in latency, while enhanced contextual filtering yields a 6% relative improvement in code acceptance. These improvements are coming to GitHub Copilot for Individuals and GitHub Copilot for Business.

According to GitHub, the new model was developed together with OpenAI and Azure AI, and the 13% improvement in latency means that GitHub Copilot generates code suggestions for developers faster than ever before, promising a significant increase in overall productivity.