The qb SoftHand2 Research is the stronger, smarter and more versatile evolution of the qb SoftHand Research: an anthropomorphic robotic hand with 19 dislocatable, self-healing finger joints. It is adaptable and robust, easy to use and flexible.

The qb SoftHand2 Research strikes a balance between mechanical complexity and dexterity. The new hand can perform both precision and power grips, as well as manipulate objects while maintaining a stable grip.

The introduction of a second synergy allows the qb SoftHand2 Research to manipulate objects designed for the human hand without changing the wrist orientation.

The Indian Space Research Organisation (ISRO) is set to launch a humanoid robot into space as part of its Gaganyaan mission, its first human spaceflight mission.

The much-delayed mission is back on track after ISRO successfully landed its probe near the Moon's south pole, a world first.

According to ISRO, the Gaganyaan project was established to demonstrate human spaceflight capability by launching a three-person crew to an orbit of 248 miles for a three-day mission and then bringing them back safely to Earth, with a splashdown in Indian waters.

A robot moves a toy package of butter around a table in the Intelligent Robotics and Vision Lab at The University of Texas at Dallas. With every push, the robot is learning to recognize the object through a new system developed by a team of UT Dallas computer scientists.

The new system allows the robot to push objects multiple times until a sequence of images is collected, which in turn enables the system to segment all the objects in the sequence until the robot recognizes them. Previous approaches have relied on a single push or grasp by the robot to "learn" the object.
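The core intuition behind this kind of interactive segmentation is that pixels which move together across a push belong to the same object. The following toy sketch (an assumption for illustration, not the UT Dallas team's actual code) accumulates object masks over a sequence of frames by flood-filling the pixels that changed between consecutive images:

```python
# Toy sketch of motion-based segmentation from a push sequence:
# pixels that change together between consecutive frames are
# grouped into one object mask via 4-connected flood fill.

def changed_pixels(before, after):
    """Coordinates that differ between two frames (grids as lists of lists)."""
    return {(r, c)
            for r, row in enumerate(before)
            for c, v in enumerate(row)
            if after[r][c] != v}

def group_adjacent(pixels):
    """Split a set of pixel coordinates into 4-connected components."""
    remaining, masks = set(pixels), []
    while remaining:
        stack, mask = [remaining.pop()], set()
        while stack:
            r, c = stack.pop()
            mask.add((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    stack.append(nb)
        masks.append(mask)
    return masks

def segment_sequence(frames):
    """Accumulate masks over every consecutive pair of frames,
    mimicking 'push repeatedly until the objects separate out'."""
    masks = []
    for before, after in zip(frames, frames[1:]):
        masks.extend(group_adjacent(changed_pixels(before, after)))
    return masks
```

In this sketch, two objects that shift under separate pushes produce two disjoint change regions and hence two masks; a real system would replace the pixel-difference step with learned feature matching across the image sequence.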

The team presented its research paper at the Robotics: Science and Systems conference held July 10–14 in Daegu, South Korea. Papers for the conference were selected for their novelty, technical quality, significance, potential impact and clarity.

If modern artificial intelligence has a founding document, a sacred text, it is Google’s 2017 research paper “Attention Is All You Need.” This paper introduced a new deep learning architecture known as the transformer, which has gone on to revolutionize the field of AI over the past half-decade.

The generative AI mania currently taking the world by storm can be traced directly to the invention of the transformer. Every major AI model and product in the headlines today—ChatGPT, GPT-4, Midjourney, Stable Diffusion, GitHub Copilot, and so on—is built using transformers.


Transformers are remarkably general-purpose: while they were initially developed for language translation specifically, they are now advancing the state of the art in domains ranging from computer vision to robotics to computational biology.

Racially biased artificial intelligence (AI) is not only misleading, it can be downright detrimental, destroying people's lives. This is a warning that University of Alberta Faculty of Law assistant professor Dr. Gideon Christian issued in a press release from the institution.

Christian is most notably the recipient of a $50,000 Office of the Privacy Commissioner Contributions Program grant for a research project called Mitigating Race, Gender and Privacy Impacts of AI Facial Recognition Technology. The initiative seeks to study race issues in AI-based facial recognition technology in Canada. Christian is considered an expert on AI and the law.

“There is this false notion that technology unlike humans is not biased. That’s not accurate,” said Christian, PhD.

Artificial intelligence (AI) has been helping humans in IT security operations since the 2010s, analyzing massive amounts of data quickly to detect the signals of malicious behavior. With enterprise cloud environments producing terabytes of data to be analyzed, threat detection at the cloud scale depends on AI. But can that AI be trusted? Or will hidden bias lead to missed threats and data breaches?

Bias can create risks in AI systems used for cloud security. There are steps humans can take to mitigate this hidden threat, but first, it’s helpful to understand what types of bias exist and where they come from.

Today marks nine months since ChatGPT was released, and six weeks since we announced our AI Start seed fund. Based on our conversations with scores of inception and early-stage AI founders, and hundreds of leading CXOs (chief experience officers), I can attest that we are definitely in exuberant times.

In the span of less than a year, AI investments have become de rigueur in any portfolio, new private company unicorns are being created every week, and the idea that AI will drive a stock market rebound is taking root. People outside of tech are becoming familiar with new vocabulary.

Large language models. ChatGPT. Deep-learning algorithms. Neural networks. Reasoning engines. Inference. Prompt engineering. Copilots. Leading strategists and thinkers are sharing their views on how AI will transform business, unlock potential, and contribute to human flourishing.

Since its public launch last year, the artificially intelligent chatbot ChatGPT has simultaneously wowed and frightened the world with its deep knowledge, its surprising empathy, and its undeniable potential to change the world in unforeseen, possibly miraculous or calamitous, ways. Now, it’s making it possible to digitally resurrect the dead in the form of chatbots trained on data of the deceased.

Developed by OpenAI, ChatGPT is an AI program called a large language model. Trained on more than 300 billion words from all sorts of sources on the Internet, ChatGPT responds to prompts from humans by predicting the word it should use next based on both its training and the prompt. The result is a stream of communication that’s both informative and human-like. ChatGPT has passed difficult tests, written scientific papers, and convinced many Microsoft scientists that it actually can understand language and utilize reason.
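That "predict the next word" loop can be illustrated with a toy sketch (an assumption for illustration, nothing like OpenAI's implementation): here a tiny hand-built bigram table stands in for the trained neural network, and generation greedily appends the most probable continuation:

```python
# Toy next-word prediction loop. A real LLM replaces BIGRAMS
# with a neural network that scores every word in its vocabulary
# given the full prompt, but the generation loop is the same idea.

BIGRAMS = {
    "the":   {"robot": 0.6, "hand": 0.4},
    "robot": {"moves": 1.0},
    "moves": {"the": 1.0},
}

def next_word(word):
    """Pick the most probable continuation (greedy decoding)."""
    candidates = BIGRAMS.get(word, {})
    return max(candidates, key=candidates.get) if candidates else None

def generate(prompt, max_words=5):
    """Repeatedly append the predicted next word to the prompt."""
    words = prompt.split()
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)
```

Real models condition on the entire context rather than just the previous word, and usually sample from the predicted distribution instead of always taking the top choice, which is what makes their output varied rather than repetitive.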

ChatGPT and other large language models can also receive more specific training to shape their responses. Programmer Jason Rohrer realized that he could create chatbots that emulate specific people by feeding ChatGPT examples of how they communicate and details of their lives. He started off with Star Trek's Mr. Spock, as any good nerd would. He next launched a website called Project December, which allows paying customers to input all sorts of data and information and make their own personalized chatbots, even ones based upon deceased friends and family.
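The persona technique boils down to prompt construction: prepend biographical facts and example exchanges so the model's next-word predictions imitate a specific person. The sketch below is purely illustrative; the function name and prompt format are assumptions, not Project December's actual interface:

```python
# Hypothetical sketch of persona prompting: biographical details
# plus example dialogue are prepended to the user's message, so
# the language model continues in the persona's voice.

def build_persona_prompt(name, facts, examples, user_message):
    """Assemble a few-shot persona prompt as one text block.
    facts: list of strings; examples: list of (question, answer)."""
    lines = [f"You are {name}.", "Facts about you:"]
    lines += [f"- {fact}" for fact in facts]
    lines.append("Example conversations:")
    for question, answer in examples:
        lines += [f"User: {question}", f"{name}: {answer}"]
    # End with an open turn for the model to complete.
    lines += [f"User: {user_message}", f"{name}:"]
    return "\n".join(lines)
```

Because the prompt ends mid-conversation on the persona's turn, a next-word predictor naturally completes it in character; the more representative the examples, the closer the imitation.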

At 14, Anton received an old laptop that changed everything. Now he’s using AI to help himself and others achieve their potential.


Neither keyboards nor voice-to-text work well for Anton, a developer with cerebral palsy. He uses AI and LLMs to pursue his passion for programming and shows others how they can harness these technologies to accomplish more.

This talk discusses the changing dynamics of jobs in relation to AI. While there is widespread apprehension that AI will kill jobs, it highlights the many new jobs AI is creating, and stresses the need for professionals and students to reskill in areas as diverse as AI and automation. The argument that AI is simply going to kill jobs does not hold; instead, the talk reinforces that reskilling matters most.

LinkedIn: https://www.linkedin.com/in/tarah-ai-8316b7153/
Twitter: https://twitter.com/tarahtech

#reskilling #AI #Automation #jobs #newskills #oldskills #reinvention #humanresources.