Annotation and analysis of sports videos is a time-consuming task that, once automated, will provide benefits to coaches, players, and spectators. American football, as the most watched sport in the United States, could especially benefit from this automation. Manual annotation and analysis of recorded videos of American football games is an inefficient and tedious process. Currently, most college football programs focus on annotating offensive formations to help them develop game plans for their upcoming games. As a first step toward further research in this unique application, we use computer vision and deep learning to analyze an overhead image of a football play immediately before the play begins. This analysis consists of locating individual football players and labeling their positions or roles, as well as identifying the formation of the offensive team.
Despite his recent criticism, Musk has at times seemed complimentary about OpenAI’s tech — in December, he called ChatGPT “scary good.”
For a long time, scientists and engineers have drawn inspiration from the amazing abilities of animals and have sought to reverse engineer or reproduce these in robots and artificial intelligence (AI) agents. One of these behaviors is odor plume tracking, which is the ability of some animals, particularly insects, to home in on the source of specific odors of interest (e.g., food or mates), often over long distances.
A new study by researchers at the University of Washington and the University of Nevada, Reno, has taken an innovative approach, using artificial neural networks (ANNs) to understand this remarkable ability of flying insects. Their work, recently published in *Nature Machine Intelligence*, exemplifies how artificial intelligence is driving groundbreaking new scientific insights.
“We were motivated to study a complex biological behavior, odor plume-tracking, that flying insects (and other animals) use to find food or mates,” Satpreet H. Singh, the lead author on the study, told Tech Xplore. “Biologists have experimentally studied many aspects of insect plume tracking in great detail, as it is a critical behavior for insect survival and reproduction.”
The metaverse was the next big thing and now it’s generative AI. We asked top tech executives to cut through the hype. They say this one’s for real.
Deepfake technology has been around for some time, and it is currently stirring controversy over the threats it could pose should it fall into the wrong hands. Even India’s business tycoon Anand Mahindra has raised the alarm over hyper-realistic synthetic videos.
But some personalities are redefining the way viewers perceive deepfakes. For instance, David Guetta recently synthesized Eminem’s voice to hype up an event. And that is only one of many examples of people using artificial intelligence (AI) for entertainment. In fact, the technology has already arrived on platforms like Twitch, taking streaming to the next level.
Bachir Boumaaza is a YouTuber who made a name for himself with record-breaking gaming streams, which he used to support numerous charities. But he later went inactive, leaving most of his fans wondering where he had gone and thinking it might have been the end of his career.
CarMax already uses this automated vehicle inspection system.
UVeye’s technology is reducing fatalities out on the road by harnessing the power of AI and high-definition cameras to detect faulty tires, fluid leaks and damaged components before a possible accident or breakdown.
The first time a language model was used to synthesize human proteins.
Of late, AI models have really been flexing their muscles. We have recently seen how ChatGPT has become a poster child for platforms that comprehend human language. Now a team of researchers has used a language model to create amino acid sequences, demonstrating its ability to replicate aspects of human biology and evolution.
The language model, which is named ProGen, is capable of generating protein sequences with a certain degree of control. The result was achieved by training the model to learn the composition of proteins. The experiment marks the first time a language model was used to synthesize human proteins.
A study regarding the research was published in the journal *Nature Biotechnology* on Thursday. The project was a combined effort from researchers at the University of California, San Francisco; the University of California, Berkeley; and Salesforce Research, the science arm of a San Francisco-based software company.
## The significance of using a language model
Researchers say that a language model was used for its ability to generate protein sequences with a predictable function across large protein families, akin to generating grammatically and semantically correct natural language sentences on diverse topics.
“In the same way that words are strung together one-by-one to form text sentences, amino acids are strung together one-by-one to make proteins,” Nikhil Naik, the Director of AI Research at Salesforce Research, told *Motherboard*. The team applied “neural language modeling to proteins for generating realistic, yet novel protein sequences.”
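Naik's analogy — amino acids strung together one by one, like words in a sentence — can be sketched as a toy autoregressive sampler. A hedged illustration only: the `sample_sequence` helper and the uniform stand-in distribution below are hypothetical, not ProGen's actual architecture, which uses a large learned neural network in place of the toy probability function.

```python
import random

# The 20 standard amino acids as one-letter codes -- the "vocabulary",
# analogous to the word vocabulary of a text language model.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sample_sequence(next_prob, length, seed=0):
    """Autoregressively sample a protein sequence one residue at a time.

    next_prob(prefix) must return a dict mapping each amino acid to the
    probability that it comes next, given the sequence so far. In a real
    protein language model this would be a trained neural network; here
    any toy function works.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(length):
        probs = next_prob("".join(seq))
        residues = list(probs)
        weights = [probs[r] for r in residues]
        seq.append(rng.choices(residues, weights=weights, k=1)[0])
    return "".join(seq)

# Toy stand-in for a learned model: every residue is equally likely.
uniform = lambda prefix: {aa: 1.0 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}

protein = sample_sequence(uniform, length=30)
print(protein)  # a 30-residue sequence over the 20-letter alphabet
```

The point of the sketch is structural: generation proceeds token by token, with each choice conditioned on the growing prefix, which is exactly how a text model composes a sentence.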
Google worked to reassure investors and analysts on Thursday during its quarterly earnings call that it’s still a leader in developing AI. The company’s Q4 2022 results were highly anticipated as investors and the tech industry awaited Google’s response to the popularity of OpenAI’s ChatGPT, which has the potential to threaten its core business.
During the call, Google CEO Sundar Pichai talked about the company’s plans to make AI-based large language models (LLMs) like LaMDA available in the coming weeks and months. Pichai said users will soon be able to use large language models as a companion to search. An LLM, like ChatGPT, is a deep learning algorithm that can recognize, summarize and generate text and other content based on knowledge from enormous amounts of text data. Pichai said the models that users will soon be able to use are particularly good for composing, constructing and summarizing.
“Now that we can integrate more direct LLM-type experiences in Search, I think it will help us expand and serve new types of use cases, generative use cases,” Pichai said. “And so, I think I see this as a chance to rethink and reimagine and drive Search to solve more use cases for our users as well. It’s early days, but you will see us be bold, put things out, get feedback and iterate and make things better.”
Pichai’s comments about the possible ChatGPT rival come as a report revealed this week that Microsoft is working to incorporate a faster version of ChatGPT, known as GPT-4, into Bing, in a move that would make its search engine, which today has only a sliver of search market share, more competitive with Google. The popularity of ChatGPT has seen Google reportedly turning to co-founders Larry Page and Sergey Brin for help in combating the potential threat. The New York Times recently reported that Page and Brin had several meetings with executives to strategize about the company’s AI plans.
During the call, Pichai warned investors and analysts that the technology will need to scale slowly and that he sees large language model usage as still being in its “early days.” He also said that the company is developing AI with a deep sense of responsibility and that it is going to be careful when launching AI-based products, as the company plans to initially launch beta features and then slowly scale up from there.
He went on to note that Google will provide new tools and APIs for developers, creators and partners to empower them to build their own applications and discover new possibilities with AI.
The results highlight some potential strengths and weaknesses of ChatGPT.
Some of the world’s biggest academic journal publishers have banned or curbed their authors from using the advanced chatbot, ChatGPT. Because the bot uses information from the internet to produce highly readable answers to questions, the publishers are worried that inaccurate or plagiarised work could enter the pages of academic literature.
Several researchers have already listed the chatbot as a co-author in academic studies, and some publishers have moved to ban this practice. But the editor-in-chief of Science, one of the top scientific journals in the world, has gone a step further and forbidden any use of text from the program in submitted papers.
It’s not surprising the use of such chatbots is of interest to academic publishers. Our recent study, published in Finance Research Letters, showed ChatGPT could be used to write a finance paper that would be accepted for an academic journal. Although the bot performed better in some areas than in others, adding in our own expertise helped overcome the program’s limitations in the eyes of journal reviewers.
However, we argue that publishers and researchers should not necessarily see ChatGPT as a threat but rather as a potentially important aide for research — a low-cost or even free electronic assistant.
Our thinking was: if it’s easy to get good outcomes from ChatGPT by simply using it, maybe there’s something extra we can do to turn these good results into great ones.
We first asked ChatGPT to generate the standard four parts of a research study: research idea, literature review (an evaluation of previous academic research on the same topic), dataset, and suggestions for testing and examination. We specified only the broad subject and that the output should be capable of being published in “a good finance journal.”
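The workflow just described — requesting each standard section of a study under a single broad constraint — amounts to simple prompt scaffolding. A minimal sketch follows; the exact wording and the example subject are hypothetical placeholders, not the prompts used in the study.

```python
# The four standard parts of a research study, as listed above.
SECTIONS = [
    "research idea",
    "literature review",
    "dataset",
    "suggestions for testing and examination",
]

def build_prompts(subject, venue="a good finance journal"):
    """Return one chatbot prompt per standard section of a research study.

    Only the broad subject and the target publication quality are
    specified, mirroring the approach described in the text.
    """
    return {
        section: (
            f"Generate a {section} for a study on {subject}. "
            f"The output should be capable of being published in {venue}."
        )
        for section in SECTIONS
    }

# Hypothetical example subject, for illustration only.
prompts = build_prompts("cryptocurrency market volatility")
for section, prompt in prompts.items():
    print(f"[{section}] {prompt}")
```

Each generated prompt would then be submitted to the chatbot separately, and the four responses assembled into a draft paper.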