AI can assist the invention process, but ‘the use of an AI system by a natural person does not preclude a natural person from qualifying as an inventor.’
You’ll soon be able to tell ChatGPT to forget things — or remember specific things in future conversations.
Here’s a ChatGPT guide to help you understand OpenAI’s viral text-generating system. We outline the most recent updates and answer your FAQs.
Otter, the AI-powered meeting assistant that transcribes audio in real time, is adding another layer of AI to its product with today’s introduction of Meeting GenAI, a new set of AI tools for meetings. Meeting GenAI includes an AI chatbot you can query for information about past meetings you’ve recorded with Otter, an AI chat feature that teams can use, and an AI conversation summary that provides an overview of a meeting so you don’t have to read the full transcript to catch up.
Although journalists and students may use AI to record things like interviews or lectures, Otter’s new AI features are aimed more at those who use the meeting assistant in a corporate environment. The company envisions the new tools as a complement to, or replacement for, the AI features offered by services like Microsoft Copilot, Zoom AI Companion and Google Duet.
Otter CEO Sam Liang explains that the idea to introduce the new AI tools was inspired by his own busy schedule.
Nvidia, ever keen to incentivize purchases of its latest GPUs, is releasing a tool that lets owners of GeForce RTX 30 Series and 40 Series cards run an AI-powered chatbot offline on a Windows PC.
Nvidia has released a new tool, Chat with RTX, that allows users to run a GenAI model offline — and fine-tune it on their data.
This month, Google unveiled its latest attempt to dethrone ChatGPT, which has reigned as king of the generative AI chatbots since its launch.
Bard, now renamed Gemini, was released in early 2023, following OpenAI’s groundbreaking LLM-powered chat interface.
Dive into the ultimate AI showdown between ChatGPT and Google’s Gemini to discover which platform claims the crown for superior intelligence, versatility and innovation.
Apple’s MGIE is a new AI model that can edit images based on natural language instructions, using multimodal large language models to generate expressive and imaginative edits.
The promise and peril of the internet has always been a memory greater than our own, a permanent recall of information and events that our brains can’t store. More recently, tech companies have promised that virtual assistants and chatbots could handle some of the mnemonic load by both remembering and reminding. It’s a vision of the internet as a conversation layer rather than a repository.
That’s what OpenAI’s latest release is supposed to provide. The company is starting to roll out long-term memory in ChatGPT, a function that maintains a memory of who you are, how you work, and what you like to chat about. Called simply Memory, it’s an AI personalization feature that turbocharges the “custom instructions” tool OpenAI released last July. Using ChatGPT custom instructions, a person could tell the chatbot that they’re a technology journalist based in the Bay Area who enjoys surfing, and the chatbot would consider that information in future responses within that conversation, like a first date who never forgets the details.
Now, ChatGPT’s memory persists across multiple chats. The service will also remember personal details about a ChatGPT user even if they don’t set a custom instruction or tell the chatbot directly to remember something; it just picks up and stores details as conversations roll on. This will work across both the free (ChatGPT 3.5) and paid (ChatGPT 4) versions.
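OpenAI has not published how Memory works internally, so the following is only a rough mental model: a toy personalization layer, sketched in Python, that stores facts gleaned from conversation and injects them into later prompts. Every name here (MemoryStore, remember, build_prompt) is invented for illustration and is not OpenAI’s API.

```python
class MemoryStore:
    """Toy long-term memory: remembered facts persist across chats
    and are prepended to future prompts as extra context."""

    def __init__(self):
        self.facts = []

    def remember(self, fact: str) -> None:
        # Called when a detail seems worth keeping, e.g. a stated preference
        if fact not in self.facts:
            self.facts.append(fact)

    def forget(self, fact: str) -> None:
        # Lets the user explicitly remove a stored detail
        if fact in self.facts:
            self.facts.remove(fact)

    def build_prompt(self, user_message: str) -> str:
        # Inject remembered facts so a fresh chat still "knows" the user
        context = "\n".join(f"- {f}" for f in self.facts)
        return f"Known about the user:\n{context}\n\nUser: {user_message}"


# Hypothetical usage, echoing the example above
memory = MemoryStore()
memory.remember("Technology journalist based in the Bay Area")
memory.remember("Enjoys surfing")
print(memory.build_prompt("Suggest a weekend story idea."))
```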
Researchers based at the Drexel University College of Engineering have devised a new method for performing structural safety inspections using autonomous robots aided by machine learning technology.
In an article recently published in the Elsevier journal Automation in Construction, they presented a new multi-scale monitoring system informed by deep-learning algorithms that find cracks and other damage to buildings, then use LiDAR to produce three-dimensional images that aid inspectors in their documentation.
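The paper’s own models and data are not reproduced here; what follows is a hypothetical Python sketch of the kind of two-stage pipeline the article describes, in which a learned detector screens images for cracks and LiDAR captures detailed 3-D data only where damage is flagged. All names (Finding, inspect_structure, crack_detector, lidar_scanner) are placeholders, not the authors’ code.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    location: tuple     # image coordinates of the suspected damage
    confidence: float   # detector score
    point_cloud: list   # 3-D points captured around the damage


def inspect_structure(images, crack_detector, lidar_scanner, threshold=0.5):
    """Hypothetical two-stage inspection loop: a deep-learning detector
    screens camera images for cracks, then LiDAR captures a detailed
    3-D point cloud only where damage was flagged."""
    findings = []
    for image in images:
        # Stage 1: coarse screening with the learned crack detector
        for location, confidence in crack_detector(image):
            if confidence < threshold:
                continue
            # Stage 2: fine-grained 3-D capture for inspector documentation
            cloud = lidar_scanner(location)
            findings.append(Finding(location, confidence, cloud))
    return findings


# Trivial stand-ins so the sketch runs end to end
demo = inspect_structure(
    images=["frame_001"],
    crack_detector=lambda img: [((120, 45), 0.9)],
    lidar_scanner=lambda loc: [(0.0, 0.0, 0.0)],
)
print(demo)
```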
The development could ease the enormous task of maintaining the health of structures that are increasingly being reused or restored in cities large and small across the country. Despite the relative age of America’s built environment, roughly two-thirds of today’s existing buildings will still be in use in 2050, according to Gensler’s predictions.
Researchers have proposed a new strategy for the shape assembly of robot swarms based on the idea of mean-shift exploration: When a robot is surrounded by neighboring robots and unoccupied locations, it actively gives up its current location by exploring the highest density of nearby unoccupied locations in the desired shape.
The study, titled “Mean-shift exploration in shape assembly of robot swarms,” has been published in Nature Communications.
This idea is realized by adapting the mean-shift algorithm, an optimization technique widely used in machine learning for locating the maxima of a density function.
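For readers unfamiliar with the technique the swarm strategy adapts, here is a minimal, self-contained Python sketch of classic mean-shift, not the authors’ swarm controller: each sample is repeatedly shifted toward the kernel-weighted mean of the data, which moves it uphill toward a local maximum of the density. The Gaussian kernel, bandwidth, and iteration count are illustrative choices.

```python
import numpy as np


def mean_shift(points, bandwidth=1.0, n_iters=50, tol=1e-5):
    """Classic mean-shift: move each point toward the kernel-weighted
    mean of the samples so it climbs toward a local density maximum."""
    shifted = points.copy()
    for _ in range(n_iters):
        moved = np.zeros_like(shifted)
        for i, x in enumerate(shifted):
            # Gaussian kernel weight of every sample relative to x
            dists = np.linalg.norm(points - x, axis=1)
            weights = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))
            # Shift x to the weighted mean of the samples
            moved[i] = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(moved - shifted) < tol:
            shifted = moved
            break
        shifted = moved
    return shifted  # each row ends up near a mode of the density


# Toy usage: two clusters of 2-D samples collapse onto two modes
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
modes = mean_shift(samples, bandwidth=0.8)
print(np.unique(np.round(modes, 1), axis=0))
```

In the swarm setting described above, the “density” being climbed is defined over unoccupied cells of the target shape rather than over data samples, but the hill-climbing mechanics are the same.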