Healthcare Robot with ‘Sense of Touch’ Could Reduce Infection Spread

A first-of-its-kind robot that gives clinicians the ability to ‘feel’ patients remotely has been launched as part of a Finnish hospital pilot by deep tech robotics company Touchlab, a new tenant of the National Robotarium, the world-leading centre for robotics and artificial intelligence.

Controlled by operators wearing an electronic haptic glove, the Välkky telerobot is equipped with the most advanced electronic skin (e-skin) technology ever developed to transfer a sense of touch from its robotic hand to users. E-skin is a material made up of one or more ultra-thin force sensors that transmit tactile sensations such as pressure, vibration or motion from one source to another in real time.
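The idea behind the e-skin pipeline can be illustrated with a minimal sketch: sample a grid of force sensors and normalize the raw readings into pressure levels that a haptic glove could replay. All names and values below are hypothetical; Touchlab's actual hardware interface is not public.

```python
# Minimal, hypothetical sketch of an e-skin read-out step.

def normalize_reading(raw: int, adc_max: int = 1023) -> float:
    """Map a raw ADC value from one force sensor to a 0.0-1.0 pressure level."""
    return min(max(raw / adc_max, 0.0), 1.0)

def frame_to_pressures(frame: list[list[int]]) -> list[list[float]]:
    """Convert one sampled frame of the sensor grid into normalized pressure
    levels that a haptic glove could replay as vibration intensity."""
    return [[normalize_reading(r) for r in row] for row in frame]

# Example: one sampled frame from an assumed 2x3 patch of sensors.
frame = [[0, 512, 1023], [100, 900, 300]]
pressures = frame_to_pressures(frame)
```

In a real system this conversion would run continuously, streaming each frame over the network to the operator's glove with as little latency as possible.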

The three-month pilot at Laakso Hospital in Helsinki, Finland, will see a team of purpose-trained nurses explore how robotic systems can help deliver care, reduce workload and prevent the spread of infections and diseases. The pilot is coordinated by Forum Virium Helsinki, an innovation company owned by the City of Helsinki. The research is part of a wider €7 billion project to build the most advanced hospital in Europe, due for completion in 2028.

AMD Expands AI/HPC Product Lineup With Flagship GPU-only Instinct MI300X with 192GB Memory

Alongside its EPYC server CPU updates at today’s AMD Data Center event, the company is also offering an update on the status of its nearly finished AMD Instinct MI300 accelerator family. The company’s next-generation HPC-class processors, which combine Zen 4 CPU cores and CDNA 3 GPU cores in a single package, have now grown into a multi-SKU family of XPUs.

Joining the previously announced 128GB MI300 APU, now called the MI300A, AMD is also producing a pure GPU part using the same design. This chip, dubbed the MI300X, uses only CDNA 3 GPU tiles rather than the mix of CPU and GPU tiles found in the MI300A, making it a pure high-performance GPU paired with 192GB of HBM3 memory. Aimed squarely at the large language model market, the MI300X is designed for customers who need all the memory capacity they can get to run the largest models.

First announced back in June of last year, and detailed in greater depth at CES 2023, the AMD Instinct MI300 is AMD’s big play for the AI and HPC market. The unique, server-grade APU packs both Zen 4 CPU cores and CDNA 3 GPU cores onto a single, chiplet-based chip. None of AMD’s competitors have (or will have) a combined CPU+GPU product like the MI300 series this year, giving AMD an interesting solution with a truly unified memory architecture and plenty of bandwidth between the CPU and GPU tiles.

Zuckerberg Announces Bold Plan to Jam AI Into “Every Single One of Our Products”

Meta-formerly-Facebook CEO Mark Zuckerberg has a genius new plot to add some interest to Meta-owned products: just jam in some generative AI, absolutely everywhere.

Axios reports that in an all-hands meeting on Thursday, Zuckerberg unveiled a barrage of generative AI tools and integrations, which are to be baked into both Meta’s internal and consumer-facing products, Facebook and Instagram included.

“In the last year, we’ve seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI,” Zuckerberg told Axios in a statement, “and that gives us the opportunity to now go take that technology, push it forward, and build it into every single one of our products.”

Microsoft AI Introduces Orca: A 13-Billion Parameter Model that Learns to Imitate the Reasoning Process of LFMs (Large Foundation Models)

The remarkable zero-shot learning capabilities demonstrated by large foundation models (LFMs) like ChatGPT and GPT-4 have sparked a question: Can these models autonomously supervise their behavior or other models with minimal human intervention? To explore this, a team of Microsoft researchers introduces Orca, a 13-billion parameter model that learns complex explanation traces and step-by-step thought processes from GPT-4. This innovative approach significantly improves the performance of existing state-of-the-art instruction-tuned models, addressing challenges related to task diversity, query complexity, and data scaling.

The researchers acknowledge that the query and response pairs from GPT-4 can provide valuable guidance for student models. Therefore, they enhance these pairs by adding detailed responses that offer a better understanding of the reasoning process employed by the teachers when generating their responses. By incorporating these explanation traces, Orca equips student models with improved reasoning and comprehension skills, effectively bridging the gap between teachers and students.

The research team utilizes the Flan 2022 Collection to further enhance Orca’s learning process. The team samples tasks from this extensive collection to ensure a diverse mix of challenges. These tasks are then sub-sampled to generate complex prompts, which serve as queries for LFMs. This approach creates a diverse and rich training set that facilitates robust learning for Orca, enabling it to tackle a wide range of tasks effectively.
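The data-construction recipe described above can be sketched in a few lines: pair each sampled query with a system instruction that elicits a step-by-step explanation from the teacher, and keep the teacher's full trace as the student's training target. Function names, field names, and the system prompt wording here are illustrative assumptions, not taken from the paper's code.

```python
# Hedged sketch of "explanation tuning" data construction, as described
# for Orca. All names and the prompt text are illustrative assumptions.
import random

EXPLAIN_SYSTEM = "You are a helpful assistant. Think step by step and justify your answer."

def build_example(query: str, teacher_response: str) -> dict:
    """One training record: system instruction + query + teacher's explanation trace."""
    return {"system": EXPLAIN_SYSTEM, "user": query, "target": teacher_response}

def sample_queries(task_pool: list[list[str]], per_task: int, seed: int = 0) -> list[str]:
    """Sub-sample prompts across tasks to keep the training mix diverse."""
    rng = random.Random(seed)
    picked = []
    for task in task_pool:
        picked.extend(rng.sample(task, min(per_task, len(task))))
    return picked
```

The key design choice is that the target is not just the teacher's final answer but its entire reasoning trace, which is what the researchers credit for the student's improved reasoning.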

A linguistics expert explains why humans and AI both recycle language

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought—of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?
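The phone-keyboard guessing described above can be demonstrated with a toy bigram model: a vastly simplified stand-in for what real keyboards do, but the principle of predicting the likeliest next word from preceding context is the same.

```python
# A toy bigram next-word predictor. Real keyboard and chatbot models are
# far larger, but this shows the "recycled language" mechanism in miniature:
# suggestions come from statistics over text the model has already seen.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    nexts = counts.get(word.lower())
    return nexts.most_common(1)[0][0] if nexts else None

model = train_bigrams("i am not a robot i am part robot am i a robot")
```

Here `suggest(model, "a")` returns "robot" simply because "a robot" is the most common continuation in the training text, which is, in essence, how every autocomplete suggestion recycles earlier human language.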

Malicious hackers are weaponizing generative AI

Although I’m swearing off studies as blog fodder, it did come to my attention that Vulcan Cyber’s Voyager18 research team recently issued an advisory validating that generative AI, such as ChatGPT, would be turned into a weapon quickly, ready to attack cloud-based systems near you. Most cloud computing insiders have been waiting for this.

New ways to attack

A new breaching technique using the OpenAI language model ChatGPT has emerged: attackers are spreading malicious packages into developers’ environments. Experts are seeing ChatGPT generate URLs, references, code libraries, and functions that do not exist. According to the report, these “hallucinations” may result from stale training data. By registering the fabricated code libraries (packages) that ChatGPT recommends and distributing malicious code under those names, attackers can compromise developers while sidestepping conventional techniques such as typosquatting.
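One practical defense against this attack is to confirm that an AI-suggested package actually exists on the registry before installing it. The URL pattern below is PyPI's real JSON API; the `package_exists` helper and its injectable `fetch` parameter are an illustrative sketch, not an established tool.

```python
# Hedged defensive sketch: verify an AI-suggested package name against
# PyPI before installing it. The helper and its use are illustrative.
import urllib.request
import urllib.error

def package_exists(name: str, fetch=None) -> bool:
    """Return True if `name` is a real project on PyPI.

    `fetch` can be injected for testing; by default it performs an HTTP GET
    against PyPI's JSON API and treats a 404 status as "does not exist".
    """
    url = f"https://pypi.org/pypi/{name}/json"
    if fetch is None:
        def fetch(u):
            try:
                with urllib.request.urlopen(u, timeout=10) as resp:
                    return resp.status
            except urllib.error.HTTPError as e:
                return e.code
    return fetch(url) == 200

# Usage: refuse to install anything the registry has never heard of.
# if not package_exists(suggested_name):
#     raise SystemExit("refusing to install unknown package")
```

Existence alone is not proof of safety, since an attacker may already have registered the hallucinated name, so this check belongs alongside, not instead of, reviewing a package's maintainers and download history.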

NVIDIA’S HUGE AI Chip Breakthroughs Change Everything (Supercut)

Highlights from the latest #nvidia keynote at Computex in Taiwan, home of TSMC and the world’s capital of semiconductor manufacturing and chip fabrication. Topics include @NVIDIA’s insane H100 datacenter GPUs, Grace Hopper superchips, the GH200 AI supercomputer, and how these chips will power generative AI technologies like #chatgpt by #openai and reshape computing as we know it.
