Apple reportedly limits internal use of AI-powered tools like ChatGPT and GitHub Copilot

As big tech companies are in a fierce race with each other to build generative AI tools, they are being cautious about giving their secrets away. In a move to prevent any of its data from ending up with competitors, Apple has restricted internal use of tools like OpenAI’s ChatGPT and Microsoft-owned GitHub’s Copilot, a new report says.

According to The Wall Street Journal, Apple is worried about its confidential data ending up with the developers who train these models on user data. Notably, OpenAI launched the official ChatGPT app on iOS on Thursday. Separately, Bloomberg reporter Mark Gurman tweeted that the chatbot has been on Apple’s list of restricted software for months.

I believe ChatGPT has been banned/on the list of restricted software at Apple for months. Obviously the release of ChatGPT on iOS today again makes this relevant.

Apple is on the hunt for generative AI talent

Apple, like a number of companies right now, may be grappling with what role the newest advances in AI are playing, and should play, in its business. But one thing Apple is clear on: it wants to bring more generative AI talent into the business.

The Cupertino company has posted at least a dozen job ads on its career page seeking experts in generative AI. Specifically, it’s looking for machine learning specialists “passionate about building extraordinary autonomous systems” in the field. The job ads (some of which appear to cover the same role, or call for multiple applicants) first started appearing on April 27, with the most recent published earlier this week.

The job postings are coming amid some mixed signals from the company around generative AI. During its Q2 earnings call earlier this month, CEO Tim Cook dodged giving specific answers to questions about what the company is doing in the area — but also didn’t dismiss it. While generative AI was “very interesting,” he said, Apple would be “deliberate and thoughtful” in its approach. Then yesterday, the WSJ reported that the company had started restricting use of OpenAI’s ChatGPT and other external generative AI tools for some employees over concerns of proprietary data leaking out through the platforms.

Cruise, Waymo near approval to charge for 24/7 robotaxis in San Francisco

Self-driving vehicle companies Waymo and Cruise are on the cusp of securing final approval to charge fares for fully autonomous robotaxi rides throughout the city of San Francisco at all hours of the day or night.

Amid the city’s mounting resistance to the presence of AVs, the California Public Utilities Commission (CPUC) published two draft resolutions late last week that would grant Cruise and Waymo the ability to extend the hours of operation and service areas of their currently limited robotaxi services.

The drafts are scheduled for a hearing on June 29, and there is still room for public comment, with submissions due May 31. Based on the CPUC’s draft language, many of the protests raised by the city of San Francisco have already been rejected.

New data-driven algorithm can forecast the mortality risk for certain cardiac surgery patients

A machine learning-based method developed by a Mount Sinai research team allows medical facilities to forecast the mortality risk of certain cardiac surgery patients. The new method is the first institution-specific model for determining a cardiac patient’s risk before surgery, and it was developed using vast amounts of electronic health record (EHR) data.

Comparing the data-driven approach to the current population-derived models reveals a considerable performance improvement.
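The report doesn’t describe the team’s features, model class, or evaluation, so the sketch below is purely illustrative: it uses synthetic data and an assumed gradient-boosting classifier to show how an institution-specific risk model trained on local EHR-derived features might be compared, via AUROC, against a fixed population-derived score.

```python
# Illustrative sketch only: the study's actual features, model, and evaluation
# are not described in this article. The synthetic data and gradient-boosting
# choice here are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical EHR-derived features for cardiac surgery patients.
X = np.column_stack([
    rng.normal(65, 12, n),    # age
    rng.normal(1.1, 0.4, n),  # creatinine
    rng.normal(55, 10, n),    # ejection fraction
    rng.integers(0, 2, n),    # prior cardiac surgery (0/1)
])

# Synthetic mortality labels loosely tied to the features above.
logit = (0.04 * (X[:, 0] - 65) + 0.8 * (X[:, 1] - 1.1)
         - 0.05 * (X[:, 2] - 55) + 0.7 * X[:, 3] - 3.0)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# "Institution-specific" model: fit to the local cohort's own data.
local_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
local_auc = roc_auc_score(y_test, local_model.predict_proba(X_test)[:, 1])

# Stand-in for a population-derived score: fixed coefficients, never fit locally.
pop_score = (0.03 * (X_test[:, 0] - 65) + 0.5 * (X_test[:, 1] - 1.1)
             - 0.03 * (X_test[:, 2] - 55))
pop_auc = roc_auc_score(y_test, pop_score)

print(f"Institution-specific model AUROC: {local_auc:.3f}")
print(f"Population-derived baseline AUROC: {pop_auc:.3f}")
```

In broad terms, a model trained on an institution’s own EHR data can reflect its local patient mix and documentation practices in a way a fixed population-wide score cannot, which is one plausible reason for the performance gap the researchers report.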

AI therapy: Voice-assisted Lumen app hopes to help people with mild depression and anxiety

Researchers hope the new app can help bridge the gap between supply and demand for mental health support.

Asking an AI chatbot for a rundown of the Napoleonic Wars is fine. But using a chatbot service for a therapy session?

Even ChatGPT suggests going to a traditional mental health practitioner when you pour your heart out to the AI – perhaps because the most important element of therapy is the client-therapist relationship.

UK telecoms giant BT plans to slash 55,000 jobs, with 10,000 being replaced by AI ‘by the end of the decade’

The announcement comes shortly after IBM announced it would replace 7,800 jobs with AI.

After IBM’s CEO announced earlier this month that the company could easily replace at least 7,800 human roles with artificial intelligence (AI) over the next five years, another startling announcement in the ‘Will AI replace humans?’ debate has come to the fore.

BT, a prominent British multinational telecommunications firm, said it will become a ‘leaner business’ as it announced plans to shed up to 55,000 jobs by the end of the decade, mostly in the United Kingdom. Approximately 10,000 of those roles will be replaced by AI, according to a report by The Guardian.

Students’ future in question after lecturer fails entire class for using ChatGPT

According to Rolling Stone and other news outlets, the graduation of a group of students at Texas A&M University-Commerce is in question after they were accused of using ChatGPT to write their essays.

A Texas A&M University-Commerce professor took the drastic step of failing all of his students after suspecting them of using ChatGPT to write their papers, a decision that has delayed their diplomas. According to Rolling Stone, the professor, Dr. Jared Mumm, relied on a flawed method: he fed each essay into the natural language processing software itself and asked it to judge whether it had generated the text.

“I copy and paste your responses in [ChatGPT], and [it] will tell me if the program generated the content,” the professor wrote in the email. He went on to say that he had tested each paper twice. Dr. Mumm then went on to offer the class a makeup assignment to avoid the failing grade — which could otherwise, in theory, threaten their graduation status.

Exploring the Relationship Between Artificial Intelligence (AI) and the Metaverse

Artificial intelligence (AI) and the metaverse are two of the most captivating technologies of the 21st century so far. Both are believed to have the potential to change many aspects of our lives, disrupt different industries, and enhance the efficiency of traditional workflows. While these two technologies are often looked at separately, they’re more connected than we may think. Before we explore the relationship between AI and the metaverse, let’s start by defining both terms.

The metaverse is a concept describing a hypothetical future design of the internet. It features an immersive, 3D online world where users are represented by custom avatars and access information with the help of virtual reality (VR), augmented reality (AR), and similar technologies. Instead of accessing the internet via their screens, users access the metaverse via a combination of the physical and digital. The metaverse will enable people to socialize, play, and work alongside others in different 3D virtual spaces.

A similar arrangement was described in Neal Stephenson’s 1992 science-fiction novel Snow Crash. While it was perceived as pure fantasy a mere three decades ago, it seems like it could become a reality sooner rather than later. Although the metaverse doesn’t fully exist yet, some online platforms already incorporate elements of it. For example, video games like Fortnite and Meta’s Horizon Worlds port multiple elements of our day-to-day lives into the online world.