
In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought—of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

Although I’m swearing off studies as blog fodder, it did come to my attention that Vulcan Cyber’s Voyager18 research team recently issued an advisory warning that generative AI, such as ChatGPT, is quickly being turned into a weapon, one ready to attack cloud-based systems near you. Most cloud computing insiders have been expecting this.

New ways to attack

A new breach technique that leverages the OpenAI language model ChatGPT has emerged: attackers are seeding developers’ environments with malicious packages. Researchers have observed ChatGPT generating URLs, references, code libraries, and functions that do not exist; according to the report, these “hallucinations” may stem from outdated training data. Because ChatGPT confidently recommends these fabricated code libraries (packages), attackers can register them under the suggested names and distribute malicious code through them, bypassing conventional tricks such as typosquatting.
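On the defensive side, one obvious countermeasure is to verify that an AI-suggested package actually exists on the registry before installing it. The sketch below is a minimal illustration using PyPI’s public JSON metadata endpoint; the package names in the example are hypothetical, and a real defense would also inspect release history and maintainers.

```python
import sys
import json
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint


def package_exists(name: str) -> bool:
    """Return True if the package is actually registered on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            json.load(resp)  # valid JSON metadata means the project exists
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # never published: a prime target for "AI package hallucination"
        raise


if __name__ == "__main__":
    # Hypothetical names a chatbot might suggest; always verify before `pip install`.
    for pkg in sys.argv[1:] or ["requests", "totally-made-up-helper-lib"]:
        status = "found on PyPI" if package_exists(pkg) else "NOT on PyPI, do not install blindly"
        print(f"{pkg}: {status}")
```

Even this bare existence check catches the fabricated names the advisory describes, since a hallucinated package is, by definition, one that was never published until an attacker claims it.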

Highlights from the latest #nvidia keynote at Computex in Taiwan, home of TSMC and the world’s capital of semiconductor manufacturing and chip fabrication. Topics include @NVIDIA’s insane H100 data-center GPUs, Grace Hopper superchips, the GH200 AI supercomputer, and how these chips will power generative AI technologies like #chatgpt by #openai and reshape computing as we know it.


In the digital age, SaaS businesses have started embracing transformative technologies such as artificial intelligence (AI) and cloud computing. According to one research firm, the AI market is worth nearly 100 billion USD and is expected to grow roughly twentyfold by 2030, to almost 2 trillion USD.
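The cited figures are at least internally consistent. A quick back-of-the-envelope check, assuming a roughly seven-year horizon to 2030, shows the compound annual growth rate such a forecast implies; the numbers below are the article’s, not independent estimates.

```python
# Back-of-the-envelope check of the cited AI market figures.
current_market_usd = 100e9    # ~100 billion USD today (per the article)
projected_market_usd = 2e12   # ~2 trillion USD by 2030 (per the article)
years = 7                     # rough horizon, assuming a ~2023 starting point

growth_multiple = projected_market_usd / current_market_usd
cagr = growth_multiple ** (1 / years) - 1

print(f"Growth multiple: {growth_multiple:.0f}x")      # ~20x, matching the claim
print(f"Implied CAGR over {years} years: {cagr:.0%}")  # roughly 53% per year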

Although AI promises revolutionary advancements and cloud computing enables efficient storage and processing of massive amounts of data, their rapid adoption also raises concerns about cybersecurity. In 2021, the global cost of cybercrime was estimated to be $6 trillion.

Local language data helps automated systems understand and respond to users in their own language and can help businesses reach their target audiences more effectively, said Ganesh Gopalan, founder and CEO of AI startup Gnani.ai, during a panel discussion at the Mint Digital Innovation Summit & Awards on Friday.

“If we don’t have, firstly, content in the local language, if we can’t talk to machines in the local language, then it is not possible for any system to work and, you know, reach the right audience,” said Gopalan.

The panel discussion also included Vivekanand Pani, co-founder of Reverie Language Technologies, who agreed that the availability of data is crucial to developing an AI tool for any language.
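As a toy illustration of the point, a conversational system first has to recognize which language it is being addressed in before it can answer in kind. The sketch below assumes the third-party langdetect package and a few hard-coded canned replies; it is not how Gnani.ai or Reverie build their products, only a minimal stand-in for the routing step that local-language data ultimately powers.

```python
# pip install langdetect  -- small third-party language-identification library
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

# Hypothetical canned replies; a real system would route to a model trained on local-language data.
GREETINGS = {
    "en": "Hello! How can I help you today?",
    "hi": "नमस्ते! मैं आपकी कैसे मदद कर सकता हूँ?",
    "kn": "ನಮಸ್ಕಾರ! ನಾನು ನಿಮಗೆ ಹೇಗೆ ಸಹಾಯ ಮಾಡಬಹುದು?",
}


def reply(user_message: str) -> str:
    """Detect the user's language and answer in it, falling back to English."""
    try:
        lang = detect(user_message)
    except LangDetectException:
        lang = "en"
    return GREETINGS.get(lang, GREETINGS["en"])


print(reply("मुझे अपने खाते का बैलेंस जानना है"))  # Hindi input -> Hindi reply
```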

Researchers led by the University of California San Diego have developed a new model that trains four-legged robots to see more clearly in 3D. The advance enabled a robot to autonomously cross challenging terrain with ease—including stairs, rocky ground and gap-filled paths—while clearing obstacles in its way.

The researchers will present their work at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR), which will take place from June 18 to 22 in Vancouver, Canada.

“By providing the robot with a better understanding of its surroundings in 3D, it can be deployed in more complex environments in the real world,” said study senior author Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.
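The published model is not reproduced here, but the general idea of fusing depth vision with the robot’s own joint readings can be sketched as a small policy network. The layer sizes, input dimensions, and the use of PyTorch below are all illustrative assumptions on my part, not the authors’ architecture.

```python
import torch
import torch.nn as nn


class DepthAwareLocomotionPolicy(nn.Module):
    """Toy sketch: encode a depth image into scene features, concatenate them with
    proprioception (joint angles and velocities), and predict joint targets.
    Shapes and layers are illustrative, not the published CVPR 2023 model."""

    def __init__(self, proprio_dim: int = 36, num_joints: int = 12):
        super().__init__()
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> 32-dim scene feature
        )
        self.policy = nn.Sequential(
            nn.Linear(32 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, num_joints),                  # target joint positions
        )

    def forward(self, depth: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        scene = self.depth_encoder(depth)
        return self.policy(torch.cat([scene, proprio], dim=-1))


# Single forward pass on dummy data: one 64x64 depth frame plus a 36-dim state vector.
policy = DepthAwareLocomotionPolicy()
actions = policy(torch.rand(1, 1, 64, 64), torch.rand(1, 36))
print(actions.shape)  # torch.Size([1, 12])
```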