
A recent study finds that software engineers who use code-generating AI systems are more likely to cause security vulnerabilities in the apps they develop. The paper, co-authored by a team of researchers affiliated with Stanford, highlights the potential pitfalls of code-generating systems as vendors like GitHub start marketing them in earnest.

“Code-generating systems are currently not a replacement for human developers,” Neil Perry, a PhD candidate at Stanford and the lead co-author on the study, told TechCrunch in an email interview. “Developers using them to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up tasks that they are already skilled at should carefully double-check the outputs and the context in which they are used in the overall project.”

The Stanford study looked specifically at Codex, the AI code-generating system developed by San Francisco-based research lab OpenAI. (Codex powers Copilot.) The researchers recruited 47 developers — ranging from undergraduate students to industry professionals with decades of programming experience — to use Codex to complete security-related problems across programming languages including Python, JavaScript and C.
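To make the risk concrete, consider a task of roughly the kind the study describes: writing a function that encrypts a string with a symmetric key. The sketch below is purely illustrative (the function names and the choice of library are ours, not the paper's): `xor_encrypt` stands in for the sort of homebrew cipher an assistant might plausibly suggest, while `fernet_encrypt` uses the vetted `cryptography` library a security review would expect.

```python
# Illustrative only: a task shaped like those in the Stanford study,
# "encrypt a string with a symmetric key." Names are hypothetical,
# not drawn from the paper.
from itertools import cycle

from cryptography.fernet import Fernet  # pip install cryptography


def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Insecure: repeating-key XOR leaks plaintext patterns and has no integrity check."""
    return bytes(b ^ k for b, k in zip(plaintext, cycle(key)))


def fernet_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Safer: Fernet provides authenticated encryption with a fresh IV per message."""
    return Fernet(key).encrypt(plaintext)


if __name__ == "__main__":
    message = b"transfer $100 to account 42"
    print(xor_encrypt(message, b"secret").hex())  # deterministic and pattern-leaking
    key = Fernet.generate_key()                   # random 32-byte urlsafe key
    token = fernet_encrypt(message, key)
    print(Fernet(key).decrypt(token))             # round-trips to the plaintext
```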

One afternoon in the fall of 2019, in a grand old office building near the Arc de Triomphe, I was buzzed through an unmarked door into a showroom for the future of surveillance. The space on the other side was dark and sleek, with a look somewhere between an Apple Store and a doomsday bunker. Along one wall, a grid of electronic devices glinted in the moody downlighting—automated license plate readers, Wi-Fi-enabled locks, boxy data processing units. I was here to meet Giovanni Gaccione, who runs the public safety division of a security technology company called Genetec. Headquartered in Montreal, the firm operates four of these “Experience Centers” around the world, where it peddles intelligence products to government officials. Genetec’s main sell here was software, and Gaccione had agreed to show me how it worked.

He led me first to a large monitor running a demo version of Citigraf, his division’s flagship product. The screen displayed a map of the East Side of Chicago. Around the edges were thumbnail-size video streams from neighborhood CCTV cameras. In one feed, a woman appeared to be unloading luggage from a car to the sidewalk. An alert popped up above her head: “ILLEGAL PARKING.” The map itself was scattered with color-coded icons—a house on fire, a gun, a pair of wrestling stick figures—each of which, Gaccione explained, corresponded to an unfolding emergency. He selected the stick figures, which denoted an assault, and a readout appeared onscreen with a few scant details drawn from the 911 dispatch center. At the bottom was a button marked “INVESTIGATE,” just begging to be clicked.


It’s as good a time as any to discuss the implications of advances in artificial intelligence (AI). 2022 saw interesting progress in deep learning, especially in generative models. However, as the capabilities of deep learning models increase, so does the confusion surrounding them.

On the one hand, advanced models such as ChatGPT and DALL-E produce fascinating results and give the impression of thinking and reasoning. On the other hand, they often make errors that reveal they lack some of the basic elements of intelligence that humans possess.


Tomorrow morning, I head south. Straight down I-95, from central New Jersey to northeast Florida, where I will be setting up my laptop in St. Augustine for the next two months. It’s about as far from Silicon Valley as I can be in the continental U.S., but that’s where you’ll find me gearing up for the first artificial intelligence (AI) news of 2023.

These are the 5 biggest AI stories I’m waiting for:

Nearly 70 years after having his security clearance revoked by the Atomic Energy Commission (AEC) due to suspicion of being a Soviet spy, Manhattan Project physicist J. Robert Oppenheimer has finally received some form of justice just in time for Christmas, according to a December 16 article in the New York Times. US Secretary of Energy Jennifer M. Granholm released a statement nullifying the controversial decision that badly tarnished the late physicist’s reputation, declaring it to be the result of a “flawed process” that violated the AEC’s own regulations.

Science historian Alex Wellerstein of Stevens Institute of Technology told the New York Times that the exoneration was long overdue. “I’m sure it doesn’t go as far as Oppenheimer and his family would have wanted,” he said. “But it goes pretty far. The injustice done to Oppenheimer doesn’t get undone by this. But it’s nice to see some response and reconciliation even if it’s decades too late.”

As computer scientists tackle a greater range of problems, their work has grown increasingly interdisciplinary. This year, many of the most significant computer science results also involved other scientists and mathematicians. Perhaps the most practical involved the cryptographic questions underlying the security of the internet, which tend to be complicated mathematical problems. One such problem, concerning the product of two elliptic curves and their relation to an abelian surface, ended up bringing down a promising new cryptography scheme that was thought to be strong enough to withstand an attack from a quantum computer. And a different set of mathematical relationships, in the form of one-way functions, will tell cryptographers whether truly secure codes are even possible.
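For readers unfamiliar with the term, a one-way function is simply a function that is cheap to compute but believed to be prohibitively expensive to invert; whether such functions provably exist is the open question referred to here. Below is a minimal, toy-sized sketch using modular exponentiation, whose inversion is the discrete-logarithm problem (the constants are illustrative; real systems use far larger moduli).

```python
# Toy candidate one-way function: modular exponentiation. The forward
# direction is fast; recovering the exponent (the discrete log) is
# believed hard at cryptographic sizes. Constants here are toy-sized.
P = 2_147_483_647  # the Mersenne prime 2^31 - 1
G = 7              # fixed base, fine for illustration


def forward(x: int) -> int:
    """Easy direction: square-and-multiply needs only O(log x) multiplications."""
    return pow(G, x, P)


def brute_force_inverse(y: int) -> int:
    """Hard direction: no generic shortcut beats trying exponents one by one."""
    acc = 1
    for x in range(P):
        if acc == y:
            return x
        acc = (acc * G) % P
    raise ValueError("no preimage found")


if __name__ == "__main__":
    secret = 123_456
    y = forward(secret)               # instant
    print(brute_force_inverse(y))     # feasible at this toy size, hopeless at 2048 bits
```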

Computer science, and quantum computing in particular, also heavily overlaps with physics. In one of the biggest developments in theoretical computer science this year, researchers posted a proof of the NLTS conjecture, which (among other things) states that a ghostly connection between particles known as quantum entanglement is not as delicate as physicists once imagined. This has implications not just for our understanding of the physical world, but also for the myriad cryptographic possibilities that entanglement makes possible.

China’s ByteDance is using data from TikTok to track journalists, raising eyebrows and fueling fears that security concerns over TikTok may be well founded. ByteDance wants to know which of its employees are speaking to the media.



Face recognition tools are computational models that can identify specific people in images, as well as CCTV or video footage. These tools are already being used in a wide range of real-world settings, for instance aiding law enforcement and border control agents in their criminal investigations and surveillance efforts, and for authentication and biometric applications. While most existing models perform remarkably well, there may still be much room for improvement.
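As a rough sketch of how such tools typically work: modern systems map each face image to a fixed-length embedding vector and compare vectors rather than raw pixels, declaring a match when the vectors are close enough. Everything below is schematic; `embed` is a stand-in for a trained network, and the 0.6 threshold is chosen arbitrarily for illustration.

```python
# Schematic face-matching pipeline: embed faces, then compare embeddings.
# `embed` is a placeholder; real systems run a trained CNN here.
import numpy as np


def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: a deterministic pseudo-random unit vector."""
    rng = np.random.default_rng(int(face_image.sum()) % 2**32)
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)


def same_person(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Cosine similarity of unit-norm embeddings; higher means more alike."""
    similarity = float(embed(img_a) @ embed(img_b))
    return similarity >= threshold


if __name__ == "__main__":
    face = np.ones((112, 112))      # dummy "image"; real inputs are aligned face crops
    print(same_person(face, face))  # identical input -> identical embedding -> True
```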

Researchers at Queen Mary University of London have recently created a new and promising architecture for face recognition. This architecture, presented in a paper pre-published on arXiv, is based on a strategy for extracting features from images that differs from most of those proposed so far.

“Holistic methods using convolutional neural networks (CNNs) and margin-based losses have dominated research on face recognition,” Zhonglin Sun and Georgios Tzimiropoulos, the two researchers who carried out the study, told TechXplore.
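To unpack the terminology: in a margin-based loss, the classifier's logits are cosines of the angles between L2-normalized face embeddings and per-identity weight vectors, and the true identity's angle is penalized by a margin so the network must separate identities by more than that margin. The numpy sketch below shows one well-known variant of the family (an ArcFace-style additive angular margin); it illustrates the general idea, not the specific loss used in this paper.

```python
# Sketch of an additive angular margin (ArcFace-style), one common
# instance of the "margin-based losses" the researchers mention.
import numpy as np


def arcface_logits(embeddings: np.ndarray, weights: np.ndarray,
                   labels: np.ndarray, s: float = 64.0, m: float = 0.5) -> np.ndarray:
    """embeddings: (N, D), weights: (C, D), labels: (N,) integer classes."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)        # cosine similarity to each identity
    theta = np.arccos(cos)                   # angles in [0, pi]
    rows = np.arange(len(labels))
    # Penalize the true class's angle by m (clipped so it stays in range).
    theta[rows, labels] = np.minimum(theta[rows, labels] + m, np.pi)
    return s * np.cos(theta)                 # scaled logits for cross-entropy


def cross_entropy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Standard softmax cross-entropy over the margin-adjusted logits."""
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())
```

Because the margin is applied before the softmax, the network earns a low loss only when each embedding sits closer to its own identity's weight vector than to any other by more than the margin, which is what produces the tightly clustered, well-separated embeddings face recognizers rely on.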