OpenAI is looking to hire an “insider risk investigator” to “fortify our organization against internal security threats.”

According to the company’s job listing, first spotted by MSPowerUser, the gumshoe is supposed to help the company safeguard its assets by “analyzing anomalous activities, promoting a secure culture, and interacting with various departments to mitigate risks.” Per the Wayback Machine, the job listing has been up since mid-January.

“You’ll play a crucial role in safeguarding OpenAI’s assets by analyzing anomalous activities, promoting a secure culture, and interacting with various departments to mitigate risks,” the listing reads. “Your expertise will be instrumental in protecting OpenAI against internal risks, thereby contributing to the broader societal benefits of artificial intelligence.”

Independent of the smart-corridor project, two major companies behind self-driving big rig technology told KXAN they plan to remove safety drivers and go completely driverless by the end of the year.

A spokesperson for the California-based company Kodiak Robotics told KXAN it started operating self-driving big rigs on routes around Texas in 2019, always with backup safety drivers.

In that time, “the bulk of Kodiak’s deliveries have been between our Dallas operations hub and Houston, Austin, San Antonio, Oklahoma City, and Atlanta,” Kodiak spokesman Daniel Goff said.

Just as smartphone GPS has harmed our spatial cognition and memory, artificial intelligence may soon impair our ability to make decisions for ourselves — an outcome that, one expert warns, would be “catastrophic.”

In an interview with PsyPost, neuropsychology expert Umberto León Domínguez of the University of Monterrey in Mexico said that his new research shows that AI chatbots may end up not just mimicking our speech patterns, but significantly harming our cognitive functioning in general.

Like many other educators, Domínguez said he’s concerned about how his students are using tools like OpenAI’s ChatGPT. Spurred by those concerns, he told PsyPost, he began to explore ways AI chatbots “could interfere with higher-order executive functions to understand how to also train these skills.”

Futuristic advances in AI and healthcare stole the limelight at the tech extravaganza Consumer Electronics Show (CES) 2024. Behind these innovations, however, battery technology is the real game-changer, enabling greater power efficiency. Nowhere is it being pushed harder than in electric vehicles: today’s EVs can travel around 700 km on a single charge, and researchers are aiming for a 1,000 km range. They are fervently exploring silicon, known for its high storage capacity, as the anode material in lithium-ion batteries for EVs. Despite its potential, however, bringing silicon into practical use remains a puzzle that researchers are still working to piece together.

Enter Professor Soojin Park, PhD candidate Minjun Je, and Dr. Hye Bin Son from the Department of Chemistry at Pohang University of Science and Technology (POSTECH). They have cracked the code, developing an affordable and stable next-generation high-energy-density Li-ion battery system using micro-sized silicon particles and gel polymer electrolytes. The work was published online in Advanced Science on January 17.

Employing silicon as a battery material presents challenges: it expands to more than three times its original volume during charging and then contracts back to its original size during discharging, significantly degrading battery efficiency. Using nano-sized silicon (10⁻⁹ m) partially addresses the issue, but its production process is complex and prohibitively expensive. Micro-sized silicon (10⁻⁶ m), by contrast, is far more practical in terms of cost and energy density. Yet the expansion problem is more pronounced in the larger particles during battery operation, limiting their use as an anode material.

Tech giant Google has finally unveiled its much-hyped Gemini AI, a series of generative AI models it claims are its “largest and most capable” to date.

“This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company,” said Google CEO Sundar Pichai.

Multimodal AI: Generative AIs are algorithms trained to create original content in response to user prompts. OpenAI’s first iteration of ChatGPT, for example, can understand and produce human-like text, while its DALL-E 2 system can generate images based on text prompts.
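To make that contrast concrete, here is a minimal sketch of how a developer might call a text-generation model and an image-generation model through OpenAI’s Python SDK. The model names, prompts, and image size are illustrative assumptions, not details from the article.

```python
# Minimal sketch: calling a text model and an image model via the OpenAI
# Python SDK (v1.x). Model names below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Text generation: a prompt goes in, human-like text comes out (ChatGPT-style).
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed name for a text-only chat model
    messages=[{"role": "user", "content": "Summarize what multimodal AI means."}],
)
print(chat.choices[0].message.content)

# Image generation: a text prompt goes in, an image comes out (DALL-E-style).
image = client.images.generate(
    model="dall-e-2",  # assumed name for an image model
    prompt="A robot reading a newspaper, watercolor style",
    n=1,
    size="512x512",
)
print(image.data[0].url)
```

Note that text and images are served by separate, single-purpose models here; a multimodal system like Gemini is designed to handle multiple kinds of input and output within one model.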