
For one, ChatGPT reached over a million users within five days of its release, a pace unmatched even by historically popular apps like Facebook and Spotify. It has also seen near-immediate adoption in business contexts, as organizations seek to gain efficiencies in content creation, code generation, and other functional tasks.

But as businesses rush to take advantage of AI, so too do attackers. One notable way in which they do so is through unethical or malicious LLM apps.

Unfortunately, a recent spate of these malicious apps has introduced risk into organizations' AI journeys. And the associated risk is not easily addressed with a single policy or solution. To unlock the value of AI without opening doors to data loss, security leaders need to rethink how they approach broader visibility and control of corporate applications.

In a groundbreaking study published on the arXiv server, a team of Swiss researchers introduces Pedipulate, an innovative controller enabling quadruped robots to perform complex manipulation tasks using their legs. This development marks a significant leap forward in robotics, showcasing the potential for legged robots in maintenance, home support, and exploration activities beyond traditional inspection roles.

The study, titled “Pedipulate: Quadruped Robot Manipulation Using Legs,” challenges the conventional design of legged robots that often rely on additional robotic arms for manipulation, leading to increased power consumption and mechanical complexity. By observing quadrupedal animals, the researchers hypothesized that employing the robot’s legs for locomotion and manipulation could significantly simplify and reduce the cost of robotic systems, particularly in applications where size and efficiency are crucial, such as in space exploration.

Pedipulate is trained through deep reinforcement learning, employing a neural network policy that tracks foot position targets. This policy minimizes the distance between the robot’s foot and the target point while penalizing undesirable movements such as jerky motions or collisions. The controller was tested on the ANYmal D robot, which features 12 torque-controlled joints and force-torque sensors on each foot, proving the feasibility of leg-based manipulation in real-world scenarios.
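The reward structure described above can be sketched in code. The function below is purely illustrative: the term names, weights, and exact shaping (an exponential tracking bonus, a joint-acceleration smoothness penalty, and a flat collision cost) are assumptions for clarity, not the exact formulation used in the paper.

```python
import math

def pedipulate_reward(foot_pos, target_pos, joint_vel, prev_joint_vel,
                      collision, dt=0.02,
                      w_track=1.0, w_smooth=0.05, w_collision=1.0):
    """Illustrative per-step reward for a foot-target-tracking policy.

    All weights and term shapes are hypothetical stand-ins for the kind
    of objective described in the paper, not its actual reward.
    """
    # Tracking term: reward grows as the foot approaches the target point.
    dist = math.dist(foot_pos, target_pos)
    r_track = w_track * math.exp(-dist)

    # Smoothness penalty: discourage jerky motions by penalizing large
    # joint accelerations (finite-differenced joint velocities).
    r_smooth = -w_smooth * sum(
        ((v - pv) / dt) ** 2 for v, pv in zip(joint_vel, prev_joint_vel)
    )

    # Collision penalty: flat cost whenever an undesired contact occurs.
    r_collision = -w_collision * (1.0 if collision else 0.0)

    return r_track + r_smooth + r_collision
```

In a deep RL setup, a policy network would be trained to maximize the discounted sum of a reward like this over an episode, with the 12 joint torques of a robot such as ANYmal D as the action space.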

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After serving as an AI policy manager at Zillow for nearly a year, she joined Hugging Face as the head of global policy. Her responsibilities there range from building and leading company AI policy globally to conducting socio-technical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electrical and electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Meta is promising to roll out auto-labeling for AI-generated images — as soon as it figures out how, that is.

Nick Clegg, Meta’s president of global affairs, said in a policy update that the company is currently working with “industry partners” to formulate criteria that will help identify AI content. Once those criteria are determined, Meta will begin automatically labeling posts featuring any AI-generated images, video, or audio “in the coming months.”

“This approach represents the cutting edge of what’s technically possible right now. But it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers,” Clegg wrote. “So we’re pursuing a range of options. We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers.”

The tech policy document recently published by China's Ministry of Industry and Information Technology reflects the government's commitment to fostering innovation and development in future industries. The roadmap emphasizes forward-looking planning, policy guidance, and the cultivation of new quality productive forces to support the country's aspirations for global technological leadership.

The race for supremacy in brain-computer interfaces intensifies as the world watches China’s technological journey unfold. With Neuralink marking its milestones, China’s bold ambitions signal a new era of competition in the ever-evolving landscape of cutting-edge technologies.

The question now is not just about who will lead the race but what groundbreaking innovations lie ahead for humanity.

Given the value of the vaccine, it's mind-boggling that some in the US would choose not to protect their children. And yet, vaccine rates among US kindergartners fell for the second consecutive year in 2022, a situation the Centers for Disease Control and Prevention said left some 250,000 kids vulnerable to measles. While some of those missed shots were potentially due to challenges accessing timely health care during the pandemic, there's reason to worry that growing hesitancy about vaccination is also at play.

It does not help that some states are making it easier to forgo routine childhood vaccines. Mississippi, for example, previously led the nation in vaccination coverage for kindergartners, with more than 98.6% of kids receiving both doses of their MMR shots in the 2021–2022 school year. But anti-vaccine activists succeeded in loosening the state's childhood vaccination policy, and last year families could for the first time seek religious exemptions for basic shots like MMR, tetanus, polio, and others. According to a report from NBC, the state granted more than 2,200 exemptions in the first five months they were allowed.

The shift seemingly reflects a new partisan divide. A recent Pew Research Center poll found a steep drop in the share of Republicans and Republican-leaning independents who believe vaccines should be required for attending public school.

From blanket bans to specific prohibitions

Previously, OpenAI maintained a strict ban on using its technology for any “activity that has high risk of physical harm,” explicitly including “weapons development” and “military and warfare.” That language barred any government or military agency from using OpenAI’s services for defense or security purposes. The new policy removes the blanket ban on “military and warfare” use. Instead, it lists specific prohibited use cases, such as using the services to “develop or use weapons” or to “harm yourself or others.”