The largest animals do not have proportionally bigger brains for their body size, with humans bucking this trend, a new study published in Nature Ecology and Evolution has revealed.
Researchers at the University of Reading and Durham University collected an enormous dataset of brain and body sizes from around 1,500…
Back in June, YouTube quietly made a subtle but significant policy change that, surprisingly, benefits users: it allows them to request the removal of AI-generated videos that simulate their appearance or voice from the platform under YouTube’s privacy request process.
First spotted by TechCrunch, the revised policy encourages affected parties to directly request the removal of AI-generated content on the grounds of privacy concerns and not for being, for example, misleading or fake. YouTube specifies that claims must be made by the affected individual or authorized representatives. Exceptions include parents or legal guardians acting on behalf of minors, legal representatives, and close family members filing on behalf of deceased individuals.
According to the new policy, if a privacy complaint is filed, YouTube will notify the uploader about the potential violation and provide an opportunity to remove or edit the private information within their video. YouTube may, at its own discretion, grant the uploader 48 hours to utilize the Trim or Blur tools available in YouTube Studio and remove parts of the footage from the video. If the uploader chooses to remove the video altogether, the complaint will be closed, but if the potential privacy violation remains within those 48 hours, the YouTube Team will review the complaint.
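A minimal sketch of that complaint flow, with all names hypothetical (this models only the process described above, not YouTube’s actual internal API):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    CLOSED_VIDEO_REMOVED = auto()   # uploader deleted the video; complaint closed
    CLOSED_FOOTAGE_EDITED = auto()  # uploader trimmed/blurred the offending footage
    ESCALATED_TO_REVIEW = auto()    # violation remains after 48h; YouTube Team reviews

@dataclass
class PrivacyComplaint:
    video_id: str
    uploader_notified: bool = False
    grace_period_hours: int = 48  # granted at YouTube's discretion

def resolve(complaint: PrivacyComplaint, uploader_action: str | None) -> Outcome:
    """Model the described flow: notify the uploader, allow the grace
    period to trim/blur or remove, then escalate if nothing changes."""
    complaint.uploader_notified = True
    if uploader_action == "remove_video":
        return Outcome.CLOSED_VIDEO_REMOVED
    if uploader_action in ("trim", "blur"):
        return Outcome.CLOSED_FOOTAGE_EDITED
    return Outcome.ESCALATED_TO_REVIEW

print(resolve(PrivacyComplaint(video_id="abc123"), uploader_action=None))
# Outcome.ESCALATED_TO_REVIEW
```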
What are good policy options for academic journals regarding the detection of AI-generated content and publication decisions? As a group of associate editors of Dialectica note below, there are several issues involved, including the uncertain performance of AI detection tools and the risk that material checked by such tools is used for the further training of AIs. They’re interested in learning about what policies, if any, other journals have instituted in response to these challenges and how they’re working, as well as other AI-related problems journals should have policies about. They write:

As associate editors of a philosophy journal, we face the challenge of dealing with content that we suspect was generated by AI. Just like plagiarized content, AI-generated content is submitted under a false claim of authorship. Among the unique challenges posed by AI, the following two are pertinent for journal editors.

First, there is the worry of feeding material to AI while attempting to minimize its impact. To the best of our knowledge, the only available method to check for AI-generated content involves websites such as GPTZero. However, using such AI detectors differs from using plagiarism software in that it risks making copyrighted material available for AI training, which ultimately aids the development of a commercial product. We wonder whether using such software under these conditions is justifiable.

Second, there is the worry of delegating decisions to an algorithm whose workings are opaque. Unlike plagiarized texts, texts generated by AI routinely do not stand in an obvious relation of resemblance to an original. This makes it extremely difficult to verify whether an article, or part of one, was AI-generated; the basis for refusing to consider an article on such grounds is therefore shaky at best. We wonder whether it is problematic to refuse to publish an article solely because the likelihood of its being AI-generated passes a specific threshold (say, 90%) according to a specific website.

We would be interested to learn about best practices adopted by other journals and about issues we may have neglected to consider. We especially appreciate the thoughts of fellow philosophers as well as members of other fields facing similar problems. — Aleks…
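The editors’ second worry can be made concrete with a base-rate calculation: even a detector that correctly flags 90% of AI texts will mostly flag human-written submissions if genuinely AI-generated submissions are rare. A minimal sketch, with all rates hypothetical:

```python
# Hypothetical rates, for illustration only.
ai_rate = 0.05         # assumed share of submissions actually AI-generated
sensitivity = 0.90     # detector flags 90% of truly AI-generated texts
false_positive = 0.10  # detector also flags 10% of human-written texts

true_flags = ai_rate * sensitivity             # AI texts correctly flagged
false_flags = (1 - ai_rate) * false_positive   # human texts wrongly flagged
precision = true_flags / (true_flags + false_flags)

print(f"Share of flagged submissions that are actually AI: {precision:.0%}")
# ~32% -- under these assumptions, two out of three flagged papers would be
# human-written, which is why refusal based on a score alone is shaky.
```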
However, the White House Office of Management and Budget said in a statement of policy on Tuesday: “The Administration strongly opposes Section 924, which would establish a Drone Corps as a basic branch of the Army. A Drone Corps would create an unwarranted degree of specialization and limit flexibility to employ evolving capabilities. Further, the Secretary of the Army already has the authority to create branches, as needed, and creating a branch through legislation would detract from the Army’s flexibility in addressing current and future requirements.” The statement included a list of items in the House version of the NDAA that the administration finds objectionable.
Senior Army officers have also come out against the proposal.
NASA will provide live coverage of prelaunch and launch activities for the National Oceanic and Atmospheric Administration’s (NOAA) GOES-U (Geostationary Operational Environmental Satellite U) mission. The two-hour launch window opens at 5:16 p.m. EDT Tuesday, June 25, for the satellite’s launch on a SpaceX Falcon Heavy rocket from Launch Complex 39A at NASA’s Kennedy Space Center in Florida.
The GOES-U satellite, the final addition to the GOES-R series, will help forecasters prepare for two kinds of weather: Earth weather and space weather. The GOES satellites serve a critical role in providing continuous coverage of the Western Hemisphere, including monitoring tropical systems in the eastern Pacific and Atlantic oceans. This continuous monitoring helps scientists and forecasters issue timely warnings and forecasts that protect the one billion people who live and work in the Americas. Additionally, GOES-U carries a new compact coronagraph that will image the outer layer of the Sun’s atmosphere to detect and characterize coronal mass ejections.
The deadline for media accreditation for in-person coverage of this launch has passed. NASA’s media credentialing policy is available online. For questions about media accreditation, please email: [email protected].
Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.
Based on a talk delivered at the conference on Existential Threats and Other Disasters: How Should We Address Them? May 30–31, 2024 – Budva, Montenegro – sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.
For twenty years, I have been talking about old-age dependency ratios as an argument for universal basic income and for investing in anti-aging therapies to keep elders healthy longer. A declining number of young workers supporting a growing number of retirees is straining many welfare systems. Healthy seniors are less expensive and work longer. UBI is more intergenerationally equitable, especially if we face technological unemployment.
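As a rough illustration of the dependency-ratio arithmetic (all population figures below are hypothetical, chosen only to show the shape of the problem):

```python
def old_age_dependency_ratio(age_65_plus: float, working_age: float) -> float:
    """Retirees per 100 working-age adults."""
    return 100 * age_65_plus / working_age

# Hypothetical populations in millions.
today = old_age_dependency_ratio(age_65_plus=17, working_age=65)
later = old_age_dependency_ratio(age_65_plus=25, working_age=58)

print(f"Today: {today:.0f} retirees per 100 workers")            # ~26
print(f"A generation on: {later:.0f} retirees per 100 workers")  # ~43
# Each worker funds proportionally more pension and health spending --
# the strain on welfare systems that the paragraph above describes.
```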
But as a person anticipating grandchildren, I find the declining-fertility part of the demographic shift is more on my mind. It’s apparently on the minds of a growing number of people, including folks on the Right, ranging from those worried that feminists are pushing humanity toward suicide, or that there won’t be enough of their kind of people in the future, to those worried about the health of innovation and the economy. The Left’s reluctance to entertain any pronatalism is understandable, given the reactionary ways it has been promoted. But I believe a progressive pro-family agenda is possible.
In mid-July, NASA will roll the fully assembled core stage of the agency’s SLS (Space Launch System) rocket for the first crewed Artemis mission out of NASA’s Michoud Assembly Facility in New Orleans. The 212-foot-tall stage will be loaded onto the agency’s Pegasus barge for delivery to Kennedy Space Center in Florida.
Media will have the opportunity to capture images and video, hear remarks from agency and industry leadership, and speak to subject matter experts with NASA and its Artemis industry partners as crews move the rocket stage to the Pegasus barge.
NASA will provide additional information on specific timing later, along with interview opportunities. This event is open to U.S. and international media. International media must apply by June 14. U.S. media must apply by July 3. The agency’s media credentialing policy is available online.
Techno-optimist Vinod Khosla believes in the world-changing power of “foolish ideas.” He offers 12 bold predictions for the future of technology — from preventative medicine to car-free cities to planes that get us from New York to London in 90 minutes — and shows why a world of abundance awaits. Watch more: https://go.ted.com/vinodkhosla
Maj. Gen. (Dr.) Paul Friedrichs is the inaugural Director of the White House Office of Pandemic Preparedness and Response Policy (OPPR — https://www.whitehouse.gov/oppr/), a permanent executive office aimed at leading, coordinating, and implementing actions to prepare for and respond to pathogens that could lead to a pandemic or to significant public health-related disruptions in the U.S. He is also the principal advisor on pandemic preparedness and response, appointed by President Biden.
Dr. Friedrichs was previously the Joint Staff Surgeon at the Pentagon where he provided medical advice to the Chairman of the Joint Chiefs of Staff, the Joint Staff and the Combatant Commanders, coordinating all issues related to health services, including operational medicine, force health protection and readiness among the combatant commands, the Office of the Secretary of Defense and the services. He also led the development and publication of the initial Joint Medical Estimate and served as medical advisor to the Department of Defense COVID-19 Task Force.
Dr. Friedrichs received his commission through the Reserve Officer Training Corps and his Doctor of Medicine (M.D.) from the Uniformed Services University. He has commanded at the squadron and group level, served as an Assistant Professor of Surgery, and led joint and interagency teams that earned numerous awards, including “Best Air Force Hospital.” As Chair of the Military Health System’s Joint Task Force on High Reliability Organizations, he oversaw the development of a roadmap to continuously improve military health care. As the Command Surgeon for Pacific Air Forces, U.S. Transportation Command, and Air Combat Command, the general and his teams identified gaps, developed mitigation plans, and enhanced readiness for future conflicts and contingencies.
How can rapidly emerging #AI develop into a trustworthy, equitable force? Proactive policies and smart governance, says Salesforce.
These initial steps ignited AI policy conversations amid accelerating innovation and technological change. Just as personal computing democratized internet access and coding accessibility, fueling more technology creation, AI is the latest catalyst poised to unlock future innovations at an unprecedented pace. But with such powerful capabilities comes great responsibility: we must prioritize policies that allow us to harness AI’s power while protecting against harm. To do so effectively, we must acknowledge and address the differences between enterprise and consumer AI.
Enterprise versus consumer AI
Salesforce has been actively researching and developing AI since 2014; we introduced our first AI functionalities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our top value, which is why our AI offerings are founded on trust, security, and ethics. Like many technologies, AI has more than one use. Many people are already familiar with large language models (LLMs) via consumer-facing apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach differentiates between consumer-grade LLMs and what we classify as enterprise AI.