
This type of cell is capable of unlimited cell division in human cells, aka a forever lifespan 😃 #immortality


“The sequencing and posting of the HeLa genome brought into sharp relief important ethical and policy issues,” said Dr. Collins. “To understand the family’s perspectives, we met with them face to face three times over four months, and listened carefully to their concerns. Ultimately, we arrived at a path forward that respects their wishes and allows science to progress. We are indebted to the Lacks family for their generosity and thoughtfulness.”

The HeLa Genome Data Use Agreement

The new controlled access policy for full genome sequence data from HeLa cells will give the Lacks family the ability to have a role in work being done with the HeLa genome sequences and track any resulting discoveries. Under the policy, biomedical researchers who agree to abide by terms set forth in the HeLa Genome Data Use Agreement will be able to apply to NIH for access to the full genome sequence data from HeLa cells. Along with representatives from the medical, scientific, and bioethics communities, two representatives of the Lacks family will serve on NIH’s newly formed, six-member working group that will review proposals for access to the HeLa full genome sequence data. In addition, NIH-funded researchers who generate full genome sequence data from HeLa cells will be expected to deposit their data into a single database for future sharing through this process. The database study page will be accessible after the embargo lifts at this url: http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?s0640.v1.p1. Other investigators will be encouraged to respect the wishes of the family and do the same. Importantly, all researchers who use or generate full genomic data from HeLa cells will now be asked to include in their publications an acknowledgement and expression of gratitude to the Lacks family for their contributions.

The strategy outlines how AI can be applied to defence and security in a protected and ethical way. As such, it sets standards for the responsible use of AI technologies, in accordance with international law and NATO’s values. It also addresses the threats posed by adversaries’ use of AI and how to establish trusted cooperation with the innovation community on AI.

Artificial Intelligence is one of the seven technological areas which NATO Allies have prioritized for their relevance to defence and security. These include quantum-enabled technologies, data and computing, autonomy, biotechnology and human enhancements, hypersonic technologies, and space. Of all these dual-use technologies, Artificial Intelligence is known to be the most pervasive, especially when combined with others like big data, autonomy, or biotechnology. To address this complex challenge, NATO Defence Ministers also approved NATO’s first policy on data exploitation.

Individual strategies will be developed for all priority areas, following the same ethical approach as that adopted for Artificial Intelligence.

The policy applies even if the violation occurred outside a group.


Facebook is taking new steps to crack down on group members who break its rules, even when they have done so in other parts of the app.

Under the new policy, Facebook will downrank content posted in groups by users who have broken its rules even if they have done so elsewhere on the company’s platform. The new rule will apply to any group member who has had a post removed for violating one of Facebook’s Community Standards in the previous 90 days. Those who have had multiple posts removed will have “more severe” demotions.

“This measure will help reduce the ability of members who break our rules from reaching others in their communities, and builds on the existing restrictions placed upon members who violate Community Standards,” Facebook wrote in a statement. The company notes that it already has policies restricting people who repeatedly break its rules within groups.

This post is a collaboration with Dr. Augustine Fou, a seasoned digital marketer, who helps marketers audit their campaigns for ad fraud and provides alternative performance optimization solutions; and Jodi Masters-Gonzales, Research Director at Beacon Trust Network and a doctoral student in Pepperdine University’s Global Leadership and Change program, where her research intersects at data privacy & ethics, public policy, and the digital economy.

The ad industry has gone through a massive transformation since the advent of digital. This multi-billion dollar industry started out as a way for businesses to bring greater market visibility to products and services, while evolving features that would allow advertisers to garner valuable insights about their customers and prospects. Fast-forward 20 years, and the promise of better ad performance and delivery of the right customers has also created and enabled a rampant environment of massive data sharing, more invasive personal targeting, and higher incidences of consumer manipulation than ever before. It has evolved over time, under the noses of business and industry, with benefits realized by a relative few. How did we get here? More importantly, can we curb the path of a burgeoning industry to truly protect people’s data rights?

There was a time when advertising inventory was finite. Long before digital, buying impressions was primarily done through offline publications, television and radio. Premium slots commanded higher CPM (cost per thousand) rates to obtain the most coveted consumer attention. The big advertisers with the deepest pockets largely benefitted from this space by commanding the largest reach.
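To make the CPM pricing model concrete, here is a minimal sketch (the rates and impression counts are hypothetical, chosen only for illustration) of how a buyer’s total spend follows from a CPM quote:

```python
def cpm_cost(impressions: int, cpm: float) -> float:
    """Total spend for an ad buy priced at `cpm` dollars per thousand impressions."""
    return impressions / 1000 * cpm

# A premium slot at a $20 CPM versus a remnant slot at a $2 CPM,
# both bought for 500,000 impressions (illustrative numbers):
premium_spend = cpm_cost(500_000, 20.0)   # $10,000.00
remnant_spend = cpm_cost(500_000, 2.0)    # $1,000.00
```

The 10x spread in spend for identical reach is why, in the finite-inventory era, premium slots were effectively reserved for advertisers with the deepest pockets.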

Many people reject scientific expertise and prefer ideology to facts. Lee McIntyre argues that anyone can and should fight back against science deniers.
Watch the Q&A: https://youtu.be/2jTiXCLzMv4
Lee’s book “How to Talk to a Science Denier” is out now: https://geni.us/leemcintyre.

“Climate change is a hoax—and so is coronavirus.” “Vaccines are bad for you.” Many people may believe such statements, but how can scientists and informed citizens convince these ‘science deniers’ that their beliefs are mistaken?

Join Lee McIntyre as he draws on his own experience, including a visit to a Flat Earth convention as well as academic research, to explain the common themes of science denialism.

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and an Instructor in Ethics at Harvard Extension School. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan (Ann Arbor). He has taught philosophy at Colgate University (where he won the Fraternity and Sorority Faculty Award for Excellence in Teaching Philosophy), Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston.

“The use of organophosphate esters in everything from TVs to car seats has proliferated under the false assumption that they’re safe,” said Heather Patisaul, lead author and neuroendocrinologist at North Carolina State University. “Unfortunately, these chemicals appear to be just as harmful as the chemicals they’re intended to replace but act by a different mechanism.”


Summary: Exposure to even low levels of common chemicals called organophosphate esters can harm IQ, memory, learning, and brain development overall in young children.

Source: Green Science Policy Institute

Chemicals increasingly used as flame retardants and plasticizers pose a larger risk to children’s brain development than previously thought, according to a commentary published today in Environmental Health Perspectives.

Advanced Nuclear Power Advocacy For Humanity — Eric G. Meyer, Founder & Director, Generation Atomic


Eric G. Meyer is the Founder and Director of Generation Atomic (https://generationatomic.org/), a nuclear advocacy non-profit which he founded after hearing about the promise of advanced nuclear reactors, and he decided to devote his life to saving and expanding the use of atomic energy.

Eric worked as an organizer on several political, union, and issue campaigns while in graduate school for applied public policy, taking time off to attend the climate talks in Paris and sing opera about atomic energy.

Without a new legal framework, they could destabilize societal norms.


Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could become combined with chemical, biological, radiological and nuclear weapons themselves.

Accounting and consulting firm PwC told Reuters on Thursday it will allow all its 40,000 U.S. client services employees to work virtually and live anywhere they want in perpetuity, making it one of the biggest employers to embrace permanent remote work.

The policy is a departure from the accounting industry’s rigid attitudes, known for encouraging people to put in late nights at the office. Other major accounting firms, such as Deloitte and KPMG, have also been giving employees more choice to work remotely in the face of the COVID-19 pandemic.