
This extensive research details the origins of these elongated human skulls, which many to this day believe to be extraterrestrial, or at the very least unknown, in origin; its findings nearly uproot most known origin stories.


Modern European genetic structure demonstrates strong correlations with geography, while genetic analysis of prehistoric humans has indicated at least two major waves of immigration from outside the continent during periods of cultural change. However, population-level genome data that could shed light on the demographic processes occurring during the intervening periods have been absent. Therefore, we generated genomic data from 41 individuals dating mostly to the late 5th/early 6th century AD from present-day Bavaria in southern Germany, including 11 whole genomes (mean depth 5.56×). In addition, we developed a capture array to sequence neutral regions spanning a total of 5 Mb and 486 functional polymorphic sites to high depth (mean 72×) in all individuals. Our data indicate that while men generally had ancestry that closely resembles modern northern and central Europeans, women exhibit a very high genetic heterogeneity; this includes signals of genetic ancestry ranging from western Europe to East Asia. Particularly striking are women with artificial skull deformations; the analysis of their collective genetic ancestry suggests an origin in southeastern Europe. In addition, functional variants indicate that they also differed in visible characteristics. This example of female-biased migration indicates that complex demographic processes during the Early Medieval period may have contributed in an unexpected way to shaping the modern European genetic landscape. Examination of the panel of functional loci also revealed that many alleles associated with recent positive selection were already at modern-like frequencies in European populations ∼1,500 years ago.
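Ancestry inferences of this kind are typically drawn by projecting each individual’s genotypes onto the principal components of genome-wide variation, where clustering reflects shared ancestry. Below is a minimal illustrative sketch of that general technique, not the study’s actual pipeline; the random genotype matrix is a stand-in for real sequence data.

```python
# Minimal sketch of PCA-based ancestry analysis (illustrative only, not the
# study's pipeline). Rows are individuals, columns are variant sites, and
# values are alt-allele counts (0, 1, or 2). A real analysis would use called
# genotypes alongside reference panels of modern populations.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_individuals, n_sites = 41, 500  # 41 individuals, as in the study
genotypes = rng.integers(0, 3, size=(n_individuals, n_sites)).astype(float)

# Project onto the top two principal components (PCA centers the data
# internally); individuals with similar ancestry land near one another.
coords = PCA(n_components=2).fit_transform(genotypes)
print(coords[:5])  # each row: one individual's position in "ancestry space"
```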

Circa 2018


This is a talk by Stephen Wolfram for MIT course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.


But what I find even more interesting is that as metaverse tools like Nvidia’s Omniverse become more consumer-friendly, the ability to use AI and human digital twins will enable us to create our own worlds where we dictate the rules and where our AI-driven digital twins will emulate real people and animals.

At that point, I expect we’ll need to learn what it means to be gods of the worlds we create, and I doubt we are anywhere near ready, both in terms of the addictive nature of such products and in terms of how to create these metaverse virtual worlds in ways that can become the basis for our own digital immortality.

Let’s explore the capabilities of the metaverse this week, then we’ll close with my product of the week: the Microsoft Surface Duo 2.

The quadrupedal robots are well suited for repetitive tasks.


Mankind’s new best friend is coming to the U.S. Space Force.

The Space Force has conducted a demonstration using dog-like quadruped unmanned ground vehicles (Q-UGVs) for security patrols and other repetitive tasks. The demonstration used at least two Vision 60 Q-UGVs, or “robot dogs,” built by Ghost Robotics and took place at Cape Canaveral Space Force Station on July 27 and 28.

Realistic and complex models of brain cells, developed at Cedars-Sinai with support from our scientists and our #openscience data, could help answer questions about neurological disorders.


Cedars-Sinai investigators have created bio-realistic and complex computer models of individual brain cells—in unparalleled quantity.

Their research, published today in the peer-reviewed journal Cell Reports, details how these models could one day answer questions about neurological disorders—and even human intellect—that aren’t possible to explore through biological experiments.

“These models capture the shape, timing and speed of the electrical signals that neurons fire in order to communicate with each other, which is considered the basis of brain function,” said Costas Anastassiou, PhD, a research scientist in the Department of Neurosurgery at Cedars-Sinai, and senior author of the study. “This lets us replicate brain activity at the single-cell level.”
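The study’s biophysical models are far more detailed than anything shown here, but a leaky integrate-and-fire neuron, a textbook simplification, conveys the basic idea of simulating the timing of a single cell’s electrical spikes. The parameter values below are generic textbook assumptions, not figures from the paper.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage charges toward
# a drive level and emits a "spike" whenever it crosses threshold. Illustrative
# only; far simpler than the bio-realistic models described above.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset potential (mV)
drive = 20.0      # effective input drive (mV)

v = v_rest
spike_times = []
for step in range(1000):  # simulate 100 ms
    v += ((v_rest - v) + drive) / tau * dt  # leak toward rest, plus input drive
    if v >= v_thresh:                       # threshold crossed: the cell fires
        spike_times.append(round(step * dt, 1))
        v = v_reset

print(f"{len(spike_times)} spikes at t (ms) = {spike_times}")
```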

In 1903, the Wright brothers invented the first successful airplane. By 1914, just over a decade after its successful test, aircraft would be used in combat in World War I, with capabilities including reconnaissance, bombing and aerial combat. This has been categorized by most historians as a revolution in military affairs. The battlefield, which previously included land and sea, now included the sky, permanently altering the way wars are fought. With the new technology came new strategy, policy, tactics, procedures and formations.

Twenty years ago, unmanned aircraft systems (UASs) were much less prevalent and capable. Today, their threat potential and risk profile have increased significantly. UASs are becoming increasingly affordable and capable, with improved optics, greater speed, longer range and increased lethality.

The U.S. has long been a proponent of utilizing unmanned aircraft systems, with the MQ-9 Reaper and MQ-1 Predator excelling in combat operations, and smaller squad-based UASs being fielded, such as the RQ-11 Raven and the Switchblade. While the optimization of friendly UAS capability can yield great results on the battlefield, adversarial use of unmanned aircraft systems can be devastating.

Around the same time, neuroscientists developed the first computational models of the primate visual system, using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural nets were shown the same images, for example, the activity of the real neurons and the artificial neurons showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field progressed, researchers realized the limitations of supervised training. For instance, in 2017, Leon Gatys, a computer scientist then at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T, then overlaid a leopard skin pattern across the photo, generating a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T, but considered the modified image a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill in the blanks. In a so-called large language model, for instance, the training algorithm will show the neural network the first few words of a sentence and ask it to predict the next word. When trained with a massive corpus of text gleaned from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability — all without external labels or supervision.
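The fill-in-the-blank idea is simple enough to show without any machine learning library: the training targets are sliced directly out of the raw text. The snippet below is a minimal sketch of how next-word training pairs are constructed; real large language models do this over billions of tokens with learned subword vocabularies rather than whitespace-split words.

```python
# Minimal sketch of self-supervised label creation for next-word prediction.
# No human annotation is involved: the "label" for each example is simply the
# word that actually follows in the raw text.
corpus = "the labels come from the data itself and the model fills in the blanks"
tokens = corpus.split()

context_size = 3
examples = []
for i in range(len(tokens) - context_size):
    context = tokens[i : i + context_size]  # input: a few consecutive words
    target = tokens[i + context_size]       # target: the next word, from the data
    examples.append((context, target))

for context, target in examples[:3]:
    print(context, "->", target)
```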