
The data came from Common Crawl, a non-profit that scans the open web every month, downloading content from billions of HTML pages and making it available in a special format for large-scale data mining. In 2017 the average monthly “crawl” yielded over three billion web pages. Common Crawl has been doing this since 2011, and has amassed petabytes of data in over 40 different languages. The OpenAI team applied filtering techniques to improve the overall quality of the data and supplemented it with curated datasets like Wikipedia.
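For readers curious what working with this data looks like: Common Crawl distributes its archives as WARC files, and the snippet below is a minimal Python sketch of reading one with the warcio library. The file path and the bare content-type check are illustrative; OpenAI's actual filtering pipeline is far more involved and not fully public.

```python
from warcio.archiveiterator import ArchiveIterator

def iter_html_pages(warc_path):
    """Yield (url, html_bytes) for each HTML response in a Common Crawl WARC file."""
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue  # skip request records and crawl metadata
            ctype = record.http_headers.get_header("Content-Type") or ""
            if "text/html" not in ctype:
                continue  # a crude filter: keep only HTML pages
            url = record.rec_headers.get_header("WARC-Target-URI")
            yield url, record.content_stream().read()
```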

GPT stands for Generative Pretrained Transformer. The “transformer” part refers to a neural network architecture introduced by Google in 2017. Rather than looking at words in sequential order and making decisions based on a word’s positioning within a sentence, text or speech generators with this design model the relationships between all the words in a sentence at once. Each word gets an “attention score,” which is used as its weight and fed into the larger network. Essentially, this is a complex way of saying the model is weighing how likely it is that a given word will be preceded or followed by another word, and how much that likelihood changes based on the other words in the sentence.
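To make the “attention score” idea concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the transformer design. The toy dimensions and variable names are illustrative, not drawn from GPT-3 itself:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention over every word pair at once.

    Q, K, V: (seq_len, d) arrays of query, key, and value vectors,
    one row per word in the sentence.
    """
    d = Q.shape[-1]
    # Similarity between every query and every key, scaled for stability.
    scores = Q @ K.T / np.sqrt(d)             # (seq_len, seq_len)
    # Softmax turns each row of scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each word's output is a weighted mix of all the value vectors.
    return weights @ V                        # (seq_len, d)

# Toy example: a 4-word "sentence" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)   # self-attention
print(out.shape)  # (4, 8)
```

Because every word attends to every other word in a single matrix operation, the model captures sentence-wide relationships without having to read the text strictly left to right.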

By finding relationships and patterns between words in a giant dataset, the algorithm ultimately ends up learning from its own inferences, in what’s called unsupervised machine learning. And it doesn’t end with words: GPT-3 can also figure out how concepts relate to each other, and discern context.

Artificial intelligence networks have learned a new trick: creating photo-realistic faces from just a few pixelated dots, adding in features such as eyelashes and wrinkles that can’t even be found in the original.

Before you freak out, it’s good to note this is not some kind of creepy reverse pixelation that can undo blurring, because the faces the AI comes up with are artificial – they don’t belong to real people. But it’s a cool technological step forward from what such networks have been able to do before.

The PULSE (Photo Upsampling via Latent Space Exploration) system can produce photos with up to 64 times greater resolution than the source images, which is 8 times more detailed than earlier methods.
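The core trick, as the name suggests, is to search a face generator’s latent space for a high-resolution image that downscales back to the pixelated input. The sketch below illustrates that loop in PyTorch; the generator argument is a stand-in for a pretrained face GAN, and the plain mean-squared-error loss is a simplification of the paper’s actual objective.

```python
import torch
import torch.nn.functional as F

def upsample_via_latent_search(low_res, generator, latent_dim=512,
                               steps=500, lr=0.1):
    """Find a latent code z whose generated face, once downscaled,
    matches the low-resolution input (the core idea behind PULSE).

    low_res:   e.g. a (1, 3, 16, 16) tensor, the pixelated source image
    generator: pretrained GAN generator mapping z -> (1, 3, 1024, 1024)
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        hi = generator(z)                                   # candidate face
        down = F.interpolate(hi, size=low_res.shape[-2:],   # shrink it back
                             mode="bilinear", align_corners=False)
        loss = F.mse_loss(down, low_res)                    # match the source
        loss.backward()
        opt.step()
    return generator(z).detach()                            # the upsampled face
```

Note that nothing constrains the result to be the *true* original face, only one that is consistent with the pixels given, which is why the outputs are artificial people rather than de-blurred real ones.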

In recent years, many research teams worldwide have been developing and evaluating techniques to enable different locomotion styles in legged robots. One way of training robots to walk like humans or animals is by having them analyze and emulate real-world demonstrations. This approach is known as imitation learning.

Researchers at the University of Edinburgh in Scotland have recently devised a framework for training humanoid robots to walk like humans using human demonstrations. This new framework, presented in a paper pre-published on arXiv, combines imitation learning and deep reinforcement learning techniques with theories of robotic control, in order to achieve natural and dynamic locomotion in humanoid robots.

“The key question we set out to investigate was how to incorporate useful human knowledge in locomotion and human motion capture data for imitation into a deep reinforcement learning paradigm to advance the autonomous capabilities of legged robots more efficiently,” Chuanyu Yang, one of the researchers who carried out the study, told TechXplore. “We proposed two methods of introducing human prior knowledge into a DRL framework.”
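The paper’s exact reward design isn’t reproduced here, but a standard way to fold motion-capture demonstrations into a DRL reward is a pose-tracking term of the kind sketched below; sigma and the plain joint-space error are illustrative choices.

```python
import numpy as np

def imitation_reward(joint_angles, ref_angles, sigma=0.5):
    """Reward a simulated humanoid for tracking a reference pose taken
    from human motion-capture data at the same point in the gait cycle."""
    error = np.sum((np.asarray(joint_angles) - np.asarray(ref_angles)) ** 2)
    return np.exp(-error / (2 * sigma ** 2))  # 1.0 for a perfect match, -> 0 as poses diverge

# At each control step this imitation term is typically combined with
# task rewards (forward velocity, staying upright) before the DRL update.
```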

Back in the Sixties, one of the hottest toys in history swept America. It was called Etch-A-Sketch, and its popularity was based on a now-laughably simple feature. It was a handheld device, about the size of a small laptop, that let users create crude images by turning two control knobs that drew horizontal, vertical, and diagonal lines composed of aluminum particles sealed in a plastic case. It allowed experienced artists to compose simple and sometimes recognizable portraits. And it allowed inexperienced wannabe artists who could barely draw stick figures to feel like masters of the genre by generating what, frankly, still looked pretty much like mush. But Etch-A-Sketch was fun, and it has sold some 100 million units to date.

Six decades later, researchers at the Chinese Academy of Sciences and City University of Hong Kong have come up with an invention that actually does what so many wishful enthusiasts imagined Etch-A-Sketch did all those years ago.

DeepFaceDrawing allows users to create stunningly lifelike portraits by inputting loose, non-professional, roughly drawn sketches. It requires no artistic skills and no programming experience.

The Spot Explorer is likely not coming to a workplace near you. The high price point makes it impractical for all but a few institutions, like high-end construction firms, energy companies, and government agencies. But just seeing it out in the world, doing more than dancing to “Uptown Funk,” is surely a sign of progress.


These viral robots have racked up big YouTube numbers for years. Now, they’re about to start their day jobs.

NASA Administrator Jim Bridenstine, agency leadership, and a panel of scientists and engineers will preview the upcoming mission at 2 p.m. EDT on Wednesday, June 17. Submit your questions during the briefing using #AskNASA!

Perseverance is a robotic scientist that will search for signs of past microbial life on Mars and characterize the planet’s climate and geology. It will also collect rock and soil samples for future return to Earth and pave the way for human exploration of the Red Planet. The mission is scheduled to launch from Space Launch Complex 41 at NASA’s Kennedy Space Center in Florida at 9:15 a.m. EDT July 20. It will land at Mars’ Jezero Crater on Feb. 18, 2021. #CountdownToMars

A rejuvenation roadmap, and some info on Rejuvenate Bio.


Ray Kurzweil has predicted that the Technological Singularity will be reached in 2045. By this he means the arrival of strong AI: something like an AGI that is a billion times more capable than the human brain in many respects.

Ray Kurzweil gives the timing of his predictions a decade on either side. In 2009, Ray reviewed 147 predictions (see “How My Predictions Are Faring”: https://www.kurzweilai.net/images/How-My-Predictions-Are-Faring.pdf).

CLEW, an Israeli medtech firm specializing in real-time AI analytics platforms, received approval from the United States Food and Drug Administration (FDA) for its “Predictive Analytics Platform in Support of COVID-19 Patients,” the company announced Tuesday.

The Intensive Care Unit (ICU) solution was given Emergency Use Authorization (EUA) by the FDA so that it may be implemented within the United States’ health system as soon as possible.

Robots capable of the sophisticated motions that define advanced physical actions like walking, jumping, and navigating terrain can cost $50,000 or more, making real-world experimentation prohibitively expensive for many.

Now, a collaborative team at the NYU Tandon School of Engineering and the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and Stuttgart, Germany, has designed “Solo 8,” a relatively low-cost quadruped robot that is quick and easy to assemble and can be upgraded and modified, opening the door to sophisticated research and development for teams on limited budgets, including those at startups, smaller labs, and teaching institutions.

The researchers’ work, “An Open Torque-Controlled Modular Robot Architecture for Legged Locomotion Research,” accepted for publication in Robotics and Automation Letters, will be presented later this month at ICRA, the International Conference on Robotics and Automation, one of the world’s leading robotics conferences, which will be held virtually this year.