
The “technology intelligence engine” uses A.I. to sift through hundreds of millions of documents online, then draws on all that information to spot trends.


Build back better

Tarraf was fed up with incorrect predictions. He wanted a more data-driven approach to forecasting that could help investors, governments, pundits, and anyone else get a more accurate picture of the shape of tech-yet-to-come. Not only could this potentially help make money for his firm, but it could also, he suggested, illuminate some of the blind spots that can lead to bias.

Tarraf’s technology intelligence engine uses natural language processing (NLP) to sift through hundreds of millions of documents — ranging from academic papers and research grants to startup funding details, social media posts, and news stories — in dozens of different languages. The futurist and science fiction writer William Gibson famously opined that the future is already here, it’s just not evenly distributed. In other words, tomorrow’s technology has already been invented, but right now it’s hidden away in research labs, patent applications, and myriad other silos around the world. The technology intelligence engine seeks to unearth and aggregate these scattered signals.
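The article doesn’t describe the engine’s internals, but the core move, scanning a document stream for terms that are suddenly gaining traction, can be sketched in a few lines. Everything below (the toy corpus, the function names, the growth threshold) is a hypothetical stand-in for illustration, not Tarraf’s actual pipeline:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus of (year, text) pairs standing in for the
# papers, patents, and news stories a real engine would ingest.
documents = [
    (2019, "perovskite solar cells show improved stability"),
    (2020, "perovskite tandem cells reach record efficiency"),
    (2021, "startup funding for perovskite photovoltaics grows"),
    (2021, "solid-state batteries attract new research grants"),
]

def term_counts_by_year(docs):
    """Count how often each term appears in each year's documents."""
    counts = defaultdict(Counter)
    for year, text in docs:
        counts[year].update(text.lower().split())
    return counts

def rising_terms(counts, earlier, later, min_growth=2.0):
    """Flag terms that are new or grew by min_growth between two years."""
    rising = []
    for term, late_n in counts[later].items():
        early_n = counts[earlier][term]  # Counter returns 0 if absent
        if early_n == 0 or late_n / early_n >= min_growth:
            rising.append((term, early_n, late_n))
    return rising

counts = term_counts_by_year(documents)
print(rising_terms(counts, earlier=2019, later=2021))
```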



“Are we seeing dramatic changes since we deployed the robot in January?” asked Lerner, the Westland spokesperson. “No. But I do believe it is a great tool to keep a community as large as this, to keep it safer, to keep it controlled.”

For its part, Knightscope maintains on its website that the robots “predict and prevent crime,” without much evidence that they do so. Experts say this is a bold claim.

“It would be difficult to introduce a single thing and it causes crime to go down,” said Ryan Calo, a law professor at the University of Washington, comparing the Knightscope robots to a “roving scarecrow.”

NASA’s Perseverance rover captured a historic group selfie with the Ingenuity Mars Helicopter on April 6, 2021. But how was the selfie taken? Vandi Verma, Perseverance’s chief engineer for robotic operations at NASA’s Jet Propulsion Laboratory in Southern California, breaks down the process in this video.

Video taken by Perseverance’s navigation cameras shows the rover’s robotic arm twisting and maneuvering to take the 62 images that compose the selfie. The rover’s entry, descent, and landing microphone captured the sound of the arm’s motors whirring during the process.
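NASA’s actual mosaicking pipeline isn’t covered in the clip, but compositing many overlapping frames into one seamless picture is a standard stitching problem. As a loose illustration only, with hypothetical file names and none of the careful processing real Mars imagery gets, OpenCV’s high-level stitcher can blend overlapping frames:

```python
import cv2

# Hypothetical file names; the real mosaic was assembled at JPL with
# NASA's own processing pipeline, not this sketch.
frames = [cv2.imread(f"frame_{i:02d}.png") for i in range(62)]

# OpenCV's high-level stitcher estimates how the frames overlap, warps
# them into a common projection, and blends the seams.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("selfie_mosaic.png", mosaic)
else:
    print(f"stitching failed with status {status}")
```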

Selfies allow engineers to check wear and tear on the rover over time.

For more information on Perseverance, visit https://mars.nasa.gov/perseverance.

Uh oh.


The middle and working classes have seen a steady decline in their fortunes. Offshoring of jobs, the hollowing out of the manufacturing sector, the pivot toward a service economy, and the weakening of unions have all been blamed for the challenges faced by a majority of Americans.

There’s an interesting and compelling alternative explanation. According to a new academic study, automation technology has been the primary driver of U.S. income inequality over the past 40 years. The report, published by the National Bureau of Economic Research, claims that 50% to 70% of the changes in U.S. wages since 1980 can be attributed to wage declines among blue-collar workers who were replaced or degraded by automation.

Great new episode with former Fermilab physicist Gerald Jackson, who chats about antimatter propulsion and the politics of advanced propulsion research. This one is out a bit later in the week than usual, but please listen. Good stuff.


Guest Gerald Jackson, former Fermilab physicist and advanced propulsion entrepreneur, chats about his plans for an antimatter-propelled interstellar robotic probe. The first stop would be Proxima Centauri. In a wide-ranging interview, Jackson talks about the politics and pitfalls of advanced propulsion research. Too many people seem to think antimatter is still science fiction. It’s not. It’s as real as the chair you’re sitting on.

The artificial intelligence revolution is just getting started, but it is already transforming conflict. Militaries all the way from the superpowers to tiny states are seizing on autonomous weapons as essential to surviving the wars of the future. But this mounting arms-race dynamic could lead the world to dangerous places, with algorithms interacting so fast that they slip beyond human control, risking uncontrolled escalation and even wars that erupt without any human input at all.

DW maps out the future of autonomous warfare, based on conflicts we have already seen – and predictions from experts of what will come next.

For more on the role of technology in future wars, check out the extended version of this video – which includes a blow-by-blow scenario of a cyber attack against nuclear weapons command and control systems: https://youtu.be/TmlBkW6ANsQ


Still the comic relief until about December 31, 2024. By 2035, curing everything; the early stages of that are already underway.


Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study, said that the team is actively working on robots that can help provide health care services while maximizing the safety of both patients and the health care workforce.

After the Covid-19 pandemic began last year, Traverso and his colleagues worked to reduce interaction between patients and health care workers. As part of that effort, they collaborated with Boston Dynamics to create mobile robots that could interact with patients waiting in the emergency department.

But how would patients respond to the robots? To find out, the researchers at MIT and Brigham and Women’s Hospital, working with the market research company YouGov, conducted a large-scale nationwide online survey of about 1,000 people. The questions addressed the acceptability of robots in health care for performing tasks such as taking nasal swabs, inserting a catheter, and turning a patient over in bed.

And they say computers can’t create art.


In 1642, famous Dutch painter Rembrandt van Rijn completed a large painting called Militia Company of District II under the Command of Captain Frans Banninck Cocq — today, the painting is commonly referred to as The Night Watch. It was the height of the Dutch Golden Age, and The Night Watch brilliantly showcased that.

The painting measured 363 cm × 437 cm (11.91 ft × 14.34 ft) — so big that the characters in it were almost life-sized, but that’s only the start of what makes it so special. Rembrandt made dramatic use of light and shadow and also created the perception of motion in what would normally be a stationary military group portrait. Unfortunately, though, the painting was trimmed in 1715 to fit between two doors at Amsterdam City Hall.

For over 300 years, the painting has been missing 60 cm (2 ft) from the left, 22 cm from the top, 12 cm from the bottom, and 7 cm from the right. Now, computer software has restored the missing parts.
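The article credits only “computer software”; the restoration reportedly relied on neural networks trained on a surviving 17th-century copy of the painting. As a rough sketch of the general idea, extending a canvas and filling in the missing region, here is classical diffusion-based inpainting with OpenCV (the file name and strip width are hypothetical):

```python
import cv2
import numpy as np

# Hypothetical input: the surviving (trimmed) canvas.
painting = cv2.imread("night_watch_cropped.png")

# Extend the canvas by a 60-pixel strip on the left, then mark that
# strip as the region to fill in (white = missing).
padded = cv2.copyMakeBorder(painting, 0, 0, 60, 0, cv2.BORDER_CONSTANT)
mask = np.zeros(padded.shape[:2], dtype=np.uint8)
mask[:, :60] = 255

# Classical diffusion-based inpainting fills the masked strip from the
# surrounding pixels; the real restoration used a trained neural network.
restored = cv2.inpaint(padded, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("night_watch_restored.png", restored)
```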

## FUTURE TENSE, RN ABC (AUDIO, 29 MIN) • JUN 27, 2021

# Some foresight about the future of foresight

*Trying to predict the future is a timeless and time-consuming pursuit.*

Artificial intelligence is increasingly being enlisted to the cause, but so too are “super-forecasters” — a new coterie of individuals with remarkable predictive powers.

But what are their limits, and what does their rise say about the still-popular notion of collective intelligence — the wisdom of the crowd?
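For concreteness, the “wisdom of the crowd” in forecasting usually means pooling many independent probability estimates and scoring them against outcomes. A minimal sketch, with every number invented for illustration, averages five forecasts and evaluates them with the Brier score:

```python
# All numbers invented for illustration.
forecasts = [0.70, 0.55, 0.80, 0.60, 0.65]  # five forecasters' P(event)
outcome = 1                                  # the event happened

# "Wisdom of the crowd": pool the estimates with a simple average.
crowd = sum(forecasts) / len(forecasts)

def brier(p, o):
    """Brier score for one binary forecast (lower is better)."""
    return (p - o) ** 2

scores = [brier(p, outcome) for p in forecasts]
print(f"crowd forecast: {crowd:.2f}, crowd score: {brier(crowd, outcome):.3f}")
print(f"mean individual score: {sum(scores) / len(scores):.3f}")
# The pooled forecast never scores worse than the average individual,
# a consequence of the Brier score's convexity (Jensen's inequality).
```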