Construction sites are vast jigsaws of people and parts that must be pieced together just so at just the right times. As projects get larger, mistakes and delays get more expensive. The consultancy McKinsey estimates that on-site mismanagement costs the construction industry $1.6 trillion a year. Yet typically only five managers might oversee construction of a building with 1,500 rooms, says Roy Danon, founder and CEO of British-Israeli startup Buildots: “There’s no way a human can control that amount of detail.”

Danon thinks that AI can help. Buildots is developing an image recognition system that monitors every detail of an ongoing construction project and flags up delays or errors automatically. It is already being used by two of the biggest building firms in Europe, including UK construction giant Wates in a handful of large residential builds. Construction is essentially a kind of manufacturing, says Danon. If high-tech factories now use AI to manage their processes, why not construction sites?

AI is starting to change various aspects of construction, from design to self-driving diggers. Some companies even provide a kind of overall AI site inspector that matches images taken on site against a digital plan of the building. Now Buildots is making that process easier than ever by using video footage from GoPro cameras mounted on the hard hats of workers.
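
Buildots hasn’t published its pipeline, but the comparison such an AI site inspector performs, elements detected in helmet-camera footage checked off against a digital plan and schedule, can be sketched in a few lines. This is a minimal, hypothetical sketch; every name in it (PlanElement, detect_elements) is illustrative, not Buildots’ actual API:

```python
# Hypothetical sketch of an AI site-inspection loop: compare building
# elements detected in helmet-camera frames against a digital plan (BIM)
# and flag anything overdue that hasn't been seen on site.
from dataclasses import dataclass
from datetime import date

@dataclass
class PlanElement:
    element_id: str   # e.g. "door-142" in the digital plan
    due: date         # scheduled completion date for this element

def overdue_elements(plan, frames, detect_elements, today):
    """Return plan elements past their due date that no frame shows.

    detect_elements(frame) stands in for the image-recognition model:
    it should return the IDs of building elements visible in a frame.
    """
    seen = set()
    for frame in frames:
        seen.update(detect_elements(frame))
    return [e for e in plan if e.due < today and e.element_id not in seen]

# Toy usage: one door installed on time, one socket missing and overdue.
plan = [PlanElement("door-142", date(2020, 10, 1)),
        PlanElement("socket-17", date(2020, 10, 1))]
frames = [["door-142"]]  # helmet-camera footage, abstracted to detected IDs
flags = overdue_elements(plan, frames, lambda f: set(f), date(2020, 10, 20))
print([e.element_id for e in flags])  # -> ['socket-17']
```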


The CRISPR-Cas9 system has revolutionized genetic manipulations and made gene editing simpler, faster and easily accessible to most laboratories.

In recognition of this, the French-American duo Emmanuelle Charpentier and Jennifer Doudna were awarded this year’s Nobel Prize in Chemistry for CRISPR.

A very high-speed camera.


Wang’s newest camera, which has the wordy moniker “single-shot stereo-polarimetric compressed ultrafast photography” (SP-CUP), builds on previous iterations, some of which could shoot at even faster rates of up to 70 trillion frames per second.

But what the new Caltech camera brings to the table is its ability to perceive the world more like humans do. Human depth perception relies on having two eyes, and the new rig can pull off the same stereoscopic trick.

“The camera is stereo now,” Wang said in a statement. “We have one lens, but it functions as two halves that provide two views with an offset. Two channels mimic our eyes.”
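
The piece doesn’t spell out the reconstruction math, but the stereoscopic principle in Wang’s quote is classic triangulation: two views separated by a known offset (the baseline) see the same point at slightly different image positions, and that disparity encodes depth. A minimal sketch under a pinhole-camera assumption, with illustrative numbers rather than SP-CUP’s actual parameters:

```python
# Standard stereo triangulation: depth from disparity under a pinhole
# model. This illustrates the general stereoscopic principle described
# in the quote, not SP-CUP's actual reconstruction algorithm.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """z = f * B / d: nearer points shift more between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Toy numbers: 1000 px focal length, a 5 mm offset between the two
# half-lens views, and a 4 px disparity put the point 1.25 m away.
print(depth_from_disparity(1000, 0.005, 4))  # -> 1.25
```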

A short story.


A very short story with a long ending.

“What did you do in the Great Cyberwar, daddy?”

What can I say? The answer is pretty much what everyone else did, which is try and survive. In other words, nothing. The first we all knew about it was when the electricity went off and did not come back on again. Then the other utilities went: water, gas and phones. Then the Net itself went down and everyone was in the dark, both literally and figuratively. All of that within the space of a few hours. For some it happened overnight and they awoke to a broken world. Most cars still ran, for a while, although GPS was also out. Self-driving cars didn’t. When the fuel ran out the gas stations were not pumping (no electricity), and the supermarkets, along with everyone else, could not buy or sell because the payment systems were offline. Within 24 hours looting broke out on a global scale, from the richest nations to the poorest, and martial law became the new norm.

A rectangular robot as tiny as a few human hairs can travel throughout a colon by doing back flips, Purdue University engineers have demonstrated in live animal models.

Why the back flips? Because the goal is to use these robots to transport drugs in humans, whose colons and other organs have rough terrain. Side flips work, too.

Why a back-flipping robot to transport drugs? Getting a drug directly to its target site could avoid side effects, such as hair loss or stomach bleeding, that the drug may otherwise cause by interacting with other organs along the way.

Article on the soldiers of the very near future, when the Internet of Things is used for military purposes, allowing better situational awareness.


ENVG-B is still being fielded across the force, but the Army is already developing a next-gen system, a set of augmented reality targeting goggles — a militarized Microsoft HoloLens — known as IVAS. The Army’s also developing an Adaptive Squad Architecture to ensure all the different technologies going on a soldier’s body are compatible.

“ENVG-B is a system of systems,” Lynn Bollengier of L3Harris Technologies said at this week’s annual Association of the US Army conference. These systems include integrated augmented reality aspects from the Nett Warrior tablet, as well as wireless interconnectivity with weapon sights.

Combined, that means a soldier wearing the ENVG-B can look through their binoculars, turn on the camera in their rifle’s sight, and point that sight around a corner to see and shoot, without exposing anything more than their hands or the rifle.