IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. Their designs draw on concepts from the human brain and significantly outperform conventional computers in comparative studies. They report on their recent findings in the Journal of Applied Physics.

Today’s computers are built on the von Neumann architecture, developed in the 1940s. Von Neumann computing systems feature a central processor that executes logic and arithmetic, a memory unit, storage, and input and output devices. In contrast to these stovepiped components in conventional computers, the authors propose brain-inspired computers in which processing and memory units coexist.

Abu Sebastian, an author on the paper, explained that executing certain computational tasks in the computer’s memory would increase the system’s efficiency and save energy.
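To make the idea concrete, below is a minimal Python sketch of in-memory computing as an analog matrix-vector multiplication in a memory crossbar: the matrix lives in the array as device conductances, and applying input voltages to the rows yields the result as column currents, so the data never travels to a separate processor. This is an illustrative toy model, not IBM’s hardware or code, and the array size and noise levels are assumptions.

import numpy as np

# Toy model of a memory crossbar: the weight matrix is stored as device
# conductances, so a matrix-vector product happens where the data lives.
# Sizes and noise levels are illustrative assumptions, not IBM's parameters.

rng = np.random.default_rng(0)

def program_crossbar(weights, write_noise=0.02):
    """'Write' a weight matrix into the crossbar as noisy conductances."""
    return weights + write_noise * rng.standard_normal(weights.shape)

def in_memory_matvec(conductances, voltages, read_noise=0.01):
    """Apply input voltages to the rows; the column currents are the product."""
    currents = conductances.T @ voltages
    return currents + read_noise * rng.standard_normal(currents.shape)

W = rng.standard_normal((64, 16))   # weights to store in the array
x = rng.standard_normal(64)         # input vector

G = program_crossbar(W)             # the data stays in the memory array
y_analog = in_memory_matvec(G, x)   # computed in place, no data shuttling
y_exact = W.T @ x                   # von Neumann-style reference result

print("relative error:", np.linalg.norm(y_analog - y_exact) / np.linalg.norm(y_exact))

The payoff in the toy model mirrors the hardware argument: the expensive step, shuttling the matrix back and forth between memory and processor, simply disappears.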

Read more

Quantum computing isn’t going to revolutionize AI anytime soon, according to a panel of experts in both fields.

Different worlds: Yoshua Bengio, one of the fathers of deep learning, joined quantum computing experts from IBM and MIT for a panel discussion yesterday. Participants included Peter Shor, the man behind the most famous quantum algorithm. Bengio said he was keen to explore new computer designs, and he peppered his co-panelists with questions about what a quantum computer might be capable of.

Quantum leaps: The panel’s quantum experts explained that while quantum computers are scaling up, it will be a while (we’re talking years here) before they can do any useful machine learning, partly because a lot of extra qubits will be needed for the necessary error correction. To complicate things further, it isn’t clear what, exactly, quantum computers will be able to do better than their classical counterparts. But both Aram Harrow of MIT and IBM’s Kristian Temme said that early research on quantum machine learning is under way.
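To get a feel for why error correction demands "a lot of extra qubits," here is a rough back-of-the-envelope calculation in Python. It assumes the commonly cited surface code, where a logical qubit at code distance d costs roughly 2d^2 - 1 physical qubits; the distances and logical-qubit counts below are illustrative assumptions, not figures from the panel.

# Rough surface-code overhead arithmetic; the numbers are illustrative, not from the panel.
# A distance-d surface code uses about d*d data qubits plus d*d - 1 ancilla qubits
# per logical qubit, i.e. roughly 2*d*d - 1 physical qubits in total.

def physical_qubits(logical_qubits, code_distance):
    per_logical = 2 * code_distance ** 2 - 1
    return logical_qubits * per_logical

for d in (11, 17, 25):                 # plausible code distances
    for n_logical in (100, 1000):      # modest algorithm sizes
        print(f"d={d:2d}, {n_logical:4d} logical qubits -> "
              f"~{physical_qubits(n_logical, d):,} physical qubits")

Even these modest settings land in the tens of thousands to millions of physical qubits, far beyond current hardware.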

Read more

When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.

MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Popular motion-planning algorithms will create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: robots can’t leverage information about how they or other agents acted previously in similar environments.
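For readers who want to see what such a search tree looks like in practice, here is a minimal Python sketch of a generic RRT-style planner in a 2D room with one rectangular obstacle. It is a standard textbook tree planner used only to illustrate the paragraph above, not the MIT model; the room geometry, step size, and goal tolerance are made-up assumptions.

import math, random

# Minimal RRT-style planner: grow a tree of feasible motions from the start
# until a node lands near the goal, then read the path back up the tree.
# This is a generic textbook planner, not the MIT model from the article.

ROOM = (0.0, 10.0)                      # square room, x and y in [0, 10]
OBSTACLE = (4.0, 6.0, 2.0, 8.0)         # x_min, x_max, y_min, y_max
START, GOAL = (1.0, 1.0), (9.0, 9.0)
STEP, GOAL_TOL = 0.5, 0.5

def collides(p):
    x, y = p
    xmin, xmax, ymin, ymax = OBSTACLE
    return xmin <= x <= xmax and ymin <= y <= ymax

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rrt(max_iters=5000, seed=0):
    random.seed(seed)
    nodes, parent = [START], {START: None}
    for _ in range(max_iters):
        sample = (random.uniform(*ROOM), random.uniform(*ROOM))
        nearest = min(nodes, key=lambda n: dist(n, sample))
        d = dist(nearest, sample)
        if d == 0:
            continue
        # Take one bounded step from the nearest tree node toward the sample.
        new = (nearest[0] + STEP * (sample[0] - nearest[0]) / d,
               nearest[1] + STEP * (sample[1] - nearest[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if dist(new, GOAL) < GOAL_TOL:
            path, node = [], new
            while node is not None:          # walk back up the tree to the start
                path.append(node)
                node = parent[node]
            return list(reversed(path))
    return None

path = rrt()
print(f"found path with {len(path)} waypoints" if path else "no path found")

The tree grows by repeatedly sampling a point, stepping toward it from the nearest existing node, and keeping the step only if it is collision-free; nothing learned here carries over to the next room, which is exactly the limitation the MIT work targets.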

Read more

By translating a key human physical skill, maintaining whole-body balance, into a mathematical equation, the team was able to use the numerical formula to program their robot Mercury, which was built and tested over the course of six years. They calculated the margin of error beyond which the average person loses their balance and falls while walking to be a simple figure: 2 centimeters.

“Essentially, we have developed a technique to teach autonomous robots how to maintain balance even when they are hit unexpectedly, or a force is applied without warning,” Sentis said. “This is a particularly valuable skill we as humans frequently use when navigating through large crowds.”

Sentis said their technique has been successful in dynamically balancing both bipeds without ankle control and full humanoid robots.
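One standard way to express this kind of balance condition in code, not necessarily the formulation used for Mercury, is the capture-point test from the linear inverted pendulum model: the robot stays balanced as long as the point where its center of mass would come to rest lies inside the support foot with some margin to spare. In the Python sketch below, only the 2-centimeter margin comes from the article; the pendulum model, foot geometry, and example states are illustrative assumptions.

import math

# Capture-point style balance check under a linear inverted pendulum model.
# Only the 2 cm margin is taken from the article; the model, foot geometry,
# and example states are illustrative assumptions, not the team's formulation.

G = 9.81            # gravity, m/s^2
COM_HEIGHT = 0.9    # assumed center-of-mass height, m
FOOT_HALF = 0.12    # assumed half-length of the support foot, m
MARGIN = 0.02       # the 2 cm margin of error quoted in the article

def capture_point(com_pos, com_vel, com_height=COM_HEIGHT):
    """Where the center of mass would come to rest if the robot just kept standing."""
    omega = math.sqrt(G / com_height)
    return com_pos + com_vel / omega

def balanced(com_pos, com_vel, foot_center):
    """Balanced if the capture point lies inside the foot with at least MARGIN to spare."""
    cp = capture_point(com_pos, com_vel)
    return abs(cp - foot_center) <= FOOT_HALF - MARGIN

# Example: the robot is over its foot but gets pushed, giving the CoM some velocity.
print(balanced(com_pos=0.0, com_vel=0.4, foot_center=0.0))  # False: must step to recover
print(balanced(com_pos=0.0, com_vel=0.1, foot_center=0.0))  # True: can recover in place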

Read more

From space colonization to resurrection of dinosaurs to machine intelligence, the most awe-inspiring visions of humanity’s future are typically born from science fiction.

But among an abundance of time travel, superheroes, space adventures, and so forth, biotech remains underrepresented in the genre.

This selection highlights some outstanding works (new and not so new) to fill the sci-fi gap for biotech aficionados.

Read more

Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a technology whereby two robots can work in unison to 3D-print a concrete structure. This method of concurrent 3D printing, known as swarm printing, paves the way for a team of mobile robots to print even bigger structures in the future. Developed by Assistant Professor Pham Quang Cuong and his team at NTU’s Singapore Centre for 3D Printing, this new multi-robot technology is reported in Automation in Construction. The NTU scientist was also behind the Ikea Bot project earlier this year, in which two robots assembled an Ikea chair in about nine minutes.

Using a specially formulated cement mix suitable for 3D printing, this new development will allow for unique concrete designs that are currently impossible with conventional casting. Structures can also be produced on demand and in a much shorter time.

Currently, 3D printing of large concrete structures requires huge printers that are larger than the printed objects themselves, which is unfeasible since most construction sites have space constraints. Using multiple mobile robots that can 3D print in sync means large structures and specially designed facades can be printed anywhere, as long as there is enough space for the robots to move around the work site.
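As a hedged sketch of the coordination idea, the Python snippet below splits one layer’s toolpath between two robots so they print concurrently, then checks that they never come closer than a minimum spacing if they move in lockstep. The path, split strategy, and safety distance are invented for illustration; the NTU system’s actual planning and synchronization are more involved.

# Toy coordination for two print robots sharing one layer's toolpath.
# The path, split strategy, and safety distance are illustrative assumptions,
# not the NTU planner.

def split_path(waypoints, n_robots=2):
    """Divide an ordered layer toolpath into contiguous segments, one per robot."""
    chunk = len(waypoints) // n_robots
    segments = [waypoints[i * chunk:(i + 1) * chunk] for i in range(n_robots - 1)]
    segments.append(waypoints[(n_robots - 1) * chunk:])  # last robot takes the remainder
    return segments

def min_separation(seg_a, seg_b):
    """Smallest distance between the two robots if each prints its segment in lockstep."""
    steps = min(len(seg_a), len(seg_b))
    return min(
        ((seg_a[t][0] - seg_b[t][0]) ** 2 + (seg_a[t][1] - seg_b[t][1]) ** 2) ** 0.5
        for t in range(steps)
    )

# A simple rectangular wall outline as the layer path (coordinates in metres).
layer = ([(x * 0.1, 0.0) for x in range(40)]
         + [(4.0, y * 0.1) for y in range(20)]
         + [(4.0 - x * 0.1, 2.0) for x in range(40)]
         + [(0.0, 2.0 - y * 0.1) for y in range(20)])

SAFETY = 0.5  # assumed minimum spacing between robots, metres
seg_a, seg_b = split_path(layer)
ok = min_separation(seg_a, seg_b) >= SAFETY
print(f"segments of {len(seg_a)} and {len(seg_b)} waypoints, safe spacing: {ok}")

Starting the two robots at opposite corners of the outline keeps them far apart for the whole layer; a real system would also have to plan the seam where the two printed segments meet.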

Read more