
Allow me to introduce you to someone who has the potential to be very important in the future of Bitcoin. His name is Balaji Srinivasan, and he is the chairman and co-founder of 21 Inc. What is 21 Inc.? It is the Bitcoin startup that secured the most venture capital of any Bitcoin company in history, at $116 million. What do they need $116 million in venture capital for? They are investing in “future proprietary products designed to drive mainstream adoption of Bitcoin.” With that in mind, 21 Inc.’s research has highlighted some interesting Bitcoin facts. Srinivasan shared one of them at the second annual Bitcoin Job Fair, held last weekend in Sunnyvale, California: a measure of just how big Bitcoin has become in the computing world.

Honestly, I looked online to find out what petahash and gigahash rates were, and that is one long rabbit hole, so I’ll leave the technical rambling to techies like Mr. Srinivasan. He makes the comparison to Google based on the fair assumption that Google runs about 1e7 servers, with roughly 10 Xeons per server and about 1e7 H/s per Xeon, which works out to 1 PH/s. One petahash equals 1,000,000 gigahashes, or 1,000 terahashes. Bitcoin reached 1 PH/s of computing speed on September 15th, 2013. It is now normally working at over 350 PH/s, or over 350,000,000 GH/s.
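The arithmetic behind Srinivasan’s comparison is simple enough to check directly. The sketch below plugs in his assumed figures (10 million servers, ~10 Xeons per server, ~1e7 H/s per Xeon — all estimates, not published Google numbers) against a 350 PH/s network:

```python
# Back-of-envelope check of Srinivasan's comparison, using his assumed
# (hypothetical) figures for Google's fleet.
servers = 10_000_000          # assumed number of Google servers (1e7)
xeons_per_server = 10         # assumed CPUs per server
hashes_per_xeon = 10_000_000  # assumed ~1e7 hashes/second per Xeon

google_hs = servers * xeons_per_server * hashes_per_xeon  # total H/s
google_phs = google_hs / 1e15         # 1 PH/s = 1e15 H/s

network_phs = 350                     # Bitcoin network hash rate, PH/s
share = google_phs / network_phs * 100

print(f"Google estimate: {google_phs:.0f} PH/s")
print(f"Share of a {network_phs} PH/s network: {share:.2f}%")
```

With these assumptions Google lands at exactly 1 PH/s, or under 0.3% of a 350 PH/s network — consistent with the “less than 1%” claim.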

“All of Google today would represent less than 1% of all of mining (Bitcoin operations worldwide). The sheer degree of what is happening in (Bitcoin) mining is not being appreciated by the press,” said Balaji Srinivasan at the Bitcoin Job Fair. “If we assume there are 10 million Google servers, and each of these servers is running, you can multiply that through and get one petahash. If they turned off all of their data centers and pointed them at Bitcoin (mining network), they would be less than 1% of the network.”


To many people, the introduction of the first Macintosh computer and its graphical user interface in 1984 is viewed as the dawn of creative computing. But if you ask Dr. Nick Montfort, a poet, computer scientist, and assistant professor of Digital Media at MIT, he’ll offer a different direction and definition for creative computing and its origins.

Defining Creative

Creative Computing was the name of a computer magazine that ran from 1974 through 1985. “Even before micro-computing, there was already this magazine extolling the capabilities of the computer to teach, to help people learn, help people explore and help them do different types of creative work, in literature, the arts, music and so on,” Montfort said.

“It was a time when people had a lot of hope that computing would enable people personally as artists and creators to do work. It was actually a different time than we’re in now. There are a few people working in those areas, but it’s not as widespread as hoped in the late ’70s or early ’80s.”

These days, Montfort notes that many people use the term “artificial intelligence” interchangeably with creative computing. While there are some parallels, Montfort said what is classically called AI isn’t the same as computational creativity. The difference, he says, is in the results.

“A lot of the ways in which AI is understood is the ability to achieve a particular known objective,” Montfort said. “In computational creativity, you’re trying to develop a system that will surprise you. If it does something you already knew about then, by definition, it’s not creative.”

Given that, Montfort quickly pointed out that creative computing can still come from known objectives.

“A lot of good creative computer work comes from doing things we already know computers can do well,” he said. “As a simple example, the difference between a computer as a producer of poetic language and person as a producer of poetic language is, the computer can just do it forever. The computer can just keep reproducing and, (with) that capability to bring it together with images to produce a visual display, now you’re able to do something new. There’s no technical accomplishment, but it’s beautiful nonetheless.”
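Montfort’s point — that a computer’s advantage as a producer of poetic language is simply that it never stops — can be sketched with a toy generator. This is purely an illustration, not any system Montfort describes; the word lists and line shape are invented:

```python
import itertools
import random

# Hypothetical vocabulary for a toy combinatorial poem generator.
ADJECTIVES = ["quiet", "electric", "hollow", "bright"]
NOUNS = ["river", "machine", "window", "signal"]
VERBS = ["remembers", "dissolves", "repeats", "flickers"]

def endless_lines(seed=0):
    """Yield an unbounded stream of short 'poetic' lines.

    A person writing by hand eventually stops; this generator never does,
    which is exactly the capability Montfort points to.
    """
    rng = random.Random(seed)
    while True:
        yield f"the {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} {rng.choice(VERBS)}"

# The stream is infinite, so we only peek at the first five lines.
for line in itertools.islice(endless_lines(), 5):
    print(line)
```

There is, as Montfort says, no technical accomplishment here — the interest lies in the unbounded output, not the algorithm.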

Models of Creativity

As a poet himself, another area of creative computing that Montfort keeps an eye on is the study of models of creativity used to imitate human creativity. While the goal may be to replicate human creativity, Montfort has a greater appreciation for the end results that don’t necessarily appear human-like.

“Even if you’re using a model of human creativity the way it’s done in computational creativity, you don’t have to try to make something human-like, (even though) some people will try to make human-like poetry,” Montfort said. “I’d much rather have a system that is doing something radically different than human artistic practice and making these bizarre combinations than just seeing the results of imitative work.”

To further illustrate his point, Montfort cited a recent computer-generated novel contest that yielded some extraordinary, and unusual, results. Those novels were nothing close to what a human might have written, he said, but depending on the eye of the beholder, they at least bode well for the future.

“A lot of the future of creative computing is individual engagement with creative types of programs,” Montfort said. “That’s not just using drawing programs or other facilities to do work or using prepackaged apps that might assist creatively in the process of composition or creation, but it’s actually going and having people work to code themselves, which they can do with existing programs, modifying them, learning about code and developing their abilities in very informal ways.”

That future of creative computing lies not in industrial creativity or video games, but rather a sharing of information and revisioning of ideas in the multiple hands and minds of connected programmers, Montfort believes.

“One doesn’t have to get a computer science degree or even take a formal class. I think the perspective of free software and open source is very important to the future of creative programming,” Montfort said. “…If people take an academic project and provide their work as free software, that’s great for all sorts of reasons. It allows people to replicate your results, it allows people to build on your research, but also, people might take the work that you’ve done and inflect it in different types of artistic and creative ways.”

Who needs a peep hole when a wifi network will do? Researchers from MIT have developed technology that uses wireless signals to see your silhouette through a wall—and it can even tell you apart from other people, too.

The team from MIT’s Computer Science and Artificial Intelligence Lab are no strangers to using wireless signals to see what’s happening on the other side of a wall. In 2013, they showed off software that could use variations in wifi signal to detect the presence of human motion from the other side of a wall. But in the last two years they’ve been busy developing the technique, and now they’ve unveiled the obvious — if slightly alarming — natural progression: they can use the wireless reflections bouncing off a human body to see the silhouette of a person standing behind a wall.
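The real MIT systems use sophisticated radar-style RF processing, but the core 2013 idea — that a moving person perturbs a received wireless signal — can be caricatured with a toy detector. Everything here is invented for illustration (the sample values, the window size, the threshold); it is nothing like the actual RF-Capture pipeline:

```python
from statistics import pvariance

def motion_windows(rssi, window=4, threshold=1.0):
    """Return start indices of windows whose signal variance exceeds threshold.

    A still scene yields a nearly constant received signal; a person
    moving behind a wall causes large swings, i.e. high variance.
    """
    hits = []
    for i in range(0, len(rssi) - window + 1, window):
        if pvariance(rssi[i:i + window]) > threshold:
            hits.append(i)
    return hits

# Simulated signal strengths (dBm): steady, then a disturbance, then steady.
samples = [-40.1, -40.0, -40.2, -40.1,   # still
           -35.0, -44.0, -33.0, -46.0,   # motion: large swings
           -40.0, -40.1, -40.0, -40.2]   # still again
print(motion_windows(samples))  # prints [4]: only the middle window trips
```

Going from this kind of crude motion flag to a full silhouette is the leap the CSAIL team made between 2013 and RF-Capture.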

Not only that, the team’s technique, known as RF-Capture, is accurate enough to track the hand of a human and, with some repeated measurements, the system can even be trained to recognise different people based just on their wifi silhouette. The research, which is to be presented at SIGGRAPH Asia next month, was published this morning on the research group’s website.


Theoretical physicists have proposed a scalable quantum computer architecture. The new model, developed by Wolfgang Lechner, Philipp Hauke and Peter Zoller, overcomes fundamental limitations of programmability in current approaches that aim at solving real-world general optimization problems by exploiting quantum mechanics.

Within the last several years, considerable progress has been made in developing a quantum computer, which holds the promise of solving problems far more efficiently than a classical computer. Physicists are now able to realize the basic building blocks, the quantum bits (qubits), in a laboratory, control them, and use them for simple computations. For practical application, a particular class of quantum computers, the so-called adiabatic quantum computer, has recently generated a lot of interest among researchers and industry. It is designed to solve real-world optimization problems that conventional computers are not able to tackle. All current approaches to adiabatic quantum computation face the same challenge: the problem is encoded in the interactions between qubits, and encoding a generic problem requires all-to-all connectivity, but the locality of the physical qubits limits the available interactions.
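The connectivity problem described above is a counting argument: a generic optimization problem needs a coupling between every pair of qubits, which grows quadratically, while real hardware typically offers only a constant number of local couplings per qubit (for example, nearest neighbours on a 2D grid). A small sketch of the mismatch, with the grid layout chosen purely as an illustrative example:

```python
def all_to_all_couplings(n):
    """Pairwise interactions needed to encode a generic problem on n qubits."""
    return n * (n - 1) // 2

def grid_couplings(side):
    """Nearest-neighbour couplings available on a side x side 2D grid:
    side*(side-1) horizontal edges plus the same number of vertical ones."""
    return 2 * side * (side - 1)

# Compare required vs. available couplings as the qubit count grows.
for n in (16, 64, 256):
    side = int(n ** 0.5)
    print(f"{n:4d} qubits: need {all_to_all_couplings(n):6d}, "
          f"grid offers {grid_couplings(side):4d}")
```

The gap widens rapidly — at 256 qubits an all-to-all encoding needs 32,640 couplings while a 16x16 grid provides only 480 — which is why a scalable architecture must find a way around direct all-to-all wiring.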

“The programming language of these systems is the individual interaction between each physical qubit. The possible input is determined by the hardware. This means that all these approaches face a fundamental challenge when trying to build a fully programmable quantum computer,” explains Wolfgang Lechner from the Institute for Quantum Optics and Quantum Information (IQOQI) at the Austrian Academy of Sciences in Innsbruck.
