
— The Atlantic

Since its debut in 2012, Google Glass has faced a strong headwind. Even on celebrities it looked, well, dorky. The device itself, once released into the wild, was seen as half-baked, and developers lost interest. The press, already leery, was quick to pile on, especially when Glass's users quickly became Glass's own worst enemy.

Many early adopters who got their hands on the device (and paid $1,500 for the privilege under the Google Explorer program) were underwhelmed. “I found that it was not very useful for very much, and it tended to disturb people around me that I have this thing,” said James Katz, Boston University’s director of emerging media studies, to MIT Technology Review.
Read more

Where will Bitcoin be a few years from now?
The recently concluded Bitcoin & the Blockchain Summit, held in San Francisco on January 27, proved a vivid source of both anxiety and inspiration. While speakers tackled Bitcoin's technological limits and the possible drawbacks of impending regulation, Bitcoin advocate Andreas Antonopoulos lifted everyone's hopes by arguing that Bitcoin will eventually survive and flourish. He did so with no graphics or slides to support his claim, just his confidence and conviction that it will, no matter what.

On the currency being weak

Some have claimed that Bitcoin's technology will survive even if the currency does not. Antonopoulos, however, argues that Bitcoin's technology, network, and currency are interdependent: none works without the others. He said: "A consensus network that bases its value on the currency does not work without the currency."

On why Bitcoin works

Antonopoulos underscores that Bitcoin works because it is a dumb, transaction-processing network. Calling Bitcoin dumb is not a slight; he regards that dumbness as Bitcoin's true source of strength. In his view, it is a dumb network that supports smart devices, pushing all of the intelligence to the edge, and that is what enables innovation without permission.

On being 2014’s worst investment

Antonopoulos also argues that those who call bitcoin a bad investment consider only the price, ignoring other equally important factors such as ongoing investment and technological innovation.

For instance, some 500 startups were created in 2014, attracting about $500 million in investment and producing thousands of jobs, a portion of them in Bitcoin gambling. The same year also saw the introduction of two genuinely significant technologies: multi-signature (multi-sig) and hierarchical deterministic (HD) wallets.

On waiting for Bitcoin to flourish in 2017

Antonopoulos then stated with unwavering certainty: “Give us two years. Now what happens when you throw 500 companies and 10,000 developers at the problem? Give (it) two years and you will see some pretty amazing things in bitcoin.”

On mining updates

Meanwhile, mining bitcoins is proving more challenging than before. A Bitcoin mining facility in China, for instance, generates 4,050 bitcoins every month, equivalent to around $1.5 million, but not without repercussions and complexities. The entrepreneurs running the facility are finding that as the network's difficulty and total computing power increase, their return per unit of computing power gradually declines.

Typically, the entire mining operation uses about 1,250 kilowatt-hours of electricity, putting the facility's electricity bill at about $80,000 every month. Nowadays, their miners produce 20–25 bitcoins a day, significantly fewer than the roughly 100 bitcoins a day they mined previously.
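
As a rough sanity check, the quoted figures can be combined into a back-of-the-envelope calculation (a minimal sketch in Python that uses only the numbers cited above and ignores hardware, cooling, and staffing costs):

```python
# Back-of-the-envelope economics for the mining facility described above.
# Only figures quoted in the article are used; everything else is ignored,
# so this is illustrative rather than an account of the mine's actual books.

MONTHLY_BTC = 4050               # bitcoins mined per month (as quoted)
MONTHLY_REVENUE_USD = 1_500_000  # "around $1.5 million" (as quoted)
ELECTRICITY_BILL_USD = 80_000    # monthly power bill (as quoted)

implied_btc_price = MONTHLY_REVENUE_USD / MONTHLY_BTC
electricity_cost_per_btc = ELECTRICITY_BILL_USD / MONTHLY_BTC
margin_before_other_costs = MONTHLY_REVENUE_USD - ELECTRICITY_BILL_USD

print(f"Implied bitcoin price: ${implied_btc_price:,.0f}")
print(f"Electricity cost per mined bitcoin: ${electricity_cost_per_btc:,.2f}")
print(f"Monthly margin before other costs: ${margin_before_other_costs:,.0f}")
```

At these figures, electricity is only a small fraction of revenue; the squeeze described above comes instead from the shrinking number of bitcoins mined per day as the network's difficulty rises.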

On leaving a thought

Confidence in Bitcoin's bright future has been restored, thanks to Antonopoulos' contagious enthusiasm and resolute belief in its potential. Still, we can only wonder what the increasing difficulty of mining means for the cryptocurrency's overall performance and future, even if Bitcoin's unique features have so far proven strong and resilient enough to withstand such challenges.


How will you positively impact billions of people?

At Singularity University, this question is often posed to program participants packed into the classroom at the NASA Research Park in the heart of Silicon Valley. Since 2009, select groups of entrepreneurs and innovators have had their perspective shifted to exponential thinking through in-depth lectures, deep discussions, and engagement in workshops.

Yet in that time, only a few thousand individuals from around the world have had the opportunity to transform SU’s insights on accelerating technologies into cutting-edge solutions aimed at solving humanity’s greatest problems. But not anymore.

Read more

Steven Kotler — Forbes

*This article co-written with author Ken Goffman.

One of the things that happens when you write books about the future is you get to watch your predictions fail. This is nothing new, of course, but what’s different this time around is the direction of those failures.

Used to be, folks were way too bullish about technology and way too optimistic with their predictions. Flying cars and Mars missions being two classic—they should be here by now—examples. The Jetsons being another.

But today, the exact opposite is happening.
Read more

By Michael S. Malone — MIT Technology Review

The view from Mike Steep’s office on Palo Alto’s Coyote Hill is one of the greatest in Silicon Valley.

Beyond the black and rosewood office furniture, the two large computer monitors, and three Indonesian artifacts to ward off evil spirits, Steep looks out onto a panorama stretching from Redwood City to Santa Clara. This is the historic Silicon Valley, the birthplace of Hewlett-Packard and Fairchild Semiconductor, Intel and Atari, Netscape and Google. This is the home of innovations that have shaped the modern world. So is Steep's employer: Xerox's Palo Alto Research Center, or PARC, where personal computing and key computer-networking technologies were invented, and where he is senior vice president of global business operations.

And yet Mike Steep is disappointed at what he sees out the windows.
Read more

— CoinDesk
Cameron and Tyler Winklevoss aren’t shy about issuing bold predictions for Gemini, their recently revealed bitcoin exchange project.

Calling it the “NASDAQ or Google of bitcoin”, the president and CEO, respectively, believe Gemini will be the fully regulated, fully compliant and fully banked institution the US bitcoin ecosystem needs to develop to its full potential.

In a new interview with CoinDesk, the brothers – prominent bitcoin investors and two of the largest-known holders of bitcoin – opened up about Gemini, discussing why they feel the exchange can become the market leader in what has been an increasingly active part of the bitcoin space.

Read more

Quartz

Bill Gates hosted a Reddit Ask Me Anything session yesterday, and in between pushing his philanthropic agenda and sharing his Super Bowl pick (Seahawks, duh), the Microsoft co-founder divulged that he is one of a growing list of tech luminaries who have reservations about artificial intelligence.

In response to Reddit user beastcoin’s question, “How much of an existential threat do you think machine superintelligence will be and do you believe full end-to-end encryption for all internet activity [sic] can do anything to protect us from that threat (eg. the more the machines can’t know, the better)??” Gates wrote this (he didn’t answer the second part of the question):

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Read more

A revolutionary Finding waits for the final Clinch: c-global

Otto E. Rossler

Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany

Abstract: The global nature of the speed of light in vacuum, c, was reluctantly given up by Einstein in December of 1907. A revival of the status c had enjoyed during the previous two and a half years, from mid-1905 to late 1907, has been in the literature for several years now. The consequences of c-global for cosmology and black-hole theory are far-reaching. Black holes are an acute concern at present because there is an attempt to produce them down on earth, so the question of whether a global-c transform of the Einstein field equations can be found represents a vital issue, only days before an experiment based on the assumed absence of the new result is about to be ignited. (December 22, 2014, February 6, 2015)

Imagine that Einstein's c were not just a local constant of nature everywhere, as has reluctantly been believed since late 1907, but rather a global constant. This return to the original 1905–1907 view would revolutionize physics. First, cosmic expansion, whose speed by definition is added to the local c, would cease to be a physical option. Second, quantum mechanics would cease to generate problems in its unification with general relativity (or rather vice versa). Third, black holes would be stable and hence show their voraciousness at any size, even the smallest.

But is the speed of light c not a global constant anyhow in general relativity? While nearly every layman and most physicists believe so, c actually lost this status in late 1907. To see this, it suffices to look at the famous "Shapiro time delay": light from a distant satellite, when grazing the sun on its way toward earth, shows an increased travel time compared to the same path in the sun's absence [1]. This empirically verified implication of Einstein's equation is canonically interpreted as a locally masked reduction of the speed of light c in the vicinity of the sun [1]. But if c is a global constant, the real reason for the delay is automatically an increased depth of the spacetime funnel around the sun [2].
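
For reference, the textbook weak-field expression for this extra travel time, to leading order and independent of which interpretation one favors, is

\[
\Delta t \;\approx\; \frac{2GM_\odot}{c^{3}}\,
\ln\!\left(\frac{4\,r_{\mathrm{sat}}\,r_{\oplus}}{b^{2}}\right),
\]

where M_\odot is the solar mass, r_sat and r_\oplus are the distances of the satellite and the earth from the sun, and b is the ray's closest approach to the sun (roughly the solar radius for a grazing ray).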

Is this unfamiliar proposal the physically correct one?

There are two pieces of evidence in favor of this being so, each individually sufficient. First, the famous "Schwarzschild solution" of the Einstein field equations was shown to possess a global-c transform [3]; hence the global constancy of c exists mathematically. Second, the famous "equivalence principle" between ordinary kinematic acceleration and gravitational acceleration, postulated by Einstein in late 1907, is based solely on special relativity with its well-known global c. The equivalence principle was recently shown not to imply a reduction of c further downstairs in the constantly accelerating, extended Einstein rocketship [4]. A third piece of evidence exists by implication: a global-c transform of the full Einstein field equations must exist, even though it still waits to be written down explicitly.
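
The special-relativistic input to that rocketship argument is the familiar first-order frequency shift: in a rocket of length h undergoing constant acceleration g, light emitted at the rear arrives at the tip shifted by

\[
\frac{\nu_{\mathrm{tip}}}{\nu_{\mathrm{rear}}} \;\approx\; 1-\frac{g\,h}{c^{2}},
\qquad \frac{g\,h}{c^{2}} \ll 1 ,
\]

which the equivalence principle identifies with the gravitational redshift. What this shift does, or does not, imply for the value of c further down is precisely the point at issue in [4].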

But why not wait before giving c-global broad visibility in the scientific community, given the embarrassing cosmological consequence it entails, as mentioned above? It is c-global's other big implication, regarding black holes, that justifies and necessitates the visibility. Why?

It is because black holes could be produced down on earth starting next month [5].

The official safety report of the experiment [6] is already seven years old. Only an absolutely non-ignorable global-c transform of the full Einstein field equations can apparently force the almost seven-year-old LSAG safety report of the most prestigious experiment in history to be renewed in time. "In time" means before the restart at twice the world-record energy scheduled for next month [5]. The reward to the scientific journal that accepts this brief note for publication will lie in the emergence, in time, of the existing if not yet explicitly written "global-c Einstein equation." This task is a superhuman one indeed, because finding the transform requires a unique strength of mind (or else serendipity), so the world will likely have to wait for decades. Therefore the manpower, the many alerted readers, of this Big Blog is needed as a planetary resource in the face of the rapidly closing time window.

In view of CERN's open refusal to update its seven-year-old Safety Report before the restart at doubled world-record energies, one cannot be more grateful to Stephen Hawking for his timely warning [7]. There never was a stronger reason to admire this unique person and personality.

I thank Bill Seaman for having alerted me to Stephen Hawking’s latest coup. For J.O.R.

References

[1] I.I. Shapiro, Fourth test of general relativity. Physical Review Letters 13, 789–791 (1964).
[2] A half-3-pseudosphere replaces the Flamm paraboloid: https://lifeboat.com/blog/2013/03/ccc-constant-c-catastrophe
[3] O.E. Rossler, Abraham-like return to constant c in general relativity: Gothic-R theorem demonstrated in Schwarzschild metric. Fractal Spacetime and Noncommutative Geometry in Quantum and High Energy Physics 2, 1–14 (2012). Preprint on: http://www.wissensnavigator.com/documents/chaos.pdf
[4] O.E. Rossler, Equivalence principle implies gravitational-redshift proportional space dilation and hence global constancy of c. European Scientific Journal 10(9), 112–117 (2014).
[5] CERN: see http://www.newseveryday.com/articles/5537/20150101/cern-larg…h-2015.htm
[6] Official LHC Safety Report, latest edition: http://lsag.web.cern.ch/lsag/LSAG-Report.pdf (note the date 2008)
[7] https://www.youtube.com/watch?v=KJdc3hkcCUc#t=31

Mark Wilson — FastCo

“Obviously, it’s not a thoroughly vetted concept, but I think it’s extremely intriguing where drones might show up,” says Mark Rolston, founder of argodesign. “It would be nice to see them used this way, rather than another military function or more photography.”

The idea was born from a team brainstorming session around how health care could become more accessible. The designers first thought about how they could build a better ambulance, and the rise of autonomous vehicles inspired them to consider a self-driving ambulance. Then they thought of helicopters and drones, and the rest developed from there.

Read more