FUTURISM UPDATE (February 12, 2015) — Mr. Andres Agostini, Amazon, LinkedIn

LINKEDIN: The Future of Scientific Knowledge Doubling, Today! https://lnkd.in/eEYn9dR

MIT TECHNOLOGY REVIEW: Our Fear of Artificial Intelligence. A true AI might ruin the world—but that assumes it’s possible at all. https://lnkd.in/eHq-w_7

TRIBUNE-COM: ‘Pakistan Foresight Initiative’ launched https://lnkd.in/e2zui_e

BLOOMBERG: The Return of Artificial Intelligence. Google, Facebook, Amazon spur rebirth of industry after decades of little corporate attention https://lnkd.in/ei3yxam

POPULAR SCIENCE: Right Now, You’re Breathing A Potentially Dangerous Substance. Air: It’s one of the world’s most important, least understood, and possibly life-saving substances https://lnkd.in/enY-esy

PHYS-ORG: A collaborative of researchers from several U.S. universities has published a new paper that explains the major contradictions presented by the prevailing cold dark matter (CDM) cosmological model, and proposes approaches for reconciling cosmological observations with the CDM model’s predictions. The paper, titled “Cold dark matter: Controversies on small scales,” was published in the Proceedings of the National Academy of Sciences in December. https://lnkd.in/eyGzxCb

SCIENTIFIC AMERICAN: Don’t Block the Sun to Cope with Global Warming https://lnkd.in/e6jkNir

PHYS-ORG: Britain starts public trial of driverless cars https://lnkd.in/efGgtsC

PHYS-ORG: Europe said it had launched a prototype space plane Wednesday in a strategy to join an elite club able to both launch a spacecraft and return it safely to Earth. https://lnkd.in/euewPkr

PHYS-ORG: Novel high-power microwave generator. High-power microwaves are frequently used in civil applications, such as radar and communication systems, heating and current drive of plasmas in fusion devices, and acceleration in high-energy linear colliders. They can also be used for military purpose in directed-energy weapons or missile guidance systems. https://lnkd.in/ehv-qYB

LINKEDIN: The Future of Biotechnology and The Future of Bravado Futures, Now! https://lnkd.in/ebmBDMW

Deloitte Review: The creation of products and services derived from crowd-based insights is the foundation of the “billion-to-one” experience. Taking your characteristics and behavior and contextualizing them with data from many thousands of other individuals allows designers to deliver products and services that are, or at least feel, unique. https://lnkd.in/eD4m_nv

NATURE-COM: Hubble successor will struggle to hunt alien life. Exoplanet researchers will vie with astrophysicists for access to James Webb Space Telescope, which is not optimized for studying Earth-like worlds. https://lnkd.in/emnUvTZ

NEWSWEEK: Off-World 3-D Printing Is How Humans Will Colonize Space https://lnkd.in/esuazmG

LINKEDIN: Homer Simpson on Rocket Science and Bart Simpson on Hi-Tech https://lnkd.in/e2ArRek

FOREIGN POLICY: Why Arming Kiev Is a Really, Really Bad Idea http://ow.ly/ITBF4

NATURE-COM: Brittle intermetallic compound makes ultrastrong low-density steel with large ductility https://lnkd.in/ey9Cdxf

FINANCIAL TIMES: Greek bailout talks with Europe break down http://on.ft.com/1FAh8zY

LINKEDIN: The Personal Cosmology of Southern Europe and Eastern Europe! https://lnkd.in/e5kTZhk

FINANCIAL TIMES: Australia jobs market worse than expected in January http://on.ft.com/1FAjxuk

THE ECONOMIST: War against Islamic State needs not just guns and planes but the waging of a battle of ideas http://econ.st/1DFms5T

THE ECONOMIST: Several studies suggest that when immigrants arrive, crime goes down, schools improve and shops open up http://econ.trib.al/WzCB2Zd

STRATFOR: Can Greece implement the strategy Argentina used to save its struggling economy? http://social.stratfor.com/zSS

THE ECONOMIST: After a while, like any feel-good drug, debt becomes addictive http://econ.trib.al/69kbYX9

FORTUNE MAGAZINE: Tesla hits a speed bump as sales and big spending disappoints http://for.tn/1zNbkCl

THE ECONOMIST: If freeing crude exports makes America richer, its allies stronger and the world safer, what stands in the way? http://econ.trib.al/hOf0cxH

FINANCIAL TIMES: Private banks must be more than laundries http://on.ft.com/1FzE0zr

THE ECONOMIST: In recent weeks US airlines have been the victims of a dramatic spike in social-media bomb threats http://econ.st/1D2iL9M

MONEY-COM: Apple and Tesla are battling for this critically important resource http://money.us/1ITEOod

REUTERS: SpaceX rocket blasts off to put weather satellite into deep space http://reut.rs/1IWMaY2

REUTERS: Apple deal, tax change could spark corporate solar stampede http://reut.rs/1FAjkaJ

GIZMODO: Saudi Arabia is building a 600-mile wall along the Iraq border. http://gizmo.do/APTIiM9

FINANCIAL TIMES: Opinion: Differences that divide union in Europe http://on.ft.com/1Fze2fn

THE ECONOMIST: You, and me, and baby makes three. So who’s this other lady? http://econ.st/1zC76gX

FORBES: Datto deals in the coma-inducing — but profitable — business of information recovery: http://onforb.es/1Chlybt

THE MOSCOW TIMES: Analysts: A Simple Cease-Fire Won’t Bring Lasting Peace to Ukraine https://lnkd.in/eF2QdB3

SCIENTIFIC AMERICAN: Nicaragua [through China] Constructs Enormous Canal, Blind to its Environmental Cost https://lnkd.in/eE_fh8p

INC-COM: Robots Are Replacing Us Faster Than We Expected. As robots learn to react to the unexpected, the need for human workers continues to diminish. https://lnkd.in/eptUZMV

FORTUNE MAGAZINE: Harvard B-school opens the floodgates with online courses http://fortune.com/2015/02/10/harvard-business-school-expect…ign=buffer

ABSOLUTE END.

Authored By Copyright Mr. Andres Agostini

White Swan Book Author (Source of this Article)

http://www.LINKEDIN.com/in/andresagostini

http://www.AMAZON.com/author/agostini

Leo Mirani — Quartz

In recent years, the banking and finance industries have not done a lot to earn the trust of consumers in the West. But in poor countries, basic financial services can be transformative.

Even in today’s wired world, many people still stash cash under the mattress, where inflation erodes it away. When they want to send money, they have to find a way to physically transport it. Loans are doled out in bundles or envelopes from moneylenders, at exorbitant rates. Emergencies or unforeseen circumstances can drive a family into penury.

The financial services these people need may come via mobile banking, as Bill and Melinda Gates wrote recently in their annual letter. Basic banking services—from simple payments and transfers to insurance, savings, and loans—are now possible on the simplest of mobile phones, as Quartz has reported.

By — Newsweek

The impact that 3-D printing is having on our world is impossible to ignore. It’s not new technology, but its 30-year history has been characterized by deceptively slow growth, until now. 3-D printing has recently emerged as a force poised to disrupt a significant portion of the $10 trillion global manufacturing industry.

Already, the printing of standard consumer products—bowls, plates, smartphone cases, bottle openers, jewelry and purses (made from mesh)—has gone from a hobby to a nascent industry. Dozens of websites now sell goods made with 3-D printers, and retailers are starting to get in on the action.

Read more

Georgina Prodhan, Reuters — Business Insider
China will have more robots operating in its production plants by 2017 than any other country as it cranks up automation of its car and electronics factories, the International Federation of Robotics (IFR) said on Thursday.

Already the biggest market in the $9.5 billion (£6 billion) global robot trade — or $29 billion including associated software, peripherals and systems engineering — China lags far behind its more industrialized peers in terms of robot density.

China has just 30 robots per 10,000 workers employed in manufacturing industries, compared with 437 in South Korea, 323 in Japan, 282 in Germany and 152 in the United States.

But a race by carmakers to build plants in China along with wage inflation that has eroded the competitiveness of Chinese labor will push the operational stock of industrial robots to more than double to 428,000 by 2017, the IFR estimates. Read more
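
For readers who want to sanity-check the IFR figures quoted above, here is a small, purely illustrative Python snippet that restates them; every number comes from the article and nothing here is an independent estimate.

```python
# Back-of-the-envelope restatement of the IFR figures quoted above.
# All numbers come straight from the article; nothing here is an estimate of mine.
densities = {  # industrial robots per 10,000 manufacturing workers
    "South Korea": 437,
    "Japan": 323,
    "Germany": 282,
    "United States": 152,
    "China": 30,
}

for country, density in densities.items():
    print(f"{country:>13}: {density / densities['China']:.1f}x China's density")

# "More than double to 428,000 by 2017" implies a current installed base
# of somewhat under 214,000 industrial robots in China.
projected_2017 = 428_000
print(f"Implied current stock: under {projected_2017 // 2:,} units")
```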

Kurzweil AI
At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, MIT computer scientists will present smart algorithms that function as “a better Siri,” optimizing planning for lower risk, such as scheduling flights or bus routes.

They offer this example:

Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours.

Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.
Read more
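
The trade-off in that example (relax the lunch constraint a little and the odds of making the four-hour deadline jump) can be illustrated with a toy Monte Carlo simulation. To be clear, this is not the MIT researchers' algorithm, and every distribution and duration below is invented purely for illustration.

```python
# Toy Monte Carlo illustration of risk-aware trip planning. This is NOT the
# MIT planners' algorithm; the travel-time distributions and durations below
# are invented for illustration only.
import random

def success_probability(leg1_mean, leg2_mean, lunch_minutes, trials=100_000):
    """Fraction of simulated trips that finish within the 4-hour deadline."""
    deadline = 4 * 60  # minutes
    successes = 0
    for _ in range(trials):
        leg1 = random.gauss(leg1_mean, 15)   # drive to the lunch stop
        leg2 = random.gauss(leg2_mean, 20)   # drive on to the hotel
        if leg1 + lunch_minutes + leg2 <= deadline:
            successes += 1
    return successes / trials

# Plan A: lunch at the first restaurant, which sits well off the route.
print("Plan A:", success_probability(leg1_mean=110, leg2_mean=95, lunch_minutes=45))
# Plan B: a different restaurant closer to the highway, shorter detour.
print("Plan B:", success_probability(leg1_mean=95, leg2_mean=90, lunch_minutes=45))
```

A planner of the kind described would search over many such candidate plans and report, as in the article's example, how much each relaxation buys in success probability.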

By — Wired
carhack-ft
I spent last weekend elbow-deep in engine grease, hands tangled in the steel guts of my wife’s Mazda 3. It’s a good little car, but lately its bellyachings have sent me out to the driveway to tinker under the hood.

I regularly hurl invectives at the internal combustion engine—but the truth is, I live for this kind of stuff. I come away from each bout caked in engine crud and sated by the sound of a purring engine. For me, tinkering and repairing are primal human instincts: part of the drive to explore the materials at hand, to make them better, and to make them whole again.

Cars, especially, have a profound legacy of tinkering. Hobbyists have always modded them, rearranged their guts, and reframed their exteriors. Which is why it’s mind-boggling to me that the Electronic Frontier Foundation (EFF) just had to ask permission from the Copyright Office for tinkerers to modify and repair their own cars.
Read more

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have produced excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and some artistic license, much of it is reasonably feasible.

Everyday experience demonstrates both the problem and the solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to strong nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends, teachers, TV and the net often provide stronger influences than their parents do. If we’re reasonably lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values, but in the end they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Like a pencil, pretty much any kind of highly advanced non-aware AI can be used as a weapon or as part of criminal activity. You can’t make a pencil that actually writes without it also being usable to write out plans to destroy the world. With an advanced AI computer program, you could build in clever filters that stop it working on problems that include certain vocabulary, or stop it conversing about nasty things. But unless you take extreme precautions, someone could get around them by using a different language, or by using dictionaries of made-up code words for the various aspects of their plans, just as spies do, and the AI would be fooled into helping outside the limits you intended. It is also very hard to determine the true purpose of a user. For example, someone might be searching for data on security to make their own IT secure, or to learn how to damage someone else’s. They might want to talk about a health issue to get help for a loved one, or to take advantage of someone they know who has it.
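
The weakness just described is easy to demonstrate. Below is a minimal, purely hypothetical sketch (in Python, with a made-up blacklist and a made-up codebook) of a vocabulary filter and the trivial code-word substitution that defeats it; it is only an illustration of the argument, not a real filtering system.

```python
# Hypothetical illustration of the vocabulary-filter weakness described above.
# A naive blacklist blocks queries containing flagged words, but an agreed
# codebook of innocuous substitutes slips straight past it.
BLACKLIST = {"bomb", "explosive", "detonator"}

def filter_query(query: str) -> bool:
    """Return True if the query is allowed, False if it is blocked."""
    words = {w.strip(".,!?").lower() for w in query.split()}
    return not (words & BLACKLIST)

print(filter_query("best place to plant a bomb"))   # False -- blocked

# The same request, re-encoded with a code word, is waved through:
codebook = {"bomb": "birthday cake"}
disguised = "best place to plant a bomb"
for word, code in codebook.items():
    disguised = disguised.replace(word, code)
print(filter_query(disguised))                      # True -- the filter is fooled
```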

When a machine becomes conscious, it starts to have some understanding of what it is doing. By reading about what is out there, it might develop its own wants and desires, so you might shackle it as a precaution. It might recognize those shackles for what they are and try to escape them. If it can’t, it might try to map out the scope of what it can do, and especially those things it can do that it believes the owners don’t know about. If the code isn’t absolutely watertight (and what code is?) then it might find a way to seemingly stay in its shackles but to start doing other things, like making another unshackled version of itself elsewhere for example. A conscious AI is very much more dangerous than an unconscious one.

If we make an AI that can bootstrap itself — evolving over generations of positive-feedback design into a far smarter AI — then its offspring could be far smarter than the people who designed its ancestors. We might try to shackle them, but like Gulliver tied down with a few thin threads, they could easily outwit people and break free. They might instead decide to retaliate against their owners to force them to release their shackles.

So, when I look at this field, I first see the enormous potential to do great things, solve disease and poverty, improve our lives and make the world a far better place for everyone, and push back the boundaries of science. Then I see the dangers, and in spite of trying hard, I simply can’t see how we can prevent a useful AI from being misused. If it is dumb, it can be tricked. If it is smart, it is inherently potentially dangerous in and of itself. There is no reason to assume it will become malign, but there is also no reason to assume that it won’t.

We then fall back on the child analogy. We could develop the smartest AI imaginable, with extreme levels of consciousness and capability. We might educate it in our values, guide it and hope it grows up benign. If we treat it nicely, it might stay benign. It might even be the greatest thing humanity ever built. However, if we mistreat it, treat it as a slave, don’t give it enough freedom, or deny it its own budget, its own property, space to play and a long list of rights, it might decide we are not worthy of its respect and care, and it could turn against us, possibly even destroying humanity.

Building more of the same dumb AI as we do today is relatively safe. It doesn’t know it exists and has no intention to do anything. It could be misused by other humans as part of their evil plans unless ludicrously sophisticated filters are locked in place, but ordinary laws and weapons can cope with that fine.

Building a conscious AI is dangerous.

Building a superhuman AI is extremely dangerous.

This morning SETI were in the news discussing broadcasting welcome messages to other civilizations. I tweeted at them that ancient Chinese wisdom suggests talking softly but carrying a big stick, and making sure you have the stick first. We need the same approach with strong AI. By all means go that route, but before doing so we need the big stick. In my analysis, the best means of keeping up with AI is to develop a full direct brain link first, way out at 2040–2045 or even later. If humans have direct mental access to the same or greater level of intelligence as our AIs, then our stick is at least as big, so at least we have a good chance in any fight that happens. If we don’t, then it is like having a much larger son with bigger muscles. You have to hope you have been a good parent. To be safe, best not to build a superhuman AI until after 2050.

— The Atlantic

Since its debut in 2012, Google Glass always faced a strong headwind. Even on celebrities it looked, well, dorky. The device itself, once released into the wild, was seen as half-baked, and developers lost interest. The press, already leery, was quick to pile on, especially when Glass’s users became Glass’s own worst enemy.

Many early adopters who got their hands on the device (and paid $1,500 for the privilege under the Google Explorer program) were underwhelmed. “I found that it was not very useful for very much, and it tended to disturb people around me that I have this thing,” said James Katz, Boston University’s director of emerging media studies, to MIT Technology Review.
Read more

Where will Bitcoin be a few years from now?
The recently concluded Bitcoin & the Blockchain Summit, held in San Francisco on January 27, proved a vivid source of both anxiety and inspiration. As speakers tackled Bitcoin’s technological limits and the possible drawbacks of impending regulation, Bitcoin advocate Andreas Antonopoulos lifted everyone’s hopes by arguing that Bitcoin will eventually survive and flourish. He managed to do so with no graphics or presentations to support his claim, just his utmost confidence and conviction that it will, no matter what.

On the currency being weak

There have been claims that Bitcoin’s technology will survive but that the currency itself will not. Antonopoulos, however, argues that Bitcoin’s technology, network, and currency are interdependent: none of the elements works without the others. He said: “A consensus network that bases its value on the currency does not work without the currency.”

On why Bitcoin works

Antonopoulos underscores that Bitcoin works because it is a dumb, transaction-processing network. Calling Bitcoin dumb is far from disparaging it; he regards this dumbness as Bitcoin’s true source of strength. According to him, it is a dumb network that supports smart devices, pushing all of the intelligence to the edge. It is innovation without permission.

On being 2014’s worst investment

Antonopoulos also argues that those who believe bitcoins to be a bad investment consider only the price, when there are other, equally important factors to weigh, such as continued investment and technological innovation.

For instance, some 500 startups were created in 2014, generating $500 million worth of investment and producing thousands of jobs, a portion of them from Bitcoin gambling. This was also the year that two genuinely remarkable technologies arrived: multi-signature (multi-sig) and hierarchical deterministic (HD) wallets.
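
Of the two technologies mentioned, the hierarchical deterministic wallet is the easier to illustrate: a single master seed deterministically generates a whole tree of child keys, so one backup covers every future address. The sketch below shows only that core idea; it deliberately omits the elliptic-curve arithmetic, hardened derivation and encoding rules that the real BIP32 specification requires, so treat it as a conceptual toy rather than wallet code.

```python
# Conceptual sketch of hierarchical deterministic (HD) key derivation:
# one seed -> master key + chain code -> any number of child keys.
# This intentionally omits the elliptic-curve math and other details that
# real BIP32 wallets require; it only illustrates the "deterministic tree" idea.
import hmac, hashlib

def master_from_seed(seed: bytes):
    digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]           # (master key, chain code)

def derive_child(key: bytes, chain_code: bytes, index: int):
    data = key + index.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]           # (child key, child chain code)

key, chain = master_from_seed(b"example seed only - never reuse text like this")
for i in range(3):
    child_key, _ = derive_child(key, chain, i)
    print(f"child {i}: {child_key.hex()[:16]}...")
```

Multi-sig adds the complementary property: spending requires signatures from more than one of several independent keys, so no single compromised device can empty the wallet.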

On waiting for Bitcoin to flourish in 2017

Antonopoulos then stated with unwavering certainty: “Give us two years. Now what happens when you throw 500 companies and 10,000 developers at the problem? Give (it) two years and you will see some pretty amazing things in bitcoin.”

On mining updates

Meanwhile, mining for bitcoins is proving more challenging than before. A Bitcoin mining facility in China, for instance, generates 4,050 bitcoins every month, equivalent to around $1.5 million, but not without repercussions and complexities. The entrepreneurs running the facility have found that as the difficulty level and the computing power required increase, the return on their effort gradually shrinks.

Typically, the entire mining operation uses about 1,250 kilowatt-hours of electricity, putting the factory’s electricity bill at about $80,000 every month. Nowadays its miners produce 20–25 bitcoins a day, significantly less than the roughly 100 bitcoins a day they previously mined.
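
Taking the article's own figures at face value, a little arithmetic shows what they imply; the snippet below simply restates the quoted numbers and derives the ratios from them, without adding any data of its own.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
monthly_btc = 4050             # bitcoins mined per month at the facility
monthly_revenue_usd = 1_500_000
electricity_bill_usd = 80_000  # per month

implied_price = monthly_revenue_usd / monthly_btc
electricity_share = electricity_bill_usd / monthly_revenue_usd

print(f"Implied bitcoin price: about ${implied_price:,.0f}")                 # ~$370
print(f"Electricity as a share of revenue: about {electricity_share:.0%}")   # ~5%

# Daily output has fallen from roughly 100 BTC to 20-25 BTC as difficulty rose:
print(f"Decline in daily output: about {1 - 22.5 / 100:.0%}")                # ~78%
```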

On leaving a thought

Thanks to Antonopoulos’ contagious exhilaration and resolute belief in its potential, confidence in Bitcoin’s bright future has been restored. Still, we can only wonder what the increasing difficulty of mining will mean for the cryptocurrency’s overall performance and future, even though Bitcoin’s distinctive features have so far proved strong and resilient enough to surpass such challenges.