Justice Beyond Privacy
As the old social bonds unravel, philosopher and member of the Lifeboat Foundation’s advisory board Professor Steve Fuller asks: can we balance free expression against security?
Justice has always been about modes of interconnectivity. Retributive justice – ‘eye for an eye’ stuff – recalls an age when kinship was how we related to each other. In the modern era, courtesy of the nation-state, bonds have been forged in terms of common laws, common language, common education, common roads, etc. The internet, understood as a global information and communication infrastructure, is both enhancing and replacing these bonds, resulting in new senses of what counts as ‘mine’, ‘yours’, ‘theirs’ and ‘ours’ – the building blocks of a just society…
Read the full article at IAI.TV
The Globalized Smartification and Changing for the Better Via Simultaneously Activated and Deactivated Kaizen!
Chiefly, this brief post is about the pictorial I composed here.
Pay close attention to this proprietary image. I greatly value Japanese execs and sages, but they focus only on throughputting* the known inputs into desirable outputs inside their premises, without considering the non-existential and existential risks of the external environment (outside their industrial facade) at large, as we do in the White Swan’s Transformative and Integrative Risk Management Services.
* By “throughputting” I mean the modus operandi (MO): the method by which inputs are converted into outputs.
Yes, they apply Kaizen within and beyond the assembly line, to HR and many other administrative facilities and operations. However, in Transformative and Integrative Risk Management we consider and implement, as a major sub-chapter, every available and most up-to-date tool from Quality Assurance and Continuous Improvement, most of the time to a “shock and awe” practical level, for the sake of sustainable corporate profit.
This is a real-life story, extremely summarized. I gave a granular, executive-level presentation to Toyota’s Board of Directors, including Mr. Noda (the Production Director). Step by step and sub-step by sub-step, in the greatest and smallest detail, I demonstrated to Toyota’s Chairman, CEO, Production Director, CFO and others on the Board that using Kaizen to manage risks holistically, as per the Western state-of-the-art understanding, was beyond ineffectual and inconsequential.
Of course, owing to extreme Japanese regionalism and single-mindedness, Mr. Noda assumed that I was too ignorant to know anything substantive about Kaizen, the Toyota Production System, and the American professors who had taught them their “stuff” through long consultative years.
When I first knocked on Toyota’s front-office door, I had spent twenty (20) years studying every advancement in business, management, and industry, clearly acknowledging every upside and every downside. In fact, I was first given a full-scope introduction to Japanese methodologies by Royal Dutch Shell. At Shell, as with Mr. Jiddu Krishnamurti, there is no foolish regionalism but frictionless globalization and globalized smartification: embracing any useful approach, regardless of geography, history, race, ethnicity or anything else, as long as it further underpins the global strategic bottom line, PERIOD!
I was heavily researching not just Toyota’s advancements and those of Japan’s corporate miracle of the 1980s, but absolutely everything regarding the countering of any form of direct or indirect disruption, both in the West and in the Far East.
Since Japan was top-notch and, in Mr. Noda’s schema, nobody in the West was doing anything meritorious, he found it stupid, time-wasting and unprofitable to even consider the methodologies, including those by NASA and well beyond, that I was relentlessly researching, nation by nation and industry by industry. Ergo, as my amazing father and Napoleon Bonaparte stated, “… I have only one counsel for you — be a master …”, much to the strategic surprise (Sputnik moment) of Toyota, Mr. Noda, and Mitsubishi Motors.
Mr. Noda was extremely infuriated with me but, despite him, the Chairman hired me and carried on with the emotional evenness of a wise and sage patriarch. Mr. Noda gave me a nickname that I will not disclose at this or any other time.
Other considerations pertaining to Kaizen, and its evolution well beyond that, I will comment on in due time.
By Andres Agostini
White Swan Author
www.linkedin.com/in/andresagostini
Why Apple’s Swift Language Will Instantly Remake Computer Programming
By Cade Metz — Wired
Chris Lattner spent a year and a half creating a new programming language—a new way of designing, building, and running computer software—and he didn’t mention it to anyone, not even his closest friends and colleagues.
He started in the summer of 2010, working at night and on weekends, and by the end of the following year, he’d mapped out the basics of the new language. That’s when he revealed his secret to the top executives at his company, and they were impressed enough to put a few other seasoned engineers on the project. Then, after another eighteen months, it became a “major focus” for the company, with a huge team of developers working alongside Lattner, and that meant the new language would soon change the world of computing. Lattner, you see, works for Apple.
Is the End of Moore’s Law Slowing the World’s Supercomputing Race?
Robert McMillan — Wired
Every six months, a team of supercomputing academics compiles a list of the most powerful computers on the planet. It’s called the Top500 list, and it has become a competition of sorts. National labs vie against universities, military facilities, NASA, and even temporary cloud-based supercomputers—all to see who’s building the world’s largest number-crunching machines.
This year, the machine at the top of the list is Tianhe-2, a Chinese system that can perform 33.86 quadrillion calculations per second. But here’s the thing. Tianhe-2 was on top back in November of 2013, and a year ago too. In fact, when you look at the top 10 machines on the June list, there’s only one new entry: an unidentified Cray supercomputer, operated by the U.S. government. It’s ranked tenth.
Is it possible to build an artificial superintelligence without fully replicating the human brain?
The technological singularity requires the creation of an artificial superintelligence (ASI). But does that ASI need to be modelled on the human brain, and is it even necessary to fully replicate the human brain and consciousness digitally in order to design an ASI?
Animal brains and computers don’t work the same way. Brains are massively parallel three-dimensional networks, while computers still process information in a very linear fashion, although millions of times faster than brains. Microprocessors can perform amazing calculations, far exceeding the speed and efficiency of the human brain, using completely different patterns to process information. The drawback is that traditional chips are not good at processing massively parallel data, solving complex problems, or recognizing patterns.
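The serial-versus-parallel contrast can be sketched in a few lines of code: the same set of weighted sums can be computed one multiply-accumulate at a time, as a scalar processor would, or expressed as a single matrix-vector product, the kind of operation parallel hardware evaluates in one shot. The weights and inputs below are arbitrary illustrative values.

```python
import numpy as np

def serial_weighted_sums(weights, inputs):
    """One multiply-accumulate at a time, like a scalar processor."""
    outputs = []
    for row in weights:
        total = 0.0
        for w, x in zip(row, inputs):
            total += w * x          # each step waits for the previous one
        outputs.append(total)
    return outputs

weights = [[0.2, -0.5, 1.0], [0.7, 0.1, -0.3]]
inputs = [1.0, 2.0, 3.0]

serial = serial_weighted_sums(weights, inputs)
# the same computation as one matrix-vector product, which parallel
# hardware (a GPU or a brain-like network) can evaluate all at once
parallel = np.asarray(weights) @ np.asarray(inputs)
```

Both versions produce identical results; the difference is only in how much of the work can happen simultaneously, which is exactly where brain-like architectures and traditional chips diverge.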
Newly developed neuromorphic chips model the massively parallel way the brain processes information, using, among other techniques, neural networks. Neuromorphic computers should ideally use optical technology, which can potentially process trillions of simultaneous calculations, making it possible to simulate a whole human brain.
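As an illustration of the kind of unit neuromorphic hardware implements in silicon, here is a minimal software sketch of a leaky integrate-and-fire neuron, a standard textbook model; the constants are illustrative and not taken from any particular chip.

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates incoming current, and fires when it crosses
# a threshold — then resets, like a biological neuron.

def lif_neuron(input_current, steps, dt=1.0, tau=10.0,
               threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the time steps at which it spiked."""
    v = v_reset
    spike_list = []
    for t in range(steps):
        v += dt * (-v / tau + input_current)  # leak + integration
        if v >= threshold:                    # fire and reset
            spike_list.append(t)
            v = v_reset
    return spike_list

# a constant driving current produces a regular spike train
spike_times = lif_neuron(input_current=0.15, steps=100)
```

With these toy constants the neuron settles into firing roughly every eleven steps; a neuromorphic chip runs millions of such units concurrently rather than simulating them one after another.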
The Blue Brain Project and the Human Brain Project, funded by the European Union, the Swiss government and IBM, are two such attempts to build a full computer model of a functioning human brain using a biologically realistic model of neurons. The Human Brain Project aims to achieve a functional simulation of the human brain by 2016.
Neuromorphic chips make it possible for computers to process sensory data, detect and predict patterns, and learn from experience. This is a huge advance in artificial intelligence, a step closer to creating an artificial general intelligence (AGI), i.e. an AI that could successfully perform any intellectual task that a human being can.
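The idea of learning from experience can be illustrated with the classic perceptron rule, which adjusts its weights only when it makes a mistake; this is a textbook example, not the algorithm of any specific neuromorphic system.

```python
# A single perceptron learning a binary pattern from repeated
# experience: every wrong prediction nudges the weights toward
# the correct answer.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias for two-input binary patterns."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            predicted = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - predicted   # nonzero only on mistakes
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# the logical AND pattern: output 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

After a handful of passes over the data the unit classifies the pattern correctly: nothing was programmed in except the learning rule itself, which is the sense in which such systems "learn from experience."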
Think of an AGI inside a humanoid robot: a machine that looks and behaves like us, but with customizable skills, one that can perform practically any task better than a real human. These robots could be self-aware and/or sentient, depending on how we choose to build them. Manufacturing robots wouldn’t need to be, but what about social robots living with us, taking care of the young, the sick or the elderly? Surely it would be nicer if they could converse with us as if they were conscious, sentient beings like us, a bit like the AI in Spike Jonze’s 2013 movie Her.
In the not-too-distant future, perhaps in less than two decades, such robots could replace humans in practically any job, creating a society of abundance where humans can spend their time however they like. In this model, highly capable robots would run the economy for us. Food, energy and most consumer products would be free or very cheap, and people would receive a fixed monthly allowance from the government.
This all sounds very nice. But what about an AI that would greatly surpass the brightest human minds? An artificial superintelligence (ASI), or strong AI (SAI), with the ability to learn and improve on itself, potentially becoming millions or billions of times more intelligent and capable than humans? The creation of such an entity would theoretically lead to the mythical technological singularity.
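The compounding effect of self-improvement can be sketched with a toy model: assume, purely for illustration, a fixed fractional capability gain per design cycle, and count how many cycles it takes to exceed the human baseline a million- or billion-fold. The 10% figure is an arbitrary assumption; the point is only that any constant gain compounds exponentially.

```python
# Toy model of recursive self-improvement: each cycle, the system
# improves its own capability by a fixed fraction of its current level.

def cycles_to_reach(target_ratio, gain_per_cycle=0.10):
    """Count improvement cycles until capability exceeds target_ratio."""
    capability = 1.0          # 1.0 = human-baseline intelligence
    cycles = 0
    while capability < target_ratio:
        capability *= 1.0 + gain_per_cycle
        cycles += 1
    return cycles

million_fold = cycles_to_reach(1e6)   # "millions of times" the baseline
billion_fold = cycles_to_reach(1e9)
```

Under this assumption a million-fold gap is closed in roughly 145 cycles and a billion-fold gap in only about 70 more, which is why compounding self-improvement is central to singularity arguments.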
Futurist and inventor Ray Kurzweil believes that the singularity will happen some time around 2045. Among Kurzweil’s critics is Microsoft cofounder Paul Allen, who believes that the singularity is still a long way off. Allen argues that for a real singularity-level computer intelligence to be built, the scientific understanding of how the human brain works will need to accelerate exponentially (like digital technologies), and that the process of original scientific discovery just doesn’t behave that way. He calls this issue the complexity brake.
Without interfering in the argument between Paul Allen and Ray Kurzweil (who replied convincingly here), the question I want to discuss is whether it is absolutely necessary to fully understand and replicate the way the human brain works to create an ASI.
GREAT INTELLIGENCE DOESN’T HAVE TO BE MODELLED ON THE HUMAN BRAIN
It is natural for us to think that humans are the culmination of intelligence, simply because it is the case in the biological world on Earth. But that doesn’t mean that our brain is perfect or that other forms of higher intelligence cannot exist if they aren’t based on the same model.
If extraterrestrial beings with greater intelligence than ours exist, it is virtually unthinkable that their brains would be shaped and function like ours. The process of evolution is so random and complex that even if life were created again on a planet identical to Earth, it wouldn’t unfold the same way as it did for us, and consequently the species wouldn’t be the same. What if the Permian-Triassic extinction, or any other mass extinction event, hadn’t occurred? We wouldn’t be here. But that doesn’t mean that other intelligent animals wouldn’t have evolved instead of us. Perhaps there would have been octopus-like creatures more intelligent than humans, with a completely different brain structure.
It’s pure human vanity and short-sightedness to think that everything good and intelligent has to be modelled on us. That is the kind of thinking that led to the development of religions with anthropomorphized gods. Humble or unpretentious religions like animism or Buddhism have either no human-like deity or no god at all. More arrogant or self-righteous religions, be they polytheistic or monotheistic, have typically imagined gods as superhumans. We don’t want to make the same mistake with artificial superintelligence. Greater-than-human intelligence does not have to be an inflated version of human intelligence, nor should it be based on our biological brains.
The human brain is the fortuitous result of four billion years of evolution. Or rather, it is one tiny branch in the grand tree of evolution. Birds have much smaller brains than mammals and are generally considered stupid compared to most mammals. Yet crows have reasoning skills that can exceed those of a preschooler. They display conscious, purposeful behaviour combined with a sense of initiative and elaborate problem-solving abilities of their own, and can even use tools. All this with a brain the size of a fava bean. A 2004 study from the departments of animal behavior and experimental psychology at the University of Cambridge claimed that crows were as clever as the great apes.
Clearly there is no need to replicate the intricacies of a human cortex to achieve consciousness and initiative. Intelligence does not depend only on brain size, the number of neurons, or cortex complexity, but also on the brain-to-body mass ratio. That is why cattle, which have brains as big as chimpanzees’, are stupider than ravens or mice.
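The brain-to-body relationship is often quantified with Jerison's encephalization quotient, EQ = brain mass / (0.12 × body mass^(2/3)), with masses in grams; an EQ above 1 means a larger brain than expected for the body size. The species masses below are rough textbook approximations, used only to illustrate the ranking discussed in the text.

```python
# Encephalization quotient (Jerison): how much bigger a brain is than
# the size expected for the animal's body mass.

def encephalization_quotient(brain_g, body_g):
    """EQ > 1 means a bigger brain than expected for the body size."""
    expected_brain = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected_brain

# approximate masses in grams (illustrative, not precise measurements)
species = {
    "human": (1350.0, 65_000.0),   # ~1.35 kg brain, ~65 kg body
    "cow": (450.0, 600_000.0),     # large brain in absolute terms
    "raven": (15.0, 1_200.0),      # tiny brain, but tiny body too
}

eq = {name: encephalization_quotient(*m) for name, m in species.items()}
```

Even with these rough numbers the raven comes out well ahead of the cow despite a brain thirty times smaller, which is the point: relative, not absolute, brain size tracks intelligence.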
But what about computers? Computers are pure “brains”. They don’t have bodies. And indeed, as computers get faster and more efficient, their size tends to decrease, not increase. This is yet another example of why we shouldn’t compare biological brains and computers.
As Ray Kurzweil explains in his reply to Paul Allen, learning about how the human brain works only serves to provide “biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. […] The way that these massively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does, however, enable systems to also learn from their own experience.” He then adds that IBM’s Watson learned most of its knowledge by reading on its own.
In conclusion, there is no rational reason to believe that an artificial superintelligence couldn’t come into being without being entirely modelled on the human brain, or any animal brain. A computer chip will never be the same as a biochemical neural network, and a machine will never feel emotions the same way as us (although they may feel emotions that are out of the range of human perception). But notwithstanding these differences, some computers can already acquire knowledge on their own, and will become increasingly good at it, even if they don’t learn exactly the same way as humans. Once given the chance to improve on themselves, intelligent machines could set in motion a non-biological evolution leading to greater than human intelligence, and eventually to the singularity.
————–
This article was originally published on Life 2.0.
Where are the real-world proven-track records of and by the White Swan Author, Mr. Andres Agostini?
What are four (4) solid real-life examples of risks that the White Swan Author has managed? Andres has many letterhead testimonials about those. See the following:
1.- World-class petroleum refineries whose risks Andres has managed; details at https://lifeboat.com/blog/2014/05/white-swan-oil-refineries
2.- World-class oil and gas tankers (maritime vessels) whose risks Andres has managed; details at https://lifeboat.com/blog/2014/05/white-swan-oil-gas-tankers
3.- World-class petroleum installations, equipment and hardware whose risks Andres has managed; details at https://lifeboat.com/blog/2014/05/white-swan-petroleum-installations
4.- Toyota and Mitsubishi Motors factories and installations whose risks Andres has managed; details at https://lifeboat.com/blog/2014/05/white-swan-cars
Net Neutrality & Government Hypocrisy on Web Freedom — @HJBentham
By Harry J. Bentham — More articles by Harry J. Bentham
Originally published on 22 May 2014 at Dissident Voice































