IBM wants to give people night-vision capabilities, and it plans to do so using Google Glass. The patented technique “tricks” the eyes with red light in order to increase visibility in low-light environments.

Upon entering a dark room, human eyes take time to adjust before they can see clearly. That’s because we have two types of photoreceptors: cones, which handle color and bright light, and rods, which let us see in the dark. It takes around 30 minutes for the rods to fully adjust to darkness.
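To make that timescale concrete, here is a minimal Python sketch that treats rod dark adaptation as a simple exponential approach to a dark-adapted threshold. The time constant and threshold values are illustrative assumptions, not measured figures.

```python
import math

# Toy model of rod dark adaptation: the detection threshold decays
# roughly exponentially toward its dark-adapted floor. All values
# below are illustrative assumptions, not measured data.
TAU_MIN = 6.0    # assumed rod time constant, in minutes
T_START = 1e4    # assumed initial threshold (arbitrary units)
T_FLOOR = 1.0    # assumed fully dark-adapted threshold

def threshold(t_min: float) -> float:
    """Detection threshold t_min minutes after the lights go out."""
    return T_FLOOR + (T_START - T_FLOOR) * math.exp(-t_min / TAU_MIN)

for t in (0, 5, 10, 20, 30):
    print(f"{t:2d} min: threshold = {threshold(t):8.1f}")
# By ~30 min (five assumed time constants), the remaining gap to the
# floor is under 1% of its initial size, consistent with the
# often-quoted half-hour adjustment period.
```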

Night vision is a very complicated biological process, but it seems that we may be able to tweak and enhance it, and we can do so without using genetic manipulation or any other equally invasive and transformative method. In fact, all we may need is glasses.

Read more

Most basic physics textbooks describe laser light in fairly simple terms: a beam travels directly from one point to another and, unless it strikes a mirror or other reflective surface, will continue traveling along an arrow-straight path, gradually expanding in size due to the wave nature of light. But these basic rules go out the window with high-intensity laser light.

Powerful laser beams, given the right conditions, will act as their own lenses and “self-focus” into a tighter, even more intense beam. University of Maryland physicists have discovered that these self-focused laser pulses also generate violent swirls of optical energy that strongly resemble smoke rings. In these donut-shaped light structures, known as “spatiotemporal optical vortices,” the light energy flows through the inside of the ring and then loops back around the outside.

The vortices travel along with the laser pulse at the speed of light and control the energy flow around it. The newly discovered optical structures are described in the September 9, 2016 issue of the journal Physical Review X.
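For a sense of the powers involved, here is a hedged back-of-envelope calculation using the standard Marburger estimate for the critical power of Kerr self-focusing; the nonlinear index assumed for air is an approximate literature value, not a figure taken from the paper.

```python
import math

# Sketch: critical power for Kerr self-focusing, using the common
# estimate P_cr = 3.77 * lambda^2 / (8 * pi * n0 * n2).
# The Kerr index n2 for air is an approximate assumed value.
WAVELENGTH = 800e-9   # m, a typical Ti:sapphire laser wavelength
N0 = 1.0              # linear refractive index of air (approx.)
N2 = 3e-23            # m^2/W, assumed nonlinear (Kerr) index of air

def critical_power(wavelength: float, n0: float, n2: float) -> float:
    """Power above which a Gaussian beam self-focuses (watts)."""
    return 3.77 * wavelength**2 / (8 * math.pi * n0 * n2)

p_cr = critical_power(WAVELENGTH, N0, N2)
print(f"P_cr ~ {p_cr / 1e9:.1f} GW")  # on the order of a few gigawatts
```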

Read more

Whether you believe the buzz about artificial intelligence is merely hype or that the technology represents the future, something undeniable is happening. Researchers are solving decades-old problems, like teaching computers to recognize images and understand speech, at a rapid pace, and companies like Google and Facebook are pouring millions of dollars into their own related projects.

What could possibly go wrong?

For one thing, advances in artificial intelligence could eventually lead to unforeseen consequences. University of California at Berkeley professor Stuart Russell is concerned that computers powered by artificial intelligence, or AI, could unintentionally create problems that humans cannot predict.

Read more

Synthetic biology is essentially an application of engineering principles to the fundamental molecular components of biology. Key to the process is the ability to design genetic circuits that reprogram organisms to do things like produce biofuels or excrete the precursors for pharmaceuticals, though whether this is commercially viable is another question.

MIT’s Jim Collins, one of the founders of synthetic biology, recently explained it to me as putting the engineering into genetic engineering.

“Genetic engineering is introducing a gene from species A to species B,” he said. “That’s the equivalent of replacing a red light bulb with a green light bulb. Synthetic biology is focused on designing the underlying circuitry expressing that red or green light bulb.”
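As an illustration of what such underlying circuitry can look like, here is a minimal Python sketch of a two-gene toggle switch, the kind of mutual-repression circuit associated with Collins’ early work; all parameter values are invented for illustration.

```python
# Toy genetic toggle switch: two repressors, each shutting off the
# other's gene. Parameters are illustrative, not from any paper.
ALPHA = 10.0   # assumed maximal synthesis rate
BETA = 2.0     # assumed Hill cooperativity
DT = 0.01      # Euler time step
STEPS = 5000

def simulate(u0: float, v0: float):
    """Integrate du/dt = a/(1+v^b) - u and dv/dt = a/(1+u^b) - v."""
    u, v = u0, v0
    for _ in range(STEPS):
        du = ALPHA / (1 + v**BETA) - u
        dv = ALPHA / (1 + u**BETA) - v
        u, v = u + DT * du, v + DT * dv
    return u, v

# Two different starting conditions settle into opposite stable
# states -- the bistability that makes the circuit a "switch".
print(simulate(5.0, 0.1))  # ends with gene 1 high, gene 2 low
print(simulate(0.1, 5.0))  # ends with gene 1 low, gene 2 high
```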

Read more

In preparation for writing a review of the Unabomber’s new book, I have gone through my files to find all the things I and others had said about this iconic figure when he struck terror in the hearts of technophiles in the 1990s. Along the way, I found this letter written to a UK Channel 4 producer on 26 November 1999 by way of providing material for a television show in which I participated called ‘The Trial of the 21st Century’, which aired on 2 January 2000. I was part of the team which said things were going to get worse in the 21st century.

What is interesting about this letter is just how similar ‘The Future’ still looks, even though the examples and perhaps some of the wording are now dated. It suggests that there is a way of living in the present that is indeed ‘future-forward’ in the sense of amplifying certain aspects of today’s world beyond the significance normally given to them. In this respect, the science fiction writer William Gibson quipped that the future is already here, only unevenly distributed. Indeed, it seems to have been here for quite a while.

Dear Matt,

Here is the sum of my ideas for the Trial of the 21st Century programme, stressing the downbeat:

Although the use of the internet is rapidly spreading throughout the world, it is also spreading at an alarmingly uneven rate, creating class divisions within nations much sharper than before. (Instead of access to the means of production, it is now access to the means of communication that is the cause of these divisions.) A good example is India, where most of the population continues to live in abject poverty (actually getting poorer relative to the rest of the world), while a Silicon Valley-style community thrives in Bangalore with close ties to the West and a growing scepticism toward India’s survival as a democracy that pretends to incorporate the interests of the entire country. (The BBC World Service did a story a couple of years ago after one of the elections, arguing that this emerging techno-middle-class, despite its Western ties, is amongst those most likely to accept the rule of a dictator who could do a ‘Mussolini’ and make the trains run on time, and otherwise protect the interests of these nouveaux riches, etc.) In this respect, the spread of the internet to the Third World is actually a politically destabilizing force, creating the possibility of a new round of authoritarian regimes. This tendency is compounded by a general decline of the welfare-state mentality, so that these new dictators wouldn’t even need to pay lip service to taking care of the masses, as long as the middle classes are given preferential tax rates, etc.

But even in the West, easy access to the internet has politically unsavoury consequences. As more people depend on the internet as a provider of goods, information, entertainment, etc., and regulation of the net is devolved into many commercial hands, it will be increasingly tempting for techno-terrorists to strike by corrupting, stealing, and recoding the materials stored there. In other words, we should see a new generation of people who are the spiritual offspring of the Unabomber and the average mischievous hacker. Indeed, many of these people may be motivated by a populist, democratic sentiment associated with a particular ethnic or cultural group that is otherwise ‘info-poor’. Such techno-terrorism is likely to be effective when the offending Western parties are far from the offended peoples – one wouldn’t need to smuggle people and arms into Heathrow; one could just push the delete button 5000 miles away… I am frankly surprised that the major stock exchanges and the air traffic control system haven’t yet been sabotaged, considering how easy it is for major disruptions to occur even without people trying very hard. These two computerized systems are prime candidates because the people most directly affected are likely to be relatively well-heeled. In contrast, sabotaging various military defence systems could lead to the death of millions of already disadvantaged people, so I doubt that they would be the target of techno-terrorists (though they may be the target of a sociopathic hacker…)

One seemingly good feature of our emerging networked world is that we can customize our consumption better than ever. However, this customization means that we are providing more of our details to sources capable of exploiting them – not only through marketing, but also through surveillance. In this respect, remarks about the ‘interactivity’ of the internet should be seen as implying that others may be able to ‘see through’ you while you are merely ‘looking at’ them. While this opens up the possibility of government censorship, a bigger threat may be the way in which access to certain materials may be ‘implicitly regulated’ by the ‘invisible hand’ of website hits. Thus, if a site gets a consistently large number of hits, it may suddenly start charging a pay-per-view fee, whereas those getting few hits may simply be taken off cyberspace by commercial servers. This could have especially pernicious consequences for the amount and type of news available (think about what sorts of stories would be expensive to access if news coverage were entirely consumer-driven), as well as for on-line distance learning courses.

Here we see the dark side of the ‘user-friendliness’ of the net: it basically mimics and reinforces what we already do until we get locked in. (In other words: spontaneous preferences are turned into prejudices and perhaps even addictions.) In the past, government and even businesses saw themselves in the role of educating or, in some other way, challenging people to change their habits. But this is no longer necessary, and may even be inconvenient as a means to a docile citizenry. (Aldous Huxley’s Brave New World was ahead of the curve here.)

There are also some problems arising from advances in biotechnology:
1. As we learn more about people’s genetic makeup, that information will become part of the normal ways we account for ourselves – especially in legal settings. For example, you may be guilty of alcohol-related offences even if you are below the ‘legal limit’, if it’s shown that you’re genetically predisposed to get drunk easily. (Judges have already made such rulings in the US.) Ironically, then, although we have no say in our genetic makeup, we will be expected not only to know it, but also to take responsibility for it.
2. In addition, while our personal genetic information will be generally available (e.g. used by insurance companies to set premiums), it may also be patented, since intellectual property legislation seems to allow the patenting of substances that already exist in nature as long as the means of production is artificial (e.g. biochemical synthesis of genetic material for medical treatments).
3. This fine-grained genetic information will refuel the fires of the politics of discrimination, in both its negative and positive extremes: i.e. those who want to take a distinctive genetic pattern as the basis of extermination or valorization. (A good case in point is the drive to recognize homosexuality as genetically based: both pro- and anti-gay groups seem to embrace this line, even though it could mean either preventing the birth of gay children or accepting gayness as a normal tendency in humanity.)

Finally, there are some general problems with the future of knowledge production:
1. It will become increasingly difficult to find support – both intellectual and financial – for critical work that aims to overturn existing assumptions and open up new lines of inquiry. This is because current lines of research – especially on the experimentally driven side of the natural sciences – have already invested so much money, personnel, and other resources that to suggest that, say, high-energy physics is intellectually bankrupt, or that the human genome project isn’t telling us much more than we already know, would amount to throwing lots of people out of work, ruining reputations, and perhaps even causing a general backlash against science in society at large (since public conceptions of science are so closely tied to these high-profile projects).
2. Traditionally, radical ideas have been promoted in science – at least in part – because the research behind the ideas did not cost much to do, and not much was riding on who was ultimately correct. However, this idyllic state of affairs ended with World War II. Indeed, it has gotten so bad – and will get worse in the future – that one can speak of a kind of ‘financial censorship’ in science. For example, Peter Duesberg, a pioneering retrovirus researcher, lost his grants from the US National Institutes of Health because he publicly denied the HIV-AIDS link. One result of this financial censorship is that radical researchers will migrate to private funders who are willing to take some risks: e.g. cold fusion research continues today in this fashion. The big downside of this possibility, though, is that if this radical research does bear fruit, it’s likely to become the intellectual property of the private funder and not necessarily used for the public good.

I hope you find these remarks helpful. Leave a message at … when you’re able to talk.

Yours,

Steve

[Image: model of the human genome.]

A special nutrient must be fed to these bacteria or else they die off. Unless they find this selfsame nutrient in the environment, which Church says is unlikely, they would not be able to survive. Another fail-safe is a special barrier erected to make it impossible for the bacteria to mate or reproduce outside the lab. But other experts wonder how “unbeatable” Church’s fail-safes actually are. Carr says that instead of discussing these measures as foolproof, we should frame them in terms of degrees of risk.
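Carr’s framing can be made concrete with a toy calculation: if the fail-safes are independent, the per-cell escape probability is the product of the individual failure rates. All numbers below are invented for illustration only.

```python
# Back-of-envelope sketch of "degrees of risk" for biocontainment.
# Every rate here is an invented placeholder, not a measured value.
P_NUTRIENT_BYPASS = 1e-9  # assumed chance a mutant evades the nutrient dependency
P_BARRIER_BYPASS = 1e-6   # assumed chance the reproductive barrier fails
POPULATION = 1e12         # assumed number of cells in a lab culture

# Independent fail-safes: escape requires beating both at once.
p_escape = P_NUTRIENT_BYPASS * P_BARRIER_BYPASS
expected_escapees = POPULATION * p_escape

print(f"Per-cell escape probability: {p_escape:.0e}")
print(f"Expected escapees in the culture: {expected_escapees:.0e}")
# The point is not these specific numbers but that the answer is a
# probability, never a guarantee -- "foolproof" is the wrong frame.
```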

The next step is further testing of the artificial genes that have been made. Afterward, Church and colleagues will take this same genome and produce an entirely new organism with it. Since DNA is the essential blueprint for almost all life on Earth, being able to rewrite it could give humans an almost god-like power over it. That capability is perhaps decades away. Even so, combined with gene editing and modification, the idea of a race of superhumans is not outside the realm of possibility.

Read more