
The Singularity Institute for Artificial Intelligence has announced the details of The Singularity Summit 2008. The event will be held October 25, 2008 at the Montgomery Theater in San Jose, California. Previous summits have featured Nick Bostrom, Eric Drexler, Douglas Hofstadter, Ray Kurzweil, and Peter Thiel.

Keynote speakers include Ray Kurzweil, author of The Singularity is Near, and Justin Rattner, CTO of Intel. At the Intel Developer Forum on August 21, 2008, Rattner explained why he thinks the gap between humans and machines will close by 2050. “Rather than look back, we’re going to look forward 40 years,” said Rattner. “It’s in that future where many people think that machine intelligence will surpass human intelligence.”

Other featured speakers include:

  • Dr. Ben Goertzel, CEO of Novamente, director of research at SIAI
  • Dr. Marvin Minsky
  • Nova Spivack, CEO of Radar Networks, creator of Twine.com
  • Dr. Vernor Vinge
  • Eliezer Yudkowsky

You can find a comprehensive list of other upcoming Singularity and Artificial Intelligence events here.

Something to post to your websites, and a vote to cast online.

Aubrey de Grey can get $1.5 million for the Methuselah Foundation if enough people vote.

Voting ends September 1st; take a second to vote now.
Any US Amex cardmember or US resident (who makes a guest account) can vote.

Here is the page where you can vote (click “nominate”).

The Methuselah Foundation page has some more details if you are interested; to vote, though, you need only click the link above…

The UK’s Guardian today published details of a report produced by Britain’s Security Service (MI5) entitled ‘Understanding radicalization and violent extremism in the UK’. The report is from MI5’s internal behavioral analysis unit and contains some interesting and surprising conclusions. The Guardian covers many of these in depth (so there is no need to go over them here), but one point worth highlighting is the report’s claim that religion was not a contributory factor in the radicalization of the home-grown terrorist threat that the UK faces. In fact, the report goes on to state that a strong religious faith protects individuals from the effects of extremism. This viewpoint is one that is gathering strength, and it coincides with an article written by Martin Amis in the Wall Street Journal, which also argues that ‘terrorism’s new structure’ is about the quest for fame and the thirst for power, with religion simply acting as a “means of mobilization”.

All of this also tends to agree with the assertion made by Philip Bobbitt in ‘Terror and Consent’ that al-Qaeda is simply version 1.0 of a new type of terrorism for the 21st century. This type of terrorism is attuned to the advantages and pressures of a market-based world and acts more like a Silicon Valley start-up company than the Red Brigades — being flexible, fast-moving and wired — taking advantage of globalization to pursue a violent agenda.

All of this raises the question: what next? If al-Qaeda is version 1.0, what is 2.0? This is of course hard to discern, but looking at two certain trends that will shape humanity over the next 20 years — urbanization and virtualization — throws up some interesting potential opponents who are operating today. The road to mass urbanization is currently being highlighted by the 192021 project (19 cities, 20 million people in the 21st century), which among other things points to the large role of slum areas in growing the cities of the 21st century. Slum areas are today being exploited globally, from Delhi to Sao Paulo, by Nigerian drug organizations that are able to recruit the indigenous people to build their own cities within cities. This kind of highly profitable criminal activity in areas beyond the vision of government is a disturbing incubator.

Increased global virtualization complements urbanization as well as standing alone. Virtual environments provide a useful platform for any kind of real-life extremist (as is now widely accepted), but it is the formation of groups within virtual spaces that then spill out into real-space that could become a significant feature of the 21st century security picture. This is happening with ‘Project Chanology’, a group that was formed virtually with some elements of the Anonymous movement in order to disrupt the Church of Scientology. While Project Chanology (WhyWeProtest website) began as a series of cyber actions directed at Scientology’s website, it is now organizing legal protests of Scientology buildings. A shift from the virtual to the real. A more sinister take on this is the alleged actions of the Patriotic Nigras — a group dedicated to the disruption of Second Life — which has reportedly taken to using the tactic of ‘swatting’, the misdirection of armed police officers to a victim’s home address. A disturbing spill-over into real-space. Therefore, whatever pattern future terrorist movements follow, there are signs that religion will play a peripheral rather than central role.

Originally posted on the Counterterrorism blog.

Researchers from Imperial College in London, England, isolated the receptor in the lungs that triggers the immune overreaction to flu.

With the receptor identified, a therapy can be developed that will bind to the receptor, preventing the deadly immune response. Also, by targeting a receptor in humans rather than a particular strain of flu, therapies developed to exploit this discovery would work regardless of the rapid mutations that beguile flu vaccine producers every year.

The flu kills 250,000 to 500,000 people in an average year, with epidemics reaching 1 to 2 million deaths (other than the Spanish flu, which was more severe).

This discovery could lead to treatments which turn off the inflammation in the lungs caused by influenza and other infections, according to a study published today in the journal Nature Immunology. The virus is often cleared from the body by the time symptoms appear, and yet symptoms can last for many days, because the immune system continues to fight the damaged lung. The immune system is essential for clearing the virus, but if its response is not quickly contained it can overreact and damage the body.

The immune overreaction accounts for the high percentage of young, healthy people who died in the vicious 1918 flu pandemic. While the flu usually kills the very young or the sickly and old, the pandemic flu provoked healthy people’s stronger immune systems to react even more profoundly than usual, exacerbating the symptoms and ultimately causing between 50 and 100 million deaths world wide. These figures from the past make the new discovery that much more important, as new therapies based on this research could prevent a future H5N1 bird flu pandemic from turning into a repeat of the 1918 Spanish flu.

In the new study, the researchers gave mice infected with influenza a mimic of CD200, or an antibody to stimulate CD200R, to see if these would enable CD200R to bring the immune system under control and reduce inflammation.

The mice that received treatment had less weight loss than control mice and less inflammation in their airways and lung tissue. The influenza virus was still cleared from the lungs within seven days and so this strategy did not appear to affect the immune system’s ability to fight the virus itself.

The researchers hope that in the event of a flu pandemic, such as a pandemic of H5N1 avian flu that had mutated to be transmissible between humans, the new treatment would add to the current arsenal of anti-viral medications and vaccines. One key advantage of this type of therapy is that it would be effective even if the flu virus mutated, because it targets the body’s overreaction to the virus rather than the virus itself.

In addition to the possible applications for treating influenza, the researchers also hope their findings could lead to new treatments for other conditions where excessive immunity can be a problem, including other infectious diseases, autoimmune diseases and allergy.

Cross posted from Next Big Future by Brian Wang, Lifeboat Foundation Director of Research

I am presenting disruption events for humans and also for biospheres and planets, and, where I can, correlating them with historical frequency and scale.

There has been previous work on categorizing and classifying extinction events: there is Bostrom’s paper, and there is also the work by Jamais Cascio and Michael Anissimov on classification and identifying risks (presented below).

A recent article discusses the inevitable “end of societies” (it refers to civilizations, but it seems to be referring more to things like the end of the Roman Empire, which still ends up later with Italy, Austria-Hungary, etc. emerging).

The theories around complexity seem to me to be that core developments along connected S-curves of technology and societal processes cap out (around key areas of energy, transportation, governing efficiency, agriculture, production), and then a society falls back (a soft or hard dark age), reconstitutes, and starts back up again.

Here is a wider range of disruptions, which can also be correlated with the frequency at which they have occurred historically.

High growth dropping to low growth (short business cycles, every few years)
Recession (soft or deep), every five to fifteen years
Depressions (every 50–100 years, can be more frequent)

List of recessions for the USA (includes depressions)

Differences recession/depression

A good rule of thumb for determining the difference between a recession and a depression is to look at the changes in GDP. A depression is any economic downturn where real GDP declines by more than 10 percent; a recession is an economic downturn that is less severe. By this yardstick, the last depression in the United States was from May 1937 to June 1938, when real GDP declined by 18.2 percent. The Great Depression of the 1930s can be seen as two separate events: an incredibly severe depression lasting from August 1929 to March 1933, during which real GDP declined by almost 33 percent; a period of recovery; then another, less severe depression in 1937–38. (Depressions occur every 50–100 years, and were more frequent in the past.)
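The rule of thumb above can be sketched as a tiny classifier; a minimal sketch in Python (the function name is illustrative, while the 10 percent threshold and the historical figures come from the text):

```python
def classify_downturn(real_gdp_decline_pct):
    """Rule of thumb from the text: a real-GDP decline of more than
    10 percent is a depression; anything less severe is a recession."""
    return "depression" if real_gdp_decline_pct > 10 else "recession"

# Episodes mentioned above:
classify_downturn(33.0)   # Aug 1929 - Mar 1933 -> "depression"
classify_downturn(18.2)   # May 1937 - Jun 1938 -> "depression"
classify_downturn(3.0)    # a typical downturn  -> "recession"
```

By this yardstick both 1930s episodes classify as depressions, while any milder downturn counts as a recession.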

Dark age (period of societal collapse, soft/light or regular)
I would say the difference between a long recession and a dark age has to do with a breakdown of societal order, some level of population decline or die-back, and a loss of knowledge or breakdown of education. (Once per thousand years.)

I would say that a soft dark age is also something like what China had from the 1400s to 1970: basically a series of really bad societal choices. Maybe it is something between a depression and a dark age, or something that does not categorize as neatly, but an underperformance by twenty times versus competing groups. Perhaps there should be some kind of scale of societal disorder, with levels and categories of major society-wide screw-ups — historic-level mistakes. The Chinese experience, I think, was triggered by the renunciation of the ocean-going fleet and of outside ideas and technology, plus a lot of follow-on screw-ups.

Plagues played a part in weakening the Roman and Han empires.

A societal collapse talk, which includes Toynbee’s analysis.

Toynbee argues that the breakdown of civilizations is not caused by loss of control over the environment, over the human environment, or by attacks from outside. Rather, it comes from the deterioration of the “Creative Minority,” which eventually ceases to be creative and degenerates into merely a “Dominant Minority” (which forces the majority to obey without meriting obedience). He argues that creative minorities deteriorate due to a worship of their “former self,” by which they become prideful and fail to adequately address the next challenge they face.

My take is that the Enlightenment would be strengthened by a larger creative majority, where everyone has a stake in, and the capability to, creatively advance society. I have an article about who the elite are now.

Many now argue that the Dark Ages were not as bad as commonly believed.
The Dark Ages are also called the Middle Ages.

Population during the middle ages

Between dark age/societal collapse and extinction, there are levels of decimation/devastation (using orders of magnitude: 90+%, 99%, 99.9%, 99.99%).

Level 1 decimation = 90% population loss
Level 2 decimation = 99% population loss
Level 3 decimation = 99.9% population loss

Level 9 population loss would pretty much be extinction for current human civilization: only 6–7 people left, or fewer, which would not be a viable population.
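The levels above follow an order-of-magnitude pattern: at level n, a fraction 10⁻ⁿ of the population survives. A minimal sketch, where the function name and the 6.7-billion starting population are illustrative assumptions:

```python
def survivors(population, level):
    """Level-n decimation: 10**-n of the population survives
    (level 1 = 90% loss, level 2 = 99% loss, and so on)."""
    return population * 10 ** (-level)

survivors(6.7e9, 1)   # level 1: ~670 million survivors
survivors(6.7e9, 9)   # level 9: ~7 survivors, effectively extinction
```

The level-9 result is where the “only 6–7 people left” figure above comes from.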

Can be regional or global, some number of species (for decimation)

Categorizations of Extinctions, end of world categories

Can be regional or global, some number of species (for extinctions)

Mass extinction events have occurred in the past to other species (for each species there can be only one extinction event): the dinosaurs, and many others.

Unfortunately Michael’s Accelerating Future blog is having some issues, so here is a cached link.

Michael was identifying man-made risks. The Easier-to-Explain Existential Risks (remember, an existential risk is something that can set humanity way back, not necessarily killing everyone):

1. neoviruses
2. neobacteria
3. cybernetic biota
4. Drexlerian nanoweapons

The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it’s easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:

5. runaway self-replicating machines (“grey goo” not recommended because this is too narrow a term)
6. destructive takeoff initiated by intelligence-amplified human
7. destructive takeoff initiated by mind upload
8. destructive takeoff initiated by artificial intelligence

Another classification scheme: the eschatological taxonomy by Jamais Cascio on Open the Future. His classification scheme has seven categories, one with two sub-categories. These are:

0: Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
1: Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
2: Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
3a: Human Extinction-Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
3b: Human Extinction-Natural (examples: major asteroid impact, methane clathrates melt)
4: Biosphere Extinction (examples: massive asteroid impact, “iceball Earth” reemergence, late-era molecular nanotech warfare)
5: Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
X: Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)
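Cascio’s scheme is essentially a lookup table from category code to label; a sketch as data (the variable and function names are illustrative):

```python
# Jamais Cascio's eschatological taxonomy: category code -> label.
ESCHATOLOGICAL_TAXONOMY = {
    "0":  "Regional Catastrophe",
    "1":  "Human Die-Back",
    "2":  "Civilization Extinction",
    "3a": "Human Extinction-Engineered",
    "3b": "Human Extinction-Natural",
    "4":  "Biosphere Extinction",
    "5":  "Planetary Extinction",
    "X":  "Planetary Elimination",
}

def describe(code):
    """Return the label for a category code, e.g. describe("3a")."""
    return ESCHATOLOGICAL_TAXONOMY[code]
```

The two “3” sub-categories distinguish engineered from natural causes of the same outcome, which is why the codes are strings rather than numbers.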

A couple of interesting posts about historical threats to civilization and life by Howard Bloom.

Natural climate shifts, and threats from space (not asteroids but interstellar gases).

Humans are not the most successful life; bacteria are the most successful. Bacteria have survived for 3.85 billion years; humans for about 100,000 years. All other kinds of life have lasted no more than 160 million years. [Other species have only managed to hang in there for anywhere from 1.6 million years to 160 million. We humans are one of the shortest-lived natural experiments around. We’ve been here in one form or another for a paltry two and a half million years.] If your numbers are not big enough and you are not diverse enough, then something in nature eventually wipes you out.

Following the bacteria survival model could mean using transhumanism as a survival strategy: creating more diversity to allow for better survival, with humans adapted to living under the sea, deep in the earth, and in various niches in space, with more radiation resistance, non-biological forms, etc. It would also mean spreading into space (panspermia). Individually, using technology, we could become very successful at life extension, but it will take more than that for a good long-term survival plan for human civilization, society, and the species.

Other periodic challenges:
142 mass extinctions, 80 glaciations in the last two million years, a planet that may have once been a frozen iceball, and a klatch of global warmings in which the temperature has soared by 18 degrees in ten years or less.

In the last 120,000 years there were 20 interludes in which the temperature of the planet shot up 10 to 18 degrees within a decade. Until just 10,000 years ago, the Gulf Stream shifted its route every 1,500 years or so. This would melt mega-islands of ice, put our coastal cities beneath the surface of the sea, and strip our farmlands of the conditions they need to produce the food that feeds us.

The solar system has a 240-million-year-long orbit around the center of our galaxy, an orbit that takes us through interstellar gas clusters called local fluff — interstellar clusters that strip our planet of its protective heliosphere, interstellar clusters that bombard the earth with cosmic radiation, and interstellar clusters that trigger giant climate change.

[Crossposted from the blog of Starship Reckless]

Views of space travel have grown increasingly pessimistic in the last decade. This is not surprising: SETI still has received no unambiguous requests for more Chuck Berry from its listening posts, NASA is busy re-inventing flywheels and citizens even of first-world countries feel beleaguered in a world that seems increasingly hostile to any but the extraordinarily privileged. Always a weathervane of the present, speculative fiction has been gazing more and more inwardly – either to a hazy gold-tinted past (fantasy, both literally and metaphorically) or to a smoggy rust-colored earthbound future (cyberpunk).

The philosophically inclined are slightly more optimistic. Transhumanists, the new utopians, extol the pleasures of a future when our bodies, particularly our brains/minds, will be optimized (or at least not mind that they’re not optimized) by a combination of bioengineering, neurocognitive manipulation, nanotech and AI. Most transhumanists, especially those with a socially progressive agenda, are as decisively earthbound as cyberpunk authors. They consider space exploration a misguided waste of resources, a potentially dangerous distraction from here-and-now problems – ecological collapse, inequality and poverty, incurable diseases among which transhumanists routinely count aging, not to mention variants of gray goo.

And yet, despite the uncoolness of space exploration, despite NASA’s disastrous holding pattern, there are those of us who still stubbornly dream of going to the stars. We are not starry-eyed romantics. We recognize that the problems associated with spacefaring are formidable (as examined briefly in Making Aliens 1, 2 and 3). But I, at least, think that improving circumstances on earth and exploring space are not mutually exclusive, either philosophically or – perhaps just as importantly – financially. In fact, I consider this a false dilemma. I believe that both sides have a much greater likelihood to implement their plans if they coordinate their efforts, for a very simple reason: the attributes required for successful space exploration are also primary goals of transhumanism.

Consider the ingredients that would make an ideal crewmember of a space expedition: robust physical and mental health, biological and psychological adaptability, longevity, ability to interphase directly with components of the ship. In short, enhancements and augmentations eventually resulting in self-repairing quasi-immortals with extended senses and capabilities – the loose working definition of transhuman.

Coordination of the two movements would give a real, concrete purpose to transhumanism beyond the rather uncompelling objective of giving everyone a semi-infinite life of leisure (without guarantees that either terrestrial resources or the human mental and social framework could accommodate such a shift). It would also turn the journey to the stars into a more hopeful proposition, since it might make it possible that those who started the journey could live to see planetfall.

Whereas spacefaring enthusiasts acknowledge the enormity of the undertaking they propose, most transhumanists take it as an article of faith that their ideas will be realized soon, though the goalposts keep receding into the future. As more soundbite than proof, they invoke Moore’s exponential law, equating stodgy silicon with complex, contrary carbon. However, despite such confident optimism, enhancements will be hellishly difficult to implement. This stems from a fundamental constraint that cannot be short-circuited or evaded: no matter how many experiments are performed on mice or even primates, humans have enough unique characteristics that optimization will require human subjects.

Contrary to the usual supposition that the rich will be the first to cross the transhuman threshold, it is virtually certain that the frontline will consist of the desperate and the disenfranchised: the terminally ill, the poor, prisoners and soldiers – the same people who now try new chemotherapy or immunosuppression drugs, donate ova, become surrogate mothers, “agree” to undergo chemical castration or sleep deprivation. Yet another pool of early starfarers will be those whose beliefs require isolation to practice, whether they be Raëlians or fundamentalist monotheists – just as the Puritans had to brave the wilderness and brutal winters of Massachusetts to set up their Shining (though inevitably tarnished) City on the Hill.

So the first generation of humans adjusted to starship living are far likelier to resemble Peter Watts’ marginalized Rifters or Jay Lake’s rabid Armoricans than the universe-striding, empowered citizens of Iain Banks’ Culture. Such methods and outcomes will not reassure anyone, regardless of her/his position on the political spectrum, who considers augmentation hubristic, dehumanizing, or a threat to human identity, equality or morality. The slightly less fraught idea of uploading individuals into (ostensibly) more durable non-carbon frames is not achievable, because minds are inseparable from the neurons that create them. Even if technological advances eventually enable synapse-by-synapse reconstructions, the results will be not transfers but copies.

Yet no matter how palatable the methods and outcomes are, it seems to me that changes to humans will be inevitable if we ever want to go beyond the orbit of Pluto within one lifetime. Successful implementation of transhumanist techniques will help overcome the immense distances and inhospitable conditions of the journey. The undertaking will also bring about something that transhumanists – not to mention naysayers – tend to dread as a danger: speciation. Any significant changes to human physiology (whether genetic or epigenetic) will change the thought/emotion processes of those altered, which will in turn modify their cultural responses, including mating preferences and kinship patterns. Furthermore, long space journeys will recreate isolated breeding pools with divergent technology and social mores (as discussed in Making Aliens 4, 5 and 6).

On earth, all “separate but equal” doctrines have wrought untold misery and injustice, whether those segregated are genders in countries practicing Sharia, races in the American or African South, or the underprivileged in any nation that lacks decent health policies, adequate wages and humane laws. Speciation of humanity on earth bids fair to replicate this pattern, with the ancestral species (us) becoming slaves, food, zoo specimens or practice targets to our evolved progeny, Neanderthals to their Cro-Magnons, Eloi to their Morlocks. On the other hand, speciation in space may well be a requirement for success. Generation of variants makes it likelier that at least one of our many future permutations will pass the stringent tests of space travel and alight on another habitable planet.

Despite their honorable intentions and progressive outlook, if the transhumanists insist on first establishing a utopia on earth before approving spacefaring, they will achieve either nothing or a dystopia as bleak as that depicted in Paolo Bacigalupi’s unsparing stories. If they join forces with the space enthusiasts, they stand a chance to bring humanity through the Singularity some of them so fervently predict and expect – except it may be a Plurality of sapiens species and inhabited worlds instead.

I was born into a world in which no individual or group claimed to own the mission embodied in the Lifeboat Foundation’s two-word motto. Government agencies, charitable organizations, universities, hospitals, religious institutions — all might have laid claim to some piece of the puzzle. But safeguarding humanity? That was out of everyone’s scope. It would have been a plausible motto only for comic-book organizations such as the Justice League or the Guardians of the Universe.

Take the United Nations, conceived in the midst of the Second World War and brought into its own after the war’s conclusion. The UN Charter states that the United Nations exists:

  • to save succeeding generations from the scourge of war, which twice in our lifetime has brought untold sorrow to mankind, and
  • to reaffirm faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women and of nations large and small, and
  • to establish conditions under which justice and respect for the obligations arising from treaties and other sources of international law can be maintained, and
  • to promote social progress and better standards of life in larger freedom

All of these are noble, and incredibly important, aims. But even the United Nations manages to name only one existential risk, warfare, which it is pledged to help prevent. Anyone reading this can probably cite a half dozen more.

It is both exciting and daunting to live in an age in which a group like the Lifeboat Foundation can exist outside of the realm of fantasy. It’s exciting because our awareness of possibility is so much greater than it was even a generation or two ago. And it is daunting for exactly the same reason. We can envision plausible triumphs for humanity that really do transcend our wildest dreams, or at least our most glorious fantasies as articulated a few decades ago. Likewise, that worst of all possible outcomes — the sudden and utter disappearance of our civilization, or of our species, or of life itself — now presents itself as the end result of not just one possible calamity, but of many.

I’ve spent the last few years writing about many of those plausible triumphs, while paying less attention to the possible calamities. But I’m not sure that this is a clear-cut dichotomy. Pursuing the former may ultimately provide us with the tools and resources we will need to contend with the latter. So my own personal motto becomes something of a double-edged sword. I encourage everyone to strive to “live to see it.” But maybe we also need to figure out how we can see it…to live.

With that in mind, perhaps “safeguarding humanity” takes on a double meaning, too. We must find a way for humanity to survive in the face of these very real threats. Moreover, we must find a way for humanity — the values, the accomplishments, the sense of purpose which has defined the entire human experience — to survive. And that may be the most audacious mission statement of all.

Stephen Gordon and I will be interviewing the Lifeboat Foundation’s International Spokesperson Philippe Van Nedervelde on our podcast, FastForward Radio, on Feb 17, 2008 at 7:00 PM Pacific / 10:00 PM Eastern. We’ll be talking about risks and the role of Lifeboat in helping to mitigate them.

Last year, the Singularity Institute raised over $500,000. The World Transhumanist Association raised $50,000. The Lifeboat Foundation set a new record for the single largest donation. The Center for Responsible Nanotechnology’s finances are combined with those of World Care, a related organization, so the public can’t get precise figures. But overall, it’s safe to say, we’ve been doing fairly well. Most not-for-profit organizations aren’t funded adequately; it’s rare for charities, even internationally famous ones, to have a large full-time staff, a physical headquarters, etc.

The important question is, now that we’ve accumulated all of this money, what are we going to spend it on? It’s possible, theoretically, to put it all into Treasury bonds and forget about it for thirty years, but that would be an enormous waste of expected utility. In technology development, the earlier the money is spent (in general), the larger the effect will be. Spending $1M on a technology in the formative stages has a huge impact, probably doubling the overall budget or more. Spending $1M on a technology in the mature stages won’t even be noticed. We have plenty of case studies: Radios. TVs. Computers. Internet. Telephones. Cars. Startups.

The opposite danger is overfunding the project, commonly called “throwing money at the problem”. Hiring a lot of new people without thinking about how they will help is one common symptom. Having bloated layers of middle management is another. To an outside observer, it probably seems like we’re reaching this stage already. Hiring a Vice President In Charge Of Being In Charge doesn’t just waste money; it causes the entire organization to lose focus and distracts everyone from the ultimate goal.

I would suggest a top-down approach: start with the goal, figure out what you need, and get it. The opposite approach is to look for things that might be useful, get them, then see how you can complete a project with the stuff you’ve acquired. NASA is an interesting case study, as it followed the first strategy for a number of years, then switched to the second.

The second strategy is useful at times, particularly when the goal is constantly changing. Paul Graham suggests using it as a strategy for personal success, because the ‘goal’ is changing too rapidly for any fixed plan to remain viable. “Personal success” in 2000 is very different from “success” in 1980, which was different from “success” in 1960. If Kurzweil’s graphs are accurate, “success” in 2040 will be so alien that we won’t even be able to recognize it.

But when the goal is clear (save the Universe, create an eternal utopia, develop new technology X), you simply need to smash through whatever problems show up. Apparently, money has been the main blocker for some time, and it looks like we’ve overcome that (in the short term) through large-scale fundraising. There’s a large body of literature out there on how to deal with organizational problems; thousands of people have done this stuff before. I don’t know what the main blocker is now, but odds are it’s in there somewhere.

Cross posted from Next big future

Since a journal article was submitted to the Royal Society of Chemistry, the U of Alberta researchers have already made the processor and unit smaller and have brought the cost of building a portable unit for genetic testing down to about $100 Cdn. These systems are also portable and even faster (they take only minutes). Backhouse, Elliott and McMullin are now demonstrating prototypes of a USB-key-like system that may ultimately be as inexpensive as the standard USB memory keys in common use – only tens of dollars. It could help with pandemic control and with detecting and controlling tainted water supplies.

This development fits in with my belief that there should be widespread, inexpensive blood, biomarker and genetic tests to help catch disease early and to develop an understanding of biomarker changes, so as to track the development of disease and aging. We could also create adaptive clinical trials to shorten the development and approval process for new medical procedures.


The device is now much smaller than a shoe box (USB-stick size), with the optics and supporting electronics filling the space around the microchip.

Canadian scientists have succeeded in building the least expensive portable device for rapid genetic testing ever made. The cost of carrying out a single genetic test currently varies from hundreds to thousands of pounds, and the wait for results can take weeks. Now a group led by Christopher Backhouse at the University of Alberta, Edmonton, has developed a reusable microchip-based system that costs just £500 to build, is small enough to be portable, and can be used for point-of-care medical testing.

To keep costs down, ‘instead of using the very expensive confocal optics systems currently used in these types of devices we used a consumer-grade digital camera’, Backhouse explained.

The device can be adapted for use in many different genetic tests. ‘By making small changes to the system you could test for a person’s predisposition to cancer, carry out pharmacogenetic tests for adverse drug reactions or even test for pathogens in a water supply,’ said Backhouse.

The heart of the unit, the ‘chip,’ looks like a standard microscope slide etched with fine silver and gold lines. That microfabricated chip applies nano-biotechnologies within tiny volumes, sometimes working with only a few molecules of sample. Because of this highly integrated chip (containing microfluidics and microscale devices), the remainder of the system is inexpensive ($1,000) and fast.

There are many possible uses for such a portable genetic testing unit:

Backhouse notes that adverse drug reactions are a major problem in health care. By running a quick genetic test on a cancer patient, for example, doctors might pinpoint the type of cancer and determine the best drug and correct dosage for the individual.

Or health-care professionals can easily look for the genetic signature for a virus or E. coli – also making it useful for testing water quality.

“From a public health point of view, it would be wonderful during an epidemic to be able to do a quick test on a patient when they walk into an emergency room and be able to say, ‘you have SARS, you need to go into that (isolation) room immediately.’ ”

A family doctor might determine a person’s genetic predisposition to an illness during an office visit and advise the patient on preventative lifestyle changes.

FURTHER READING
Microfabrication technologies research at the University of Alberta

Rapid genetic analysis

In collaboration with the Glerum Lab we have been developing microchip based implementations of genetic amplification (PCR — the polymerase chain reaction) and capillary electrophoresis (CE) that are extremely fast.

- Cancer diagnostics

- Cell manipulation on a chip

- On chip PCR (polymerase chain reaction)

- Single cell PCR

- DNA Sequencing