
Could Synthetic DNA Be the Next Tech Breakthrough?

Is Synbio the next big thing? Hmmm, it depends. If we are talking about first ensuring a solid foundation and infrastructure (including platforms) for quantum computing (QC), then building on the existing evolution and maturity of Synbio's fundamentals, and then accelerating Synbio toward creating superhumans and the Singularity, my answer is yes. If we ignore QC and focus on Synbio alone, my answer is no, because QC will be required as a foundation for things like real humanoid AI and cell-circuited humans/superhumans.


Why we might soon be buying silk, wood, and more fabricated out of genetic code.

Read more

Singularity CGI

Carrie Fisher just died, but she will likely come back to life, because the Singularity is Near and is bringing Singularity CGI with it!

Bringing the dead back to life

In the latest Star Wars movie, Rogue One, five characters were brought back as they would have looked between episodes 3 and 4 of Star Wars: Princess Leia, Grand Moff Tarkin, Dr. Cornelius Evazan (who said “I have the death sentence on twelve systems” in episode 4), General Dodonna, and Mon Mothma. General Dodonna and Mon Mothma were brought back the traditional way, using actors who looked similar to the originals. The other three were brought back with CGI (computer-generated imagery), more specifically CGI enhanced with motion capture.

Princess Leia (Carrie Fisher) created by computer in Rogue One (credit: Lucasfilm)

Grand Moff Tarkin created by computer in Rogue One (credit: Lucasfilm)
(He looks better because he had more screen time, so they spent more money on him.)

Motion capture has been used for a while, including to depict Gollum in The Lord of the Rings and all the apes in the latest Planet of the Apes movies, but Grand Moff Tarkin was considered possibly too hard to do with today’s technology: people are much better at spotting flaws in CGI humans than in CGI apes or CGI twisted hobbits.

Here is how Grand Moff Tarkin was created:

English actor Guy Henry (Harry Potter) wore motion-capture gear on his head so that his face could eventually be replaced with a digital likeness of Peter Cushing, the actor who originally played Tarkin. Henry was picked because he had a build and stature similar to Cushing’s and was able to speak in a similar manner.

Guy Henry looked a little different before the CGI was placed on top of him.
(credit: Industrial Light & Magic/Lucasfilm)

John Knoll, the chief creative officer at Industrial Light & Magic, wasn’t sure they would be able to pull it off. He said, “We did talk about Tarkin participating in conversations via hologram, or transferring that dialogue to other characters.” Tarkin ended up looking pretty good (actually fooling some critics!), although Dr. Evazan looked even better in his scene, mostly because his distorted face doesn’t look completely human.

It takes a lot of work to make CGI look realistic!
(credit: Industrial Light & Magic/Lucasfilm)

Nvidia

Was this CGI good enough to envision Princess Leia having a significant role created with this technology? Probably not. But something amazing happened in 2016: Nvidia completed a project, costing over $2 billion, to develop a new class of graphics/AI chips. As an example of how good they are, the new version of the Nvidia Titan X can do AI (artificial intelligence) work 566% faster than the previous Titan X, which was released only a year before. This $1,200 card is so popular that, half a year after its release, Nvidia is still rationing it to two per customer. (Disney gets to use Nvidia cards that are even more powerful than the Titan X!)

Why does Nvidia matter to Star Wars fans? Because Nvidia’s chips power the CGI in virtually all movies. Nvidia now captures essentially all of the profit made by companies selling graphics cards, and its chips are going into everything from Nintendo’s newest console to every Tesla car currently being built. This has caused Nvidia to soar in value, with its stock more than tripling in 2016, which means Nvidia will be able to spend a lot more than $2 billion on its next generation of graphics/AI chips! (Nvidia has found a way to use roughly the same architecture for its AI chips and its regular graphics chips.)

Not only is Nvidia’s new graphics/AI chip design affecting movies; it is also helping make Ray Kurzweil’s prediction of human-equivalent AI by 2029 a reality. Nvidia’s CEO Jen-Hsun Huang said, “AI is going to increase in capability faster than Moore’s Law. I believe it’s a kind of a hyper Moore’s Law phenomenon because it has the benefit of continuous learning. It has the benefit of large-scale networked continuous learning. Today, we roll out a new software package, fix bugs, update it once a year. That rhythm is going to change. Software will learn from experience much more quickly. Once one smart piece of software on one device learns something, then you can over-the-air (OTA) it across the board. All of a sudden, everything gets smarter.”

Singularity CGI

Nvidia’s improved chips are part of a three-layer accelerating-returns juggernaut that is transforming CGI and which we will call Singularity CGI.

Layer one of Singularity CGI is better hardware, with Nvidia having just switched from 28-nanometer transistors to 16/14-nanometer transistors. (It gets 16-nanometer technology from TSMC and 14-nanometer from Samsung.) A 14-nanometer part is only about 140 atoms wide! Both Samsung and TSMC will be producing 10-nanometer parts in 2017, with large Nvidia parts likely in 2018. (The 10-nanometer parts in 2017 will be for companies like Apple, which don’t need chips as large and complex as Nvidia’s.)
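As a back-of-the-envelope sketch of what these shrinks buy, here is a bit of Python assuming ideal scaling, where transistor density grows with the inverse square of the feature size (the ideal-scaling assumption is mine; real processes gain somewhat less):

# Ideal density gain from shrinking a process node:
# density scales with the inverse square of the feature size.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

for old, new in [(28, 16), (28, 14), (14, 10)]:
    print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.1f}x the transistors per area")

# Output:
# 28nm -> 16nm: ~3.1x the transistors per area
# 28nm -> 14nm: ~4.0x the transistors per area
# 14nm -> 10nm: ~2.0x the transistors per area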

Layer two of Singularity CGI is better Nvidia designs, which is what Nvidia just spent over two billion dollars on. Layer three is better software. As an example of this exponentially improving software, Disney Research and Carnegie Mellon University scientists have found that three computer-vision methods commonly used to reconstruct 3D scenes capture facial details far better when they are performed simultaneously rather than independently.

“The quality of a 3D model can make or break the perceived realism of an animation,” said Paulo Gotardo, an associate research scientist at Disney Research. “That’s particularly true for faces; people have a remarkably low threshold for inaccuracies in the appearance of facial features. PGSF (photogeometric scene flow) could prove extremely valuable because it can capture dynamically moving objects in high detail and accuracy.”

Conclusion

The growing power of Singularity CGI means that when Disney is ready to release episode 9 in 2019, it will be able to use much more powerful Nvidia cards and more sophisticated software to bring Carrie Fisher back to life and finish her story, whose first two parts are The Force Awakens (episode 7, 2015) and episode 8 (2017). (Carrie already filmed her scenes for episode 8.)

To improve the odds of creating a realistic Carrie Fisher, Disney will even receive $50 million, thanks to insurance through Lloyd’s of London, to fund this effort. May the force be with us!

What does being on track for the predicted Technological Singularity mean, and are we on track?

Ray Kurzweil is famous for his vision and prediction of a Technological Singularity by 2049. Whenever Ray predicts a date like 2049, though, he gives his predictions ten years of leeway in either direction, based on his own past reviews of them. So by Ray’s personal standard, his Technological Singularity prediction would count as correctly timed if it happened in the 2039 to 2059 window. His predictions are usually based on exponential developments and progress, so he would rarely err by predicting something happening too early.

The technological singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

Some use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology, although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.

Read more

Aubrey de Grey: Indefinite Lifespans And Rationalizing Death

Aubrey and Kurzweil.



Watch other videos:

This Is How Quantum Computing Will Change The World: https://youtu.be/0Hlssbyc49o

Let’s cut to the chase: there have never been times as uncertain as these in the world of business.

There is no written rule-book to follow when it comes to career survival. The “Future of Work” is about keeping ourselves employable in a workforce where the priority of business leaders is to invest in automation and digital technology more than in training and developing their own workforces.

Our soon-to-be-released State of Operations and Outsourcing 2017 study, conducted in conjunction with KPMG across 454 major enterprise buyers globally, shows a dramatic shift in priorities among senior managers (SVPs and above): 43% are earmarking significant investment in robotic automation of processes, compared with only 28% placing a similar emphasis on training and change management. In fact, just as many senior managers are focused on cognitive computing as on their own people … yes, folks, this is the singularity of enterprise operations, where cognitive computing now equals employees’ brains when it comes to investment!

Read more

Is it Possible to Defeat Death? SENS Research Over 9000!

Dr. Aubrey de Grey on the case again in this amusing video.


Dr. Aubrey de Grey stars in a new video where people ask questions via Twitter. It is a bit tongue-in-cheek, and sorry about the title, but hopefully you will enjoy it.

If you liked this video and agree that eliminating age-related diseases is a good idea, please consider visiting our website and making a donation for science at the link below:

Donate

Read more

Why we are still light years away from full artificial intelligence

The future is here… or is it?

With so many articles proliferating in the media about how humans are at the cusp of full AI (artificial intelligence), it’s no wonder we believe that the future, full of robots, drones, and self-driving vehicles, with diminishing human control over these machines, is right on our doorstep.

But are we really approaching the singularity as fast as we think we are?

Read more

Proof that Moore’s Law has been replaced by a Virtual Moore’s Law that is Accelerating and Bringing the Singularity With It

Introduction

Moore’s Law says that the number of transistors per square inch will double approximately every 18 months. This article will show how many new technological developments are together providing a Virtual Moore’s Law, under which computer performance will keep at least doubling every 18 months for the foreseeable future.

This Virtual Moore’s Law is propelling us towards the Singularity, where the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

Going Vertical

In the first of my “proof” articles two years ago, I described how it has become harder to miniaturize transistors, causing computing to go vertical instead. At that time, Samsung was mass-producing 24-layer 3D NAND chips and had announced 32-layer chips. As I write this, Samsung is mass-producing 48-layer 3D NAND chips, with 64-layer chips rumored to appear within a month or so. Even more importantly, by the end of 2017 the majority of NAND chips produced by all companies are expected to be 3D. Samsung and its competitors are currently working 24/7 to convert their 2D factories into 3D factories, dramatically changing how NAND flash chips are made.

Cross section of a 48-layer 3D NAND chip
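Here is a quick Python sketch of the pace, using the layer counts above; the assumption that capacity scales roughly with layer count is mine (vendors also shrink cells and add bits per cell):

import math

layers_then, layers_now, years = 24, 48, 2.0   # figures from this article
growth = layers_now / layers_then              # 2x in two years
doubling_time = years / math.log2(growth)

print(f"Implied doubling time: {doubling_time:.1f} years")                     # 2.0 years
print(f"Layers two years from now at this pace: {int(layers_now * growth)}")   # 96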

Going Massively Parallel

Moore’s Law only talks about the number of transistors per square inch; it doesn’t directly mean that a chip will run any faster. Unfortunately, since 2006 Intel’s CPUs have dramatically slowed their performance gains, averaging about 10% a year. The cause of this problem is that the average Intel CPU has only 2 to 4 cores, and it has become difficult to make those cores faster.
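A back-of-the-envelope comparison shows how different these growth rates are when compounded over a decade (both rates are this article’s figures, not mine):

years = 10
cpu_gain = 1.10 ** years                # Intel-style ~10% per year, compounded
virtual_moore = 2 ** (years / 1.5)      # doubling every 18 months

print(f"10%/year after {years} years:          ~{cpu_gain:.1f}x")        # ~2.6x
print(f"18-month doubling after {years} years: ~{virtual_moore:.0f}x")   # ~102x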

Nvidia has been promoting a different architecture, which averages thousands of cores. In 2016, it had a HUUGE success with this idea, causing the company to soar in value, with Nvidia’s market capitalization reaching about one third of Intel’s. (This is impressive, as Intel is a very profitable company, with profits exceeding $15 billion in 2015.)


Titan X Pascal is 566% faster at AI than last year’s model!

Exactly what did Nvidia achieve? Its new Titan X runs AI instructions 566% faster than the old Titan X, which was released only a year earlier. Nvidia also got a half-size version of its Drive PX 2 put into all Tesla cars, and that chip is about 4000% faster than the chip it replaced. Finally, Nvidia is working on a successor to the Drive PX 2 called Xavier, rumored to arrive in about a year and to be at least 400% more energy efficient than the current chip.

These numbers of 566%, 4000%, and 400% are much bigger than Intel’s 10% and are causing a fundamental change in how computing is done. It is worth noting that Nvidia’s main competitor, AMD, has also been blowing past Intel’s 10% annual performance gains, so the many-core idea has been proven by multiple companies. In fact, even the graphics portion of Intel’s own chips has been beating that 10% yearly gain.

Thousands of programs have been designed to take advantage of this parallel computing performance. For example, BlazingDB runs over 100 times as fast as MySQL, which was designed to run only on CPUs. As the performance gap widens between a standard CPU with a handful of cores and a GPU with thousands of cores, more and more programs are being written to take advantage of GPUs. And the growing market for massively parallel chips means that Nvidia can now afford to spend a lot of money on making its chips better; its latest generation of chips, called Pascal, cost over $2 billion to develop. (All of that money goes into chip design; Nvidia doesn’t actually build chips, currently using TSMC and Samsung for that.)
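To make the point concrete, here is a minimal Python/NumPy sketch of the programming model. This runs on a CPU and only illustrates the data-parallel style that thousands of GPU cores reward; it is not Nvidia’s actual stack:

import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar style: one element at a time, like a single CPU core.
start = time.perf_counter()
out_scalar = [a[i] * b[i] for i in range(n)]
scalar_s = time.perf_counter() - start

# Data-parallel style: one operation over the whole array at once,
# the shape of computation that maps well onto thousands of cores.
start = time.perf_counter()
out_vector = a * b
vector_s = time.perf_counter() - start

print(f"scalar loop: {scalar_s:.2f}s  vectorized: {vector_s:.4f}s  "
      f"(~{scalar_s / vector_s:.0f}x faster)")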

Lightning-Fast Data

For a long time, data for programs was stored on slow hard drives. Then it moved to SATA SSDs, which got rapidly faster each year until they finally hit the bandwidth limits of the SATA standard. Now data is moving to PCIe SSDs, which currently have 6 times the bandwidth of SATA drives, with even faster PCIe drives planned. (A PCIe drive that used 16 lanes, like a graphics card, would have 4 times the bandwidth of current PCIe drives.) Intel’s coming Optane 3D XPoint SSDs and Samsung’s Z-NAND SSDs are examples of such faster PCIe drives, and a handful of enterprise SSDs that use 16 lanes already exist.
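Those ratios are easy to sanity-check in Python. The constants below are my assumed figures, not the article’s: SATA III tops out near 600 MB/s after encoding overhead, and PCIe 3.0 delivers roughly 985 MB/s per lane:

SATA_MBPS = 600           # SATA III: 6 Gb/s link minus encoding overhead
PCIE3_LANE_MBPS = 985     # PCIe 3.0, per lane, after encoding overhead

pcie_x4 = 4 * PCIE3_LANE_MBPS    # typical NVMe SSD today
pcie_x16 = 16 * PCIE3_LANE_MBPS  # a graphics-card-style 16-lane drive

print(f"PCIe x4 vs SATA: ~{pcie_x4 / SATA_MBPS:.1f}x")   # ~6.6x, matching "6 times"
print(f"PCIe x16 vs x4:  {pcie_x16 // pcie_x4}x")        # 4x, matching the claim above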

Even faster than all these drives is storing everything in memory, which is becoming more and more common. When Watson won at Jeopardy in 2011, its trick was to store everything in its 16 terabytes of RAM instead of using its drives during the competition. Today, Samsung sells 2.5D memory cards that hold 128GB each.

Intel’s highest-memory-capacity Xeons can handle 3 terabytes of memory per chip, and motherboards are being sold that can hold 3 terabytes of Samsung’s 2.5D memory. (2.5D means that four layers of chips are “soldered” on top of each other.) Cheaper non-Xeon systems now hold as much as 128GB of 2D memory, which is pretty good for a home computer.
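The reason in-memory computing pays off is raw access latency. The ballpark figures below are commonly cited numbers I am assuming, not measurements from the article:

# Rough access latencies for each tier of the storage hierarchy.
latency_ns = {
    "DRAM access":     100,          # ~100 nanoseconds
    "NVMe SSD read":   100_000,      # ~100 microseconds
    "Hard-drive seek": 10_000_000,   # ~10 milliseconds
}

dram_ns = latency_ns["DRAM access"]
for device, ns in latency_ns.items():
    print(f"{device:>15}: {ns:>12,} ns  ({ns // dram_ns:,}x DRAM)")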

Computers Programming Computers


Nvidia’s CEO Jen-Hsun Huang said, “AI is going to increase in capability faster than Moore’s Law. I believe it’s a kind of a hyper Moore’s Law phenomenon because it has the benefit of continuous learning. It has the benefit of large-scale networked continuous learning. Today, we roll out a new software package, fix bugs, update it once a year. That rhythm is going to change. Software will learn from experience much more quickly. Once one smart piece of software on one device learns something, then you can over-the-air (OTA) it across the board. All of a sudden, everything gets smarter.”

Computers Designing Chips

Since the mid-1970s, programs have been used to design chips, as chips have become too complicated for any team of humans to handle on their own. (Nvidia’s Tesla P100 GPUs have 150 billion transistors when you include the memory “soldered” on top of them!)

A quantum leap in chip design may happen in the near future, as Nvidia recently built a supercomputer for internal research out of mainly Nvidia Tesla P100 GPUs. This supercomputer was ranked 28th among all computers in the world. What will this computer be used for?

Nvidia said, “We’re also training neural networks to understand chipset design and very-large-scale-integration, so our engineers can work more quickly and efficiently. Yes, we’re using GPUs to help us design GPUs.”

This is a very interesting area to watch, as today’s chips are so complicated that they are likely quite inefficient, with massive speedups available if we could find a better way to optimize them. An example of the possible gains: Nvidia achieved about a 50% performance increase between its Kepler and Maxwell generations despite both microarchitectures using the same 28nm process.

Conclusion

The new Virtual Moore’s Law is already having a massive effect. Jen-Hsun said, “By collaborating with AI developers, we continued to improve our GPU designs, system architecture, compilers, and algorithms, and sped up training deep neural networks by 50x in just three years — a much faster pace than Moore’s Law.”

With chips going vertical, chip architectures going massively parallel, lightning-fast data, computers programming computers, and computers designing chips, the Singularity is closer than you think!

Technical Note for Geeks

Here is what I actually meant by “soldered”:

Conventional chip packages interconnect die stacks using wire bonding, whereas in TSV (through-silicon via) packages, the chip dies are ground down to a few dozen micrometers, pierced with hundreds of fine holes, and vertically connected by electrodes passing through the holes, allowing for a significant boost in signal transmission.