
Why not? As we will see, we will indeed require cell-circuitry technology for QBS to be fully effective and enhanced.


The TV commercial is nearly 20 years old but I remember it vividly: a couple is driving down a street when they suddenly realize the music on their tape deck is in sync with the repetitive activity on the street. From the guy casually dribbling a basketball to people walking along the sidewalk to the delivery people passing packages out of their truck, everything and everyone is moving rhythmically to the beat.

The ending tag line was, “Sometimes things just come together,” which is quite true. Many of our basic daily activities like breathing and walking just come together as a result of repetitive movement. It’s easy to take them for granted but those rhythmic patterns ultimately rely on very intricate, interconnected signals between nerve cells, also called neurons, in the brain and spinal cord.

Interesting study on brain receptors.


Researchers from UZH have discovered how the perception of meaning changes in the brain under the influence of LSD. The serotonin 2A receptors are responsible for altered perception. This finding will help develop new courses of pharmacotherapy for psychiatric disorders such as depression, addictions or phobias.

Humans perceive everyday things and experiences differently and attach different meaning to pieces of music, for instance. In the case of psychiatric disorders, this perception is often altered. For patients suffering from addictions, for instance, drug stimuli are more meaningful than for people without an addiction. Or patients with phobias perceive the things or situations that scare them with exaggerated significance compared to healthy people. A heightened negative perception of the self is also characteristic of depressive patients. Just how this so-called personal relevance develops in the brain and which neuropharmacological mechanisms are behind it, however, have remained unclear.

Researchers from the Department of Psychiatry, Psychotherapy and Psychosomatics at Zurich University Hospital for Psychiatry now reveal that LSD influences this process by stimulating the serotonin 2A receptor, one of the 14 serotonin receptors in the brain. Before the study began, the participants were asked to categorize 30 pieces of music as personally important and meaningful or without any personal relevance. In the subsequent experiment, LSD altered the attribution of meaning compared to a placebo: “Pieces of music previously classified as meaningless suddenly became personally meaningful under the influence of LSD,” explains Katrin Preller, who conducted the study in conjunction with Professor Franz Vollenweider and the Neuropsychopharmacology and Brain Imaging research team.

Actually, I have begun looking seriously into an at-home robot for my home; the current models are still not where I want them to be, which is why I began looking closer, and more seriously, at building my own line.


One of the coming great challenges of senior care is facilitating assisted living, according to experts. The so-called Baby Boomer generation that is now entering retirement age lives longer, expects the world from its twilight years, and insists on staying independent for as long as possible. Most Boomers don’t even think about going out quietly, withering away in homes that offer little more than warehousing. Instead, they want to stay active and engaged until the very end, and they welcome all the help they can get to achieve that goal. And when they cannot do it anymore on their own, futuristic technology like robots for personal use may just be the ticket.

If you have seen the 2012 movie “Robot & Frank,” you already had a glimpse, albeit a comical one, of how the future of assisted living might look. In a nutshell, the story is about the “relationship” between an elderly gentleman (played by Frank Langella), who has just retired from a lifetime career as a cat burglar, and a humanoid robot given to him by his children as a home caretaker. Of course, the film’s particular angle on robotic technology is not to be taken too seriously. But the fact is that intelligent machines are progressively affecting every aspect of life as we know it, and will do so much more in coming years.

On a recent trip to Tokyo, I had the chance to see for myself how far we have already moved in that direction. Here, robots designed for personal assistance are readily available in department stores, just like any other household appliance. Although many of the existing models have only limited capabilities, like finding information on the Internet or compiling music playlists, or even less useful features like responding with a cute smile and offering a handshake when approached, it is clear that these creatures of our own making will eventually be the ones we partner with on countless tasks, both at work and in our homes.


Carrie Fisher just died but she will likely come back to life because the Singularity is Near and is bringing Singularity CGI with it!

Bringing the dead back to life

In the latest Star Wars movie, Rogue One, five characters were brought back as they would have looked between episodes 3 and 4 of Star Wars. They were Princess Leia, Grand Moff Tarkin, Dr. Cornelius Evazan (who said “I have the death sentence on twelve systems” in episode 4), General Dodonna, and Mon Mothma. General Dodonna and Mon Mothma were brought back with the traditional method of using actors who looked similar to the original actors. The other three were brought back with CGI (computer-generated imagery), more specifically CGI enhanced with motion capture.

Princess Leia (Carrie Fisher) created by computer in Rogue One (credit: Lucasfilm)

Grand Moff Tarkin created by computer in Rogue One (credit: Lucasfilm)
(He looks better because he had more screen time so they spent more money on him.)

Motion capture has been used for a while, including to depict Gollum in The Lord of the Rings and all the apes in the latest Planet of the Apes movies, but Grand Moff Tarkin was considered possibly too hard to do with today’s technology, because people are better at spotting flaws in CGI humans than in CGI apes and CGI twisted hobbits.

Here is how Grand Moff Tarkin was created:

English actor Guy Henry (Harry Potter) wore motion-capture markers on his head so that his face could eventually be replaced with a digital likeness of Peter Cushing (the actor who originally played Tarkin). He was picked because he had a similar build and stature to Cushing and was able to speak in a similar manner.

Guy Henry looked a little different before the CGI was placed on top of him.
(credit: Industrial Light & Magic/Lucasfilm)

John Knoll, the chief creative officer at Industrial Light & Magic, wasn’t sure they were going to be able to pull this off. He said, “We did talk about Tarkin participating in conversations via hologram, or transferring that dialogue to other characters.” Tarkin ended up looking pretty good (actually fooling some critics!) although Dr. Evazan looked even better for his scene mostly because his distorted face doesn’t look completely human.

It takes a lot of work to make CGI look realistic!
(credit: Industrial Light & Magic/Lucasfilm)

Nvidia

Was this CGI good enough to envision Princess Leia having a significant role using this technology? Probably not. But something amazing happened in 2016: Nvidia completed a project, costing over $2 billion, to develop a new class of graphics/AI chips. As an example of how good they are, the new version of the Nvidia Titan X can do AI (artificial intelligence) work 566% faster than the previous Titan X, which was released only a year before. This $1,200 card is so popular that, half a year after its release, Nvidia is still rationing it to two per customer. (Disney gets to use Nvidia cards that are even more powerful than the Titan X!)

Why does Nvidia matter to Star Wars fans? Because Nvidia’s chips power the CGI in virtually all movies. Nvidia now captures essentially all of the profits among companies selling graphics cards, with its chips going into everything from Nintendo’s newest console to every Tesla car currently being built. This has caused Nvidia to soar in value, with its stock more than tripling in 2016. It means Nvidia will be able to spend a lot more than $2 billion on its next generation of graphics/AI chips! (Nvidia has found a way to use roughly the same architecture for AI chips and regular graphics chips.)

Not only is Nvidia’s new graphics/AI chip design affecting movies but they are helping make Ray Kurzweil’s prediction of human equivalent AI by 2029 a reality. Nvidia’s CEO Jen-Hsun Huang said, “AI is going to increase in capability faster than Moore’s Law. I believe it’s a kind of a hyper Moore’s Law phenomenon because it has the benefit of continuous learning. It has the benefit of large-scale networked continuous learning. Today, we roll out a new software package, fix bugs, update it once a year. That rhythm is going to change. Software will learn from experience much more quickly. Once one smart piece of software on one device learns something, then you can over-the-air (OTA) it across the board. All of a sudden, everything gets smarter.”

Singularity CGI

Nvidia’s improved chips are part of a three-layer accelerating-returns juggernaut that is affecting CGI, which we will call Singularity CGI.

Layer one of Singularity CGI is better hardware, with Nvidia having just switched from 28-nanometer transistors to 16/14-nanometer transistors. (They get 16-nanometer technology from TSMC and 14-nanometer from Samsung.) A 14-nanometer feature is only about 140 atoms wide! Both Samsung and TSMC will be producing 10-nanometer parts in 2017, with large Nvidia parts likely in 2018. (The 10-nanometer parts in 2017 will be for companies like Apple that, unlike Nvidia, don’t need large, complex chips.)
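The “140 atoms” figure is simple division, and it assumes an atomic diameter of roughly 0.1 nm (silicon atoms are actually closer to 0.2 nm across, so the real count is lower):

```python
feature_size_nm = 14
assumed_atom_nm = 0.1  # atomic diameter implied by the 140-atom figure;
                       # silicon atoms are actually closer to 0.2 nm across
atoms_across = feature_size_nm / assumed_atom_nm
print(atoms_across)  # 140.0
```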

Layer two of Singularity CGI is better Nvidia designs, which is what Nvidia just spent over two billion dollars on. Layer three is better software. An example of this exponentially improving software: Disney Research and Carnegie Mellon University scientists have found that three computer-vision methods commonly used to reconstruct 3D scenes produce superior results in capturing facial details when they are performed simultaneously rather than independently.

“The quality of a 3D model can make or break the perceived realism of an animation,” said Paulo Gotardo, an associate research scientist at Disney Research. “That’s particularly true for faces; people have a remarkably low threshold for inaccuracies in the appearance of facial features. PGSF (photogeometric scene flow) could prove extremely valuable because it can capture dynamically moving objects in high detail and accuracy.”

Conclusion

The growing power of Singularity CGI means that when Disney is ready to release episode 9 in 2019, it will be able to use much more powerful Nvidia cards and more sophisticated software to bring Carrie Fisher back to life and finish her story, the first two parts of which appeared in The Force Awakens (episode 7) in 2015 and episode 8 in 2017. (Carrie had already filmed her scenes for episode 8.)

To increase the chance of creating a realistic Carrie Fisher, Disney will even have $50 million, thanks to insurance through Lloyd’s of London, to fund this effort. May the force be with us!

Pre-order at http://www.superpedestrian.com. This is the first commercial version of the Copenhagen Wheel, now available for sale.
Own a limited-edition, hand-crafted Copenhagen Wheel, invented and built in Cambridge, MA.

The Copenhagen Wheel — technical specifications:
MOTOR: US: 350 W / EU: 250 W
WHEEL SIZE: 26″ or 700c rim
BATTERY: Removable 48 V lithium
CONNECTIVITY: Bluetooth 4.0
BATTERY LIFE: 1,000 cycles
SMARTPHONE OS: iOS, Android
CHARGE TIME: 4 hours
COMPATIBILITY: Single speed or 9/10-speed freehub (email us your bike specs if you have doubts: [email protected])
TOP SPEED: US: 20 mph / EU: 25 km/h
BRAKE TYPE: Rim brake and regenerative braking (downhill and back-pedal)
RANGE: Up to 50 km / 31 mi
WEIGHT: 5.9 kg / 13 lbs
DROPOUT: 135 mm

Video:
Directed by: Alon Seifert
Concept & script by: Assaf Biderman
Production: papush.net
Supervised by: Nili (Onili) Ohayon
Lead photographer: Frank Sum
Animation director: Omer ben David
Photographers: Danny Dwyer, John David, Habib Yazdi
Additional 3D animation: Yishay Shemesh
Video editor: Alison Mao
Additional editing: Habib Yazdi
Narration by: Andrew Finn Magill
Additional animation: Dan G Windsor
Additional graphic design: Eitan Cohen
Music by: The Secret Project
Sound mix by: Nili Ohayon
Still photos and additional production: Dan Mason
Bike mechanic: Edward Thomas

Riders: Chris Green, Frank, Nili Ohayon, Eli Pe’er

Special thanks to the Superpedestrian Team:
John Ibsen, Basak Ozer, Ruben Cagnie, James Simard, Julian Fong, Eric Barber, Jon Stevens, Nili Ohayon, Jeanne Dasaro and of course: Assaf Biderman. Extra thanks to Chris Green for script assistance.

Special thanks to Harris Bicycle Shop in Newton, MA.


No surprises here. We have all known that technology in medical research and development would, and will continue to, solve many diseases such as cancer, as we are already seeing with gene and cell-circuitry technology.


Silicon Valley thrives on disrupting the traditional ways we do many things: getting an education, consuming music and other media, communicating with others, even staying healthy. Bill Gates and Dr. Patrick Soon-Shiong know a few things about how to spend a lot of money to disrupt mainstream research while searching for cures in medicine.

Sean Parker hopes to join their ranks. In 1999, he co-founded the file-sharing service Napster, and in 2004, he became the first president of Facebook. Today, Parker announced his latest endeavor: a $250 million bet on eradicating cancer, through the Parker Institute for Cancer Immunotherapy. He says it is just a matter of time until his plan works.

Billionaire behind Cancer Moonshot 2020

Quantum mechanics dictates sensitivity limits in measurements of displacement, velocity and acceleration. A recent experiment at the Niels Bohr Institute probes these limits, analyzing how quantum fluctuations set a sensor membrane into motion in the process of a measurement. The membrane is an accurate model for future ultraprecise quantum sensors, whose complex nature may even hold the key to overcoming fundamental quantum limits. The results are published in the scientific journal Proceedings of the National Academy of Sciences.

Vibrating strings and membranes are at the heart of many musical instruments. Plucking a string excites it to vibrations, at a frequency determined by its length and tension. Apart from the fundamental frequency — corresponding to the musical note — the string also vibrates at higher frequencies. These overtones influence how we perceive the ‘sound’ of the instrument, and allow us to tell a guitar from a violin. Similarly, beating a drumhead excites vibrations at a number of frequencies simultaneously.
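The relationship described above, with the fundamental set by length and tension and the overtones at integer multiples, is the ideal-string formula f_n = (n / 2L) · sqrt(T / μ). A minimal sketch with illustrative, guitar-like values (the numbers are my own, not taken from the article):

```python
import math

def string_harmonics(length_m, tension_n, mass_per_m, n_modes=4):
    """Mode frequencies f_n = (n / 2L) * sqrt(T / mu) of an ideal string."""
    fundamental = math.sqrt(tension_n / mass_per_m) / (2.0 * length_m)
    return [n * fundamental for n in range(1, n_modes + 1)]

# Illustrative values: a 0.65 m string under 70 N tension, 0.5 g/m density.
freqs = string_harmonics(0.65, 70.0, 0.0005)
print([round(f, 1) for f in freqs])  # fundamental plus overtones at 2x, 3x, 4x
```

The relative weights of those overtones, not the frequencies themselves, are what distinguish a guitar from a violin playing the same note.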

Matters are no different when scaling down from the half-meter bass drum in a classical orchestra to the half-millimeter-sized membrane studied recently at the Niels Bohr Institute. And yet, some things are not the same at all: using sophisticated optical measurement techniques, a team led by Professor Albert Schliesser could show that the membrane’s vibrations, including all its overtones, follow the strange laws of quantum mechanics. In their experiment, these quantum laws implied that the mere attempt to precisely measure the membrane’s vibrations set it into motion. As if looking at a drum already made it hum!


We all become accustomed to the tone and pattern of human speech at an early age, and any deviations from what we have come to accept as “normal” are immediately recognizable. That’s why it has been so difficult to develop text-to-speech (TTS) that sounds authentically human. Google’s DeepMind AI research arm has turned its machine learning model on the problem, and the resulting “WaveNet” platform has produced some amazing (and slightly creepy) results.

Google and other companies have made huge advances in making human speech understandable by machines, but making the reply sound realistic has proven more challenging. Most TTS systems are based on so-called concatenative techniques, which rely upon a database of speech fragments that are combined to form words. The result tends to sound rather uneven, with odd inflections. There is also some work being done on parametric TTS, which uses a data model to generate words, but this sounds even less natural.
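The concatenative approach is easy to caricature in a few lines: look up a prerecorded fragment for each unit and stitch them together. The fragments below are invented sine-wave stand-ins, not real recorded speech, so this is only a sketch of the pipeline shape, not any production system:

```python
import numpy as np

# Toy "unit database": word -> waveform fragment.
# These sine snippets are invented stand-ins for recorded speech units.
unit_db = {
    "hello": np.sin(np.linspace(0.0, 20.0, 800)),
    "world": np.sin(np.linspace(0.0, 35.0, 900)),
}

def concatenative_tts(text, db, pause_samples=200):
    """Stitch prerecorded fragments end to end with short silent gaps.
    The audible seams between units are why this approach sounds uneven."""
    pieces = []
    for word in text.lower().split():
        pieces.append(db[word])
        pieces.append(np.zeros(pause_samples))  # silence between units
    return np.concatenate(pieces[:-1])  # drop the trailing pause

wave = concatenative_tts("hello world", unit_db)
print(wave.shape)  # (1900,): 800 + 200 + 900 samples
```

Real systems select among many candidate units and smooth the joins, but the discontinuities at those joins are exactly the unevenness the paragraph above describes.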

DeepMind is changing the way speech synthesis is handled by directly modeling the raw waveform of human speech. The very high-level approach of WaveNet means that it can conceivably generate any kind of speech or even music. Listen above for an example of WaveNet’s voice synthesis. There’s an almost uncanny valley quality to it.
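WaveNet’s key move, modeling audio autoregressively one raw sample at a time, can be sketched with a toy predictor standing in for the neural network (the two-tap linear recurrence below is purely illustrative; WaveNet’s real predictor is a deep stack of dilated convolutions):

```python
def generate(model, seed, n_samples):
    """Autoregressive generation: each new sample is predicted from the
    samples emitted so far, then appended and fed back as context."""
    samples = list(seed)
    for _ in range(n_samples):
        samples.append(model(samples))
    return samples

# Stand-in "model": a damped two-tap linear recurrence that produces a
# decaying oscillation. Purely illustrative, not DeepMind's architecture.
def toy_model(history):
    a, b = history[-2], history[-1]
    return 1.9 * b - 0.95 * a

wave = generate(toy_model, seed=[0.0, 0.1], n_samples=100)
print(len(wave))  # 102: 2 seed samples + 100 generated ones
```

The expensive part of real WaveNet inference is exactly this loop: at 16,000 samples per second, the network must be evaluated 16,000 times for each second of audio.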


Tune In, Take Control.

With OV, your day becomes more productive, enjoyable, and just a whole lot easier. Use your voice to play a song, order groceries and check the news. Switch seamlessly between the best music and calls, voice commands and real world conversations, without missing a beat and without touching your phone.
