
Archive for the ‘information science’ category: Page 284

Apr 10, 2016

Alphabet Inc Uses Its Head in AI

Posted by in categories: business, information science, robotics/AI, security, singularity

I imagine that Alphabet has already been exploring online bot technology with its cloud as well as its other AI technology. However, one real opportunity in online cloud services is delivering "personable" experiences for consumers and businesses. Granted, big data and analytics in the cloud are proving exceptional for researchers and industry; the harder question is how we make the leap to a more personable experience that is also available and attractive to individual consumers and small businesses, especially as we look ahead to connected AI and the singularity. Personally, I have not yet seen any viable answers to that question. Security and privacy remain huge hurdles that must be addressed properly before consumers will adopt these services from a personable-experience perspective.


The market for cloud services is expected to skyrocket in the years ahead. With hundreds of billions of dollars at stake, industry leaders including Microsoft, IBM, and Alphabet are going all-in to capture their fair share of the cloud revenue pie. Alphabet has taken a different path than its tech brethren in the cloud market, but it appears that’s about to change.

Until recently, Alphabet seemed content to focus its cloud efforts on data hosting, or Infrastructure-as-a-Service (IaaS). Not a bad plan given that the amount of data amassed in today’s digital world is unparalleled and is expected to continue growing as consumers become more connected. But even at this early stage of the cloud, data hosting has become a commodity. The real opportunity lies in cloud-based Software-as-a-Service (SaaS) and data analytics solutions, which Alphabet is beginning to address.

Continue reading “Alphabet Inc Uses Its Head in AI” »

Apr 7, 2016

‘The Next Rembrandt’ is a 3D-printed take on the painter’s style

Posted by in categories: 3D printing, information science, media & arts

A new Rembrandt painting was unveiled in Amsterdam on Tuesday, and we're not talking about a newly discovered work. No, this one, called The Next Rembrandt, is truly brand new, created using data, algorithms and a 3D printer within the span of 18 months. A team of data scientists, engineers and scientists from various institutions, including Microsoft and the Rembrandt House Museum, joined forces to create this homage to the great painter. The team examined all the Dutch master's known paintings to come up with the perfect project: a portrait of a 30- to 40-year-old Caucasian male with facial hair, wearing dark clothes with a collar and a hat on his head, facing to the right.

They then developed algorithms to extract what features make a painting a Rembrandt, such as the face’s shape and proportions. Ron Augustus, Microsoft’s SMB Markets Director, said: “You could say that we used technology and data like Rembrandt used his paints and his brushes to create something new.” To give their work a real painting’s texture, they used 3D printing techniques to print oil paint in layers. As a result, the portrait feels like it was actually painted by a human artist.
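The feature-extraction step described above can be imagined, in miniature, as computing scale-invariant ratios between facial landmarks. The sketch below is only an illustrative toy with made-up landmark names and coordinates; the actual project analyzed far richer geometric, material and texture data.

```python
# Minimal sketch of proportion-based feature extraction (hypothetical
# landmarks and values; not the project's actual algorithm).

landmarks = {
    "left_eye":  (112, 140),
    "right_eye": (188, 142),
    "nose_tip":  (150, 190),
    "chin":      (152, 260),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def proportion_features(pts):
    """Ratios of facial distances: scale-invariant descriptors of face shape."""
    eye_span = distance(pts["left_eye"], pts["right_eye"])
    face_len = distance(pts["nose_tip"], pts["chin"])
    return {
        "eye_span_to_face_length": eye_span / face_len,
        "face_length_to_eye_span": face_len / eye_span,
    }

print(proportion_features(landmarks))
```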

The project, which the Netherlands’ ING Bank commissioned ad agency J Walter Thompson to develop, most likely began as a promotional undertaking. As you can see, though, the final product turned out so good that the same technique could be used to make more affordable replicas (maybe even forgeries) of masterpieces.

Continue reading “‘The Next Rembrandt’ is a 3D-printed take on the painter’s style” »

Apr 6, 2016

Mapping the Brain to Build Better Machines

Posted by in categories: information science, neuroscience, robotics/AI

Interesting, especially since things have been very quiet lately around IARPA's and DARPA's BMI efforts.


The Microns project aims to decipher the brain’s algorithms in an effort to revolutionize machine learning.

Read more

Apr 5, 2016

Introduction: Explaining the Future of Synthetic Biology with Computer Programming’s Past

Posted by in categories: bioengineering, biotech/medical, business, computing, genetics, information science, mathematics, Ray Kurzweil, singularity

As this article highlights, we will soon see a day when all techies need some level of bioscience or medical background, especially as we move closer to the Singularity predicted by Ray Kurzweil and others. In the coming decades, tech credentials will no longer rest strictly on math, algorithms and code; techies will need deeper knowledge of the natural sciences.


If you are majoring in biology right now, I say to you: that was a good call. The mounting evidence suggests that you placed your bet on the right degree. With emergent genetic recombination technologies improving at breakneck speed alongside a much deepened understanding of biological circuitry in simple, “home grown” metabolic systems, this field is shaping up to be a tinkerer’s paradise.

Many compare this stage of synthetic biology to the early days of the microprocessor (the chip that made the personal computer possible), when Silicon Valley was a place for young entrepreneurs to go if they needed a cheap place to begin their research or tech business. One such tech entrepreneur, O'Reilly Media founder Tim O'Reilly, who also helped popularize the term "open source," made this comparison in an interview with Wired magazine, commenting of synthetic biology, "It's still in the fun stage."

Continue reading “Introduction: Explaining the Future of Synthetic Biology with Computer Programming’s Past” »

Apr 5, 2016

Nvidia Unveils New Supercomputers and AI Algorithms

Posted by in categories: information science, robotics/AI, space travel, supercomputing, virtual reality

Big day for Nvidia with announcements on AI and VR.


The first day of the company’s GPU Technology Conference was chock full of self-driving cars, trips to Mars, and more.

Read more

Apr 5, 2016

Facebook begins using artificial intelligence to describe photos to blind users

Posted by in categories: food, information science, internet, mobile phones, robotics/AI, transportation

Ask a member of Facebook’s growth team what feature played the biggest role in getting the company to a billion daily users, and they’ll likely tell you it was photos. The endless stream of pictures, which users have been able to upload since 2005, a year after Facebook’s launch, makes the social network irresistible to a global audience. It’s difficult to imagine Facebook without photos. Yet for millions of blind and visually impaired people, that’s been the reality for over a decade.

Not anymore. Today Facebook will begin automatically describing the content of photos to blind and visually impaired users. Called “automatic alternative text,” the feature was created by Facebook’s 5-year-old accessibility team. Led by Jeff Wieland, a former user researcher in Facebook’s product group, the team previously built closed captioning for videos and implemented an option to increase the default font size on Facebook for iOS, a feature 10 percent of Facebook users take advantage of.

Automatic alt text, which is coming to iOS today and later to Android and the web, recognizes objects in photos using machine learning. Machine learning helps to build artificial intelligences by using algorithms to make predictions. If you show a piece of software enough pictures of a dog, for example, in time it will be able to identify a dog in a photograph. Automatic alt text identifies things in Facebook photos, then uses the iPhone's VoiceOver feature to read descriptions of the photos out loud to users. While still in its early stages, the technology can reliably identify concepts in categories including transportation ("car," "boat," "airplane"), nature ("snow," "ocean," "sunset"), sports ("basketball court"), and food ("sushi"). The technology can also describe people ("baby," "smiling," "beard"), and identify a selfie.
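To make the flow concrete, here is a minimal sketch of how recognized concepts could be turned into a spoken-ready description: keep only labels the model is confident about and join them into a sentence. The model scores, threshold and phrasing are assumptions for illustration; Facebook's actual pipeline is not public.

```python
# Toy sketch of turning image-recognition output into alt text, in the spirit
# of "automatic alternative text" (hypothetical scores and threshold; not
# Facebook's actual implementation).

from typing import Dict

def generate_alt_text(concept_scores: Dict[str, float],
                      threshold: float = 0.8) -> str:
    """Keep only concepts the classifier is confident about and join them
    into a sentence a screen reader such as VoiceOver can speak."""
    confident = [c for c, score in concept_scores.items() if score >= threshold]
    if not confident:
        return "Image may contain: no description available."
    return "Image may contain: " + ", ".join(sorted(confident)) + "."

# Example: scores as they might come back from an image classifier.
scores = {"ocean": 0.97, "sunset": 0.91, "two people": 0.88, "boat": 0.45}
print(generate_alt_text(scores))
# -> "Image may contain: ocean, sunset, two people."
```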

Continue reading “Facebook begins using artificial intelligence to describe photos to blind users” »

Apr 4, 2016

Quantum physics has just been found hiding in one of the most important mathematical models of all time

Posted by in categories: information science, mathematics, particle physics, quantum physics, space

Game theory is a branch of mathematics that looks at how groups solve complex problems. The Schrödinger equation is the foundational equation of quantum mechanics — the area of physics focused on the smallest particles in the Universe. There’s no reason to expect one to have anything to do with the other.

But according to a team of French physicists, it’s possible to translate a huge number of problems in game theory into the language of quantum mechanics. In a new paper, they show that electrons and fish follow the exact same mathematics.

Schrödinger is famous in popular culture for his weird cat, but he’s famous to physicists for being the first to write down an equation that fully describes the weird things that happen when you try to do experiments on the fundamental constituents of matter. He realised that you can’t describe electrons or atoms or any of the other smallest pieces of the Universe as billiard balls that will be exactly where you expect them to be exactly when you expect them to be there.
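For reference, the equation Schrödinger wrote down is the time-dependent Schrödinger equation, reproduced below for a single particle of mass m in a potential V(x). The result described above recasts certain many-player game-theory problems into equations of this general form, which is the sense in which "electrons and fish follow the same mathematics."

```latex
% Time-dependent Schrödinger equation; \psi(x,t) is the wave function of a
% particle of mass m moving in a potential V(x).
i\hbar \frac{\partial \psi(x,t)}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi(x,t) + V(x)\,\psi(x,t)
```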

Continue reading “Quantum physics has just been found hiding in one of the most important mathematical models of all time” »

Mar 30, 2016

Global championship of driverless cars

Posted by in categories: information science, robotics/AI, transportation

Autonomous cars compete in driving algorithms on Formula E tracks.

Read more

Mar 29, 2016

Neuromorphic supercomputer has 16 million neurons

Posted by in categories: information science, neuroscience, robotics/AI, supercomputing

Today, Lawrence Livermore National Lab (LLNL) and IBM announced the development of a new Scale-up Synaptic Supercomputer (NS16e) that highly integrates 16 TrueNorth Chips in a 4×4 array to deliver 16 million neurons and 256 million synapses. LLNL will also receive an end-to-end software ecosystem that consists of a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
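A quick arithmetic check of the headline neuron count, assuming IBM's published per-chip TrueNorth figures (4,096 cores with 256 neurons each, an assumption drawn from the chip specifications rather than from this article):

```python
# Back-of-the-envelope check of the NS16e neuron count, assuming the published
# TrueNorth figures of 4096 cores x 256 neurons per chip (an assumption from
# IBM's chip specs, not stated in the article above).

cores_per_chip = 4096
neurons_per_core = 256
chips = 16  # 4 x 4 array in the NS16e

neurons_per_chip = cores_per_chip * neurons_per_core  # 1,048,576
total_neurons = neurons_per_chip * chips              # 16,777,216

print(f"{total_neurons:,} neurons total (~16 million)")
```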

The $1 million computer has 16 IBM microprocessors designed to mimic the way the brain works.

IBM says it will be five to seven years before TrueNorth sees widespread commercial use, but the Lawrence Livermore test is a big step in that direction.

Continue reading “Neuromorphic supercomputer has 16 million neurons” »

Mar 28, 2016

DARPA Announces Next Grand Challenge — Spectrum Collaboration Challenge

Posted by in categories: information science, internet, military, mobile phones, robotics/AI

DARPA's new "Spectrum Collaboration Challenge" offers a $2 million top prize for whoever can best demonstrate a machine-learning approach to dynamically sharing the RF spectrum.


WASHINGTON, March 28, 2016 /PRNewswire-iReach/ — On March 23, 2016, DARPA announced its next Grand Challenge at the International Wireless Conference Expo in Las Vegas, Nevada. Program Manager Paul Tilghman of DARPA's Microsystems Technology Office (MTO) made the announcement to industry leaders following the conference's Dynamic Spectrum Sharing Summit. The challenge, named the "Spectrum Collaboration Challenge," will motivate a machine-learning approach to dynamically sharing the RF spectrum. A top prize of $2 million has been announced.

While mostly transparent to the typical cell phone or Wi-Fi user, spectrum congestion has been a long-standing issue for both the commercial sector and the Department of Defense. The appetite for wireless connectivity has grown at such a pace over the last 30 years that the RF community has coined the term "spectrum scarcity." RF bandwidth, the range of frequencies available for communicating information, is a relatively fixed resource, and advanced communication systems like LTE and military communications systems consume a lot of it. As spectrum planners prepare for the next big wave of connected devices, dubbed the Internet of Things, they wonder where they will find the spectrum bandwidth they need to support these billions of new devices. Equally challenging is the military's desire to connect every soldier on the battlefield while using these very same frequencies.
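To make "a machine learning approach to dynamically sharing the RF spectrum" concrete, below is a minimal sketch of one classic ingredient: an epsilon-greedy agent that learns from collision feedback which channels are least congested. This is purely an illustrative toy under assumed feedback rules, not the challenge's actual framework or any DARPA software.

```python
# Toy epsilon-greedy channel selector: a radio learns which of N channels is
# least congested from success/collision feedback. Purely illustrative; not
# DARPA's Spectrum Collaboration Challenge framework.

import random

class ChannelAgent:
    def __init__(self, n_channels: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.successes = [0] * n_channels
        self.attempts = [0] * n_channels

    def choose(self) -> int:
        """Occasionally explore a random channel; otherwise exploit the
        channel with the best observed success rate."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.attempts))
        rates = [s / a if a else 0.0 for s, a in zip(self.successes, self.attempts)]
        return rates.index(max(rates))

    def update(self, channel: int, transmitted_ok: bool) -> None:
        self.attempts[channel] += 1
        if transmitted_ok:
            self.successes[channel] += 1

# Example: channel 2 is persistently busy (collisions); the others are clear.
agent = ChannelAgent(n_channels=4)
for _ in range(200):
    ch = agent.choose()
    agent.update(ch, transmitted_ok=(ch != 2))
print("Observed success rates:",
      [round(s / a, 2) if a else None
       for s, a in zip(agent.successes, agent.attempts)])
```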

Continue reading “DARPA Announces Next Grand Challenge — Spectrum Collaboration Challenge” »