Chrome is about to load web pages noticeably faster. Thanks to Brotli, a new compression algorithm Google introduced last September, Chrome will be able to compress data up to 26 percent more densely than its existing compression engine, Zopfli, an impressive jump.
According to Google web performance engineer Ilya Grigorik, Brotli is ready to roll out, so Chrome users should see improved load times once the next version of Chrome is released. Google also says Brotli will give mobile Chrome users “lower data transfer fees and reduced battery use.” The company is hailing Brotli as “a new data format” that it hopes other web browsers will adopt in the near future, with Firefox seemingly next in line. For now, expect to notice your web pages loading a bit faster in the coming weeks.
Update, January 20th, 10:30AM: Updated to note that Firefox will also adopt Brotli in a future update.
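For a rough sense of what the new format buys, here is a minimal sketch comparing Brotli to DEFLATE (the algorithm behind gzip) on repetitive, HTML-like text. It assumes the third-party brotli package from PyPI is installed, and the exact savings will vary with the input; in practice, Chrome advertises support for the format with the Accept-Encoding: br request header.

```python
# Minimal comparison of Brotli vs. DEFLATE on repetitive, HTML-like text.
# Assumes the third-party "brotli" PyPI package (pip install brotli);
# the savings shown here are illustrative and depend on the input.
import zlib
import brotli

html = (b"<div class='article'><p>Chrome will request Brotli-encoded "
        b"responses with the Accept-Encoding: br header.</p></div>") * 200

deflated = zlib.compress(html, 9)              # DEFLATE at maximum effort
brotlied = brotli.compress(html, quality=11)   # Brotli at maximum quality

print(f"original: {len(html)} bytes")
print(f"deflate:  {len(deflated)} bytes")
print(f"brotli:   {len(brotlied)} bytes "
      f"({(1 - len(brotlied) / len(deflated)) * 100:.1f}% smaller than deflate)")
```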
This totally makes sense to me. Whenever you’re monitoring patterns for collective reasoning or predictive analysis, such as measuring what voters applaud or respond positively to, and then building an entire campaign strategy and speeches around it, AI is your go-to solution. AI is a must-have tool for politicians who plan strategically to win future elections. No more need for a campaign manager like Karl Rove.
Political speeches are often written for politicians by trusted aides and confidantes. Could an AI algorithm do as well?
Ask the average passerby on the street to describe artificial intelligence and you’re apt to get answers like C-3PO and Apple’s Siri. But for those who follow AI developments on a regular basis and swim just below the surface of the broad field, the idea that the foreseeable AI future might be driven more by Big Data than by big discoveries is probably not a huge surprise. In a recent interview with data scientist and entrepreneur Eyal Amir, we discussed how companies are using AI to connect the dots between data and innovation.
Image credit: Startup Leadership Program Chicago
According to Amir, the ability to connect disparate pieces of big data has quietly become a strong force in a number of industries. In advertising, for example, companies can now tease apart data to discern the basics of who you are, what you’re doing, and where you’re going, and tailor ads to you based on that information.
“What we need to understand is that, most of the time, the data is not actually available out there in the way we think that it is. So, for example I don’t know if a user is a man or woman. I don’t know what amounts of money she’s making every year. I don’t know where she’s working,” said Eyal. “There are a bunch of pieces of data out there, but they are all suggestive. (But) we can connect the dots and say, ‘she’s likely working in banking based on her contacts and friends.’ It’s big machines that are crunching this.”
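As a toy illustration of that kind of inference (not Amir’s actual system), the sketch below scores how likely a user is to work in banking from indirect, “suggestive” signals; the features, data, and labels are invented for the example, and scikit-learn is assumed to be installed.

```python
# Toy illustration of "connecting the dots": predicting a likely industry from
# indirect signals rather than explicit profile data. All features and values
# here are invented for the example; this is not Amir's system.
from sklearn.linear_model import LogisticRegression

# Each row: [fraction of contacts in finance, fraction of check-ins downtown,
#            fraction of activity during business hours]
X = [
    [0.70, 0.80, 0.90],   # likely works in banking
    [0.65, 0.75, 0.85],
    [0.05, 0.20, 0.40],   # likely does not
    [0.10, 0.30, 0.35],
]
y = [1, 1, 0, 0]          # 1 = works in banking, 0 = other

model = LogisticRegression().fit(X, y)

# Probability that a new user "likely works in banking"
print(model.predict_proba([[0.60, 0.70, 0.80]])[0][1])
```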
Amir used the example of image recognition to illustrate how AI is connecting the dots to make inferences and facilitate commerce. Many computer programs can now detect the image of a man on a horse in a photograph. Yet many of them miss the fact that the image shows not an actual man on a horse but a statue of one. This lack of precision in analyzing broad data is part of what’s keeping autonomous cars on the curb until the use of AI in commerce advances.
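A minimal image-classification sketch along these lines, assuming torchvision 0.13 or later and a hypothetical local photo named man_on_horse.jpg: a pretrained ImageNet classifier will readily return horse- and rider-related labels, but nothing in its output distinguishes a statue from the real thing, which is the harder inference Amir is pointing at.

```python
# Classify a photo with a pretrained ImageNet model (torchvision >= 0.13 assumed).
# "man_on_horse.jpg" is a placeholder for any local image; the top-5 labels
# will not tell you whether the subject is a statue or a living rider.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("man_on_horse.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

top5 = logits.softmax(dim=1).topk(5)
for prob, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][int(idx)]}: {prob.item():.2%}")
```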
“You can connect the dots enough that you can create new applications, such as knowing where there is a parking spot available in the street. It doesn’t make financial sense to put sensors everywhere, so making those connections between a bunch of data sources leads to precise enough information that people are actually able to use,” Amir said. “Think about, ‘How long is the line at my coffee place down the street right now?’ or ‘Does this store have the shirt that I’m looking for?’ The information is not out there, but most companies don’t have a lot of incentive to put it out there for third parties. But there will be the ability to…infer a lot of that information.”
This greater ability to connect data and deliver more precise information through applications will come when everybody chooses to pool their information, said Amir. While he expects a fair bit of resistance to that concept, Amir predicts that there will ultimately be enough players working together to infer and share information; pooling may deliver more benefit in aggregate than any individual company, which might not have the same incentives to share, could achieve on its own.
As more data is collected and analyzed, another trend Amir sees on the horizon is more autonomy being given to computers. Far from the dire predictions of runaway computers ruling the world, he sees a ‘supervised’ autonomy in which computers have the ability to perform tasks using knowledge that is out of reach for humans. Of course, this means developing a sense of trust and allowing the computer to make more choices for us.
“The same way that we would let our TiVo record things that are of interest to us, it would still record what we want, but maybe it would record some extras. The same goes with (re-stocking) my groceries every week,” he said. “There is this trend of ‘Internet of Things,’ which brings together information about the contents of your refrigerator, for example. Then your favorite grocery store would deliver what you need without you having to spend an extra hour (shopping) every week.”
On the other hand, Amir does have some potential concerns about the future of artificial intelligence, comparable to those voiced by Elon Musk and others. Yet he emphasizes that it’s not just the technology we should be concerned about.
“At the end, this will be AI controlled by market forces. I think the real risk is not the technology, but the combination of technology and market forces. That, together, poses some threats,” Amir said. “I don’t think that the computers themselves, in the foreseeable future, will terminate us because they want to. But they may terminate us because the hackers wanted to.”
“[H]alf the people in the world cram into just 1 percent of the Earth’s surface (in yellow), and the other half sprawl across the remaining 99 percent (in black).”
Deep Learning in Action | A talk by Juergen Schmidhuber, PhD, at the Deep Learning in Action talk series in October 2015. He is a professor of computer science at the Dalle Molle Institute for Artificial Intelligence Research, part of the University of Applied Sciences and Arts of Southern Switzerland.
Juergen Schmidhuber, PhD | I review three decades of our research on both gradient-based and more general problem solvers that search the space of algorithms running on general-purpose computers with internal memory.
Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our algorithm searchers are based on algorithmic information theory and are optimal in asymptotic or other senses.
“You like your Tesla, but does your Tesla like you?” My new story for TechCrunch on robots understanding beauty and even whether they like your appearance or not:
Robots are starting to appear everywhere: driving cars, cooking dinners, and even serving as robotic pets.
But people don’t usually give machine intelligence much credence when it comes to judging beauty. That may change with the launch of the world’s first international beauty contest judged exclusively by a robot jury.
The contest, which requires participants to take selfies via a special app and submit them to the contest website, is touting sophisticated facial recognition algorithms that, it claims, allow machines to judge beauty in new and improved ways.
Better cameras, along with more powerful algorithms for computer vision and emotion-sensing facial analysis software, could transform the way we interact with our devices.
A researcher at Singapore’s Nanyang Technological University (NTU) has developed a new technology that provides real-time detection, analysis, and optimization data that could potentially save a company 10 percent on its energy bill and lessen its carbon footprint. The technology is an algorithm that primarily relies on data from ubiquitous devices to better analyze energy use. The software uses data from computers, servers, air conditioners, and industrial machinery to monitor temperature, data traffic, and processing workload. Data from these already-present appliances are then combined with information from externally placed sensors that primarily monitor ambient temperature, in order to analyze energy consumption and identify more efficient ways to save energy and cost.
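The article does not publish Wen’s algorithm, so the sketch below only illustrates the general idea it describes: fusing telemetry from devices that are already present with readings from external ambient-temperature sensors to flag wasteful operation. Every field name and threshold here is invented for the example.

```python
# Illustrative only: combine telemetry from existing devices (CPU load, inlet
# temperature) with external ambient-temperature sensors to flag machines that
# draw cooling power while doing little useful work. Fields and thresholds are
# invented; this is not the published NTU algorithm.
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    cpu_load: float        # 0.0 - 1.0, from the device itself
    inlet_temp_c: float    # temperature measured at the device
    ambient_temp_c: float  # temperature from a nearby external sensor

def flag_inefficient(readings, load_floor=0.15, delta_temp=8.0):
    """Return devices that are nearly idle yet running much hotter than the room,
    i.e. candidates for consolidation or a higher cooling set-point."""
    return [r.device_id for r in readings
            if r.cpu_load < load_floor
            and (r.inlet_temp_c - r.ambient_temp_c) > delta_temp]

readings = [
    Reading("rack1-srv03", cpu_load=0.05, inlet_temp_c=31.0, ambient_temp_c=21.5),
    Reading("rack1-srv07", cpu_load=0.62, inlet_temp_c=29.0, ambient_temp_c=21.5),
]
print(flag_inefficient(readings))   # ['rack1-srv03']
```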
The energy-saving computer algorithm was developed by NTU’s Wen Yonggang, an assistant professor at the School of Computer Engineering’s Division of Networks & Distributed Systems. Wen specializes in machine-to-machine communication and computer networking, including looking at social media networks, cloud-computing platforms, and big data systems.
Most data centers consume huge amounts of electrical power, leading to high levels of energy waste, according to Wen’s website. Part of his research involves finding ways to reduce energy waste and stabilize power systems by scaling energy levels temporally and spatially.
In 2010, a Canadian company called D-Wave announced that it had begun production of what it called the world’s first commercial quantum computer, which was based on theoretical work done at MIT. Quantum computers promise to solve some problems significantly faster than classical computers—and in at least one case, exponentially faster. In 2013, a consortium including Google and NASA bought one of D-Wave’s machines.
Over the years, critics have argued that it’s unclear whether the D-Wave machine is actually harnessing quantum phenomena to perform its calculations, and if it is, whether it offers any advantages over classical computers. But this week, a group of Google researchers released a paper claiming that in their experiments, a quantum algorithm running on their D-Wave machine was 100 million times faster than a comparable classical algorithm.
Scott Aaronson, an associate professor of electrical engineering and computer science at MIT, has been following the D-Wave story for years. MIT News asked him to help make sense of the Google researchers’ new paper.