
In a wide-ranging interview at the WSJ Tech Live conference that touched on topics like the future of remote work, AI innovation, employee activism and even misinformation on YouTube, Alphabet CEO Sundar Pichai also shared his thoughts on the state of tech innovation in the U.S. and the need for new regulations. Specifically, Pichai argued for the creation of a federal privacy standard in the U.S., similar to the GDPR in Europe. He also suggested it was important for the U.S. to stay ahead in areas like AI, quantum computing and cybersecurity, particularly as China’s tech ecosystem further separates itself from Western markets.

In recent months, China has been undergoing a tech crackdown, which has included a number of new regulations designed to combat tech monopolies, limit customer data collection and create new rules around data security, among other things. Although many major U.S. tech companies, Google included, don’t provide their core services in China, some who did are now exiting — like Microsoft, which just this month announced its plan to pull LinkedIn from the Chinese market.

Pichai said this sort of decoupling of Western tech from China may become more common.

The reason AI exists is to make our lives simpler, actions faster, knowledge more usable and decision-making more assured. On these counts, AI has done a fine job and continues to do so in both personal and professional contexts. Despite this, you will come across countless concerns about the ‘ethical issues’ posed by the technology. Pay closer attention, and you will realize that most of these issues stem from human negligence or ignorance.

It goes without saying that the relationship between AI and human rights can only be as good as we humans enable it to be. AI-powered systems act on the basis of how competently they have been built and trained, so executing those two tasks ethically goes a long way toward ensuring that AI tools and applications do not violate human rights.

Even swarms of self-replicating robots.

If alien civilizations exist, they may have opened a Pandora’s box.

It may sound far-fetched, but self-replicating probes from an alien civilization could become a serious nuisance to budding societies like ours. While this is pure speculation, we have an ace in the hole: China’s new massive radio telescope might be capable of detecting swarms of alien probes, also called von Neumann probes, at relatively vast distances from our sun, according to a recent study shared on a preprint server.

We are living in a time when we can see what needs to be done, but the industrial legacy of the last century still holds enormous power, politically and in the media, and has vast money at its disposal. Its investors have too much to lose to walk away, so they throw good money after bad in a desperate attempt to save their stranded assets.

Well, the next decade will bring new technologies that will rupture the business models of the old guard, tipping the balance on their huge economies of scale and quickly disintegrating their advantage before consigning them to history. These new ways of doing things will be better for us and the environment, and cheaper than ever before. Just look at how the internet and the smartphone destroyed everything from cameras to video shops to taxis and the high street itself.

The rest is not far behind, and it all holds the opportunity to mend the damage we have done.

If you want to know more about what lies ahead, check out this video.

LONDON, Oct 20 (Reuters) — Executives, beware! You could become your own worst enemy.

CEOs and other managers are increasingly under the microscope as some investors use artificial intelligence to learn and analyse their language patterns and tone, opening up a new frontier of opportunities to slip up.

In late 2020, according to language-pattern software specialist Evan Schnidman, some executives in the IT industry were playing down the possibility of semiconductor chip shortages while discussing supply-chain disruptions.
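Tools like these are proprietary, but the basic idea can be illustrated with a deliberately simple sketch: score a passage of executive remarks for hedging language by counting words from a small lexicon. Everything here (the hedge-word list, the hedge_score function and the sample remark) is a hypothetical illustration rather than Schnidman's software or any real investor tool; production systems rely on trained language models and richer signals such as tone of voice.

```python
# Toy, illustrative sketch of lexicon-based tone scoring (not any real investor tool):
# measure how much hedging language appears in a passage of executive remarks.
import re
from collections import Counter

# Hypothetical hedge-word lexicon; a real system would learn these patterns from data.
HEDGE_WORDS = {
    "may", "might", "could", "possibly", "somewhat", "uncertain",
    "headwinds", "challenging", "cautious", "believe", "hopefully",
}

def hedge_score(text: str) -> float:
    """Return the share of tokens that are hedging words (0.0 to 1.0)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hedges = sum(counts[w] for w in HEDGE_WORDS)
    return hedges / len(tokens)

# Invented sample remark from a hypothetical earnings call.
remark = ("We believe supply-chain disruptions are manageable, though there "
          "could be somewhat challenging headwinds and demand is uncertain.")
print(f"hedge score: {hedge_score(remark):.2%}")
```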

Every time a human or machine learns how to get better at a task, a trail of evidence is left behind. A sequence of physical changes — to cells in a brain or to numerical values in an algorithm — underlies the improved performance. But figuring out exactly which changes to make is no small feat. It’s called the credit assignment problem, in which a brain or artificial intelligence system must pinpoint which pieces in its pipeline are responsible for errors and then make the necessary changes. Put more simply: it’s a blame game to find who’s at fault.

AI engineers solved the credit assignment problem for machines with a powerful algorithm called backpropagation, popularized in 1986 with the work of Geoffrey Hinton, David Rumelhart and Ronald Williams. It’s now the workhorse that powers learning in the most successful AI systems, known as deep neural networks, which have hidden layers of artificial “neurons” between their input and output layers. And now, in a paper published in Nature Neuroscience in May, scientists may finally have found an equivalent for living brains that could work in real time.
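The backpropagation idea itself is compact enough to sketch. In the toy example below, the XOR task, the layer sizes and the learning rate are all illustrative choices, not anything from the Nature Neuroscience paper; it trains a one-hidden-layer network with NumPy and shows how the output error is passed backwards so that each weight receives its share of the blame.

```python
# Minimal sketch of backpropagation on a tiny one-hidden-layer network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of artificial "neurons" between input and output.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output
    loss = np.mean((out - y) ** 2)

    # Backward pass: propagate the error backwards so every weight
    # receives a share of the blame for the output error.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)                # error assigned to each hidden unit

    # Gradient descent update on each parameter using its assigned credit.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("final loss:", round(loss, 4))
print("predictions:", out.round(3).ravel())
```

Note that the backward pass reuses the same weights as the forward pass to route the error signal, which is part of why neuroscientists have long doubted that real neurons could run the algorithm literally.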

A team of researchers led by Richard Naud of the University of Ottawa and Blake Richards of McGill University and the Mila AI Institute in Quebec revealed a new model of the brain’s learning algorithm that can mimic the backpropagation process. It appears so realistic that experimental neuroscientists have taken notice and are now interested in studying real neurons to find out whether the brain is actually doing it.

It’s actually about a company called Varda Space Industries.


After SpaceX signed a deal with Varda Space Industries, Elon Musk’s next move could be to revolutionize manufacturing in space. Stay tuned for the latest SpaceX news and subscribe to Futurity.
