
Boeing has hired a former SpaceX and Tesla executive with autonomous technology experience to lead its software development team.

Effective immediately, Jinnah Hosein is Boeing’s vice-president of software engineering, a new position that includes oversight of “software engineering across the enterprise”, Boeing says.

“Hosein will lead a new, centralised organisation of engineers who currently support the development and delivery of software embedded in Boeing’s products and services,” the Chicago-based airframer says. “The team will also integrate other functional teams to ensure engineering excellence throughout the product life cycle.”

Another argument for government to bring AI into its quantum computing program is the fact that the United States is a world leader in the development of computer intelligence. Congress is close to passing the AI in Government Act, which would encourage all federal agencies to identify areas where artificial intelligences could be deployed. And government partners like Google are making some amazing strides in AI, even creating a computer intelligence that can easily pass a Turing test over the phone by seeming like a normal human, no matter who it’s talking with. It would probably be relatively easy for Google to merge some of its AI development with its quantum efforts.

The other aspect that makes merging quantum computing with AI so interesting is that the AI could probably help to reduce some of the so-called noise in the quantum results. I’ve always said that the way forward for quantum computing right now is pairing a quantum machine with a traditional supercomputer. The quantum computer would return results like it always does, with the correct outcome muddled in among a lot of wrong answers, and then humans would program a traditional supercomputer to help eliminate the erroneous results. The problem with that approach is that it’s fairly labor intensive, and you still have the bottleneck of running results through a normal computing infrastructure. It would be a lot faster than giving the entire problem to the supercomputer, because you are only fact-checking a limited number of results pared down by the quantum machine, but it would still have to work on each of them one at a time.

But imagine if we could simply train an AI to look at the data coming from the quantum machine and figure out, without human intervention, what makes sense and what is probably wrong. If that AI were driven by a quantum computer too, the results could be returned without any hardware-based delays. And if we also employed machine learning, the AI could get better over time: the more problems fed to it, the more accurate it would get.
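To make the idea concrete, here is a toy sketch in Python, not a real quantum workflow: a simulated noisy device returns a distribution of measurement bitstrings in which the correct answer is buried among random wrong ones, and a simple post-processing rule (a fixed frequency threshold standing in for the learned filter the author imagines) discards the unlikely results. All names, parameters, and the 4-bit example are made up for illustration.

```python
import random

def sample_noisy_counts(correct="1011", n_shots=1000, p_correct=0.4,
                        n_bits=4, seed=0):
    """Toy stand-in for a noisy quantum device: the correct bitstring
    shows up most often, muddled in with random wrong answers."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_shots):
        if rng.random() < p_correct:
            s = correct
        else:
            s = "".join(rng.choice("01") for _ in range(n_bits))
        counts[s] = counts.get(s, 0) + 1
    return counts

def filter_counts(counts, keep_fraction=0.05):
    """Crude 'denoising' step: drop any bitstring whose frequency is
    below a threshold. A trained model would replace this fixed rule."""
    total = sum(counts.values())
    return {s: c for s, c in counts.items() if c / total >= keep_fraction}

noisy = sample_noisy_counts()
cleaned = filter_counts(noisy)
best = max(cleaned, key=cleaned.get)  # surviving top candidate
```

The point of the sketch is only the shape of the pipeline: raw noisy counts in, a much smaller candidate set out, with no human in the loop for the filtering step.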

Looks like inventory robots won’t be replacing humans at Walmart for now. 😃

I’m a bit sad for the supplier of the robots. But I’m glad that people will keep their jobs at Walmart.


Bitter Reality

Unfortunately, the news was devastating for Bossa Nova, the robotics firm that provided Walmart with its inventory robots. The firm, a Carnegie Mellon University-born startup, laid off half of its staff as it tries to drum up replacement business.

“We see an improvement in stores with the robots,” Walmart told Bossa Nova, according to a person familiar with the deal who spoke to the WSJ, “but we don’t see enough of an improvement.”

Military observers said the disruptive technologies – those that fundamentally change the status quo – might include such things as sixth-generation fighters, high-energy weapons such as lasers and railguns, quantum radar and communications systems, new stealth materials, autonomous combat robots, orbital spacecraft, and biological technologies such as prosthetics and powered exoskeletons.


Speeding up the development of ‘strategic forward-looking disruptive technologies’ is a focus of the country’s latest five-year plan.

EPFL engineers have developed a computer chip that combines two functions—logic operations and data storage—into a single architecture, paving the way to more efficient devices. Their technology is particularly promising for applications relying on artificial intelligence.

It’s a major breakthrough in the field of electronics. Engineers at EPFL’s Laboratory of Nanoscale Electronics and Structures (LANES) have developed a next-generation circuit that allows for smaller, faster and more energy-efficient devices, which would have major benefits for artificial-intelligence systems. Their technology is the first to use a 2-D material for what’s called logic-in-memory: a single architecture that combines logic operations with a memory function. The research team’s findings appear today in Nature.

Until now, the energy efficiency of electronic devices has been limited by the von Neumann architecture they currently use, in which data processing and data storage take place in two separate units. That means data must constantly be transferred between the two units, using up a considerable amount of time and energy.
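The cost of that constant shuttling can be shown with back-of-the-envelope arithmetic. The sketch below is a toy cost model, not EPFL's measurements; the nanosecond figures are invented purely to illustrate why removing the per-operation transfer term matters.

```python
def von_neumann_time(n_ops, compute_ns=1.0, transfer_ns=10.0):
    """Separate logic and memory units: every operation pays a data
    transfer between the two units on top of the compute itself."""
    return n_ops * (compute_ns + transfer_ns)

def logic_in_memory_time(n_ops, compute_ns=1.0):
    """Logic-in-memory: the operand already sits where the logic is,
    so the per-operation transfer term disappears."""
    return n_ops * compute_ns

# With these made-up numbers, a million operations take 11 ms under
# the von Neumann model but only 1 ms when the transfer term is gone.
slow = von_neumann_time(1_000_000)      # 11,000,000 ns
fast = logic_in_memory_time(1_000_000)  #  1,000,000 ns
```

In this toy model the transfer term dominates whenever moving a datum costs more than operating on it, which is exactly the regime the article describes.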

The Float is a concept car by Yunchen Chai. It won the design competition hosted by Renault and Central Saint Martins. The participants of the competition had to design a car that emphasized electric power, autonomous driving, and connected technology.

The car uses maglev technology, is non-directional, and uses a magnetic belt to attach multiple pods together. The Float would even come with an app. This could be the future of car design.


Blizzard president J. Allen Brack said the system has dramatically reduced toxic chat and repeat offenses.


In April 2019, Blizzard shared some insights into how it was using machine learning to combat abusive chat in games like Overwatch. It’s a very complicated process, obviously, but it appears to be working out: Blizzard president J. Allen Brack said in a new Fireside Chat video that it has resulted in an “incredible decrease” in toxic behavior.

“Part of having a good game experience is finding ways to ensure that all are welcome within the worlds, no matter their background or identity,” Brack says in the video. “Something we’ve spoken about publicly a little bit in the past is our machine learning system that helps us verify player reports around offensive behavior and offensive language.”
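Blizzard hasn't published how its system works, but the general idea of machine-verified reports can be sketched with a toy classifier. The snippet below is a hypothetical illustration only: a naive-Bayes-style word scorer trained on two tiny made-up corpora, which flags a reported message when it looks more like the toxic sample than the clean one. None of the data or thresholds reflect Blizzard's actual system.

```python
import math
from collections import Counter

# Tiny made-up training samples; a real system would learn from
# millions of verified player reports.
TOXIC = ["you are trash uninstall", "worst player ever reported",
         "trash team trash healer"]
CLEAN = ["good game well played", "nice shot team",
         "thanks for the heals"]

def word_probs(lines):
    """Relative frequency of each word in a corpus."""
    words = Counter(w for line in lines for w in line.split())
    total = sum(words.values())
    return {w: c / total for w, c in words.items()}

def toxicity_score(message, eps=1e-3):
    """Naive-Bayes-style log-odds: positive means the message looks
    more like the toxic corpus than the clean one. Unseen words get
    a small floor probability eps."""
    p_t, p_c = word_probs(TOXIC), word_probs(CLEAN)
    return sum(math.log(p_t.get(w, eps)) - math.log(p_c.get(w, eps))
               for w in message.lower().split())

def verify_report(message, threshold=0.0):
    """Auto-verify a player report if the score clears the threshold."""
    return toxicity_score(message) > threshold
```

A production system would of course use far richer models and context, but the pipeline shape is the same: a player report comes in, the model scores the flagged chat, and confirmed cases feed the penalty system without a human reading every report.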

Artificial intelligence helps scientists make discoveries, but not everyone can understand how it reaches its conclusions. One UMaine computer scientist is developing deep neural networks that explain their findings in ways users can comprehend, applying his work to biology, medicine and other fields.

Interpretable machine learning, or AI that creates explanations for the findings it reaches, defines the focus of Chaofan Chen’s research. The assistant professor of computer science says interpretable machine learning also allows AI to make comparisons among images and predictions from data, and at the same time, elaborate on its reasoning.

Scientists can use interpretable machine learning for a variety of applications, from identifying birds in images for wildlife surveys to analyzing mammograms.
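Chen's published work includes prototype-based models in the "this looks like that" style, and that flavor of interpretability can be illustrated with a toy example. The sketch below is hypothetical: the prototype names, labels, and 3-number "feature vectors" are invented. The model classifies an input by its distance to labeled prototypes and, crucially, reports which prototype drove the decision, which is the explanation a user sees.

```python
import math

# Made-up prototypes: (feature vector, class label). In a real
# prototype network these would be learned image patches.
PROTOTYPES = {
    "sparrow_wing":   ([0.9, 0.1, 0.3], "sparrow"),
    "sparrow_beak":   ([0.8, 0.2, 0.4], "sparrow"),
    "cardinal_crest": ([0.1, 0.9, 0.7], "cardinal"),
}

def explain_prediction(features):
    """Return (predicted_label, nearest_prototype_name, distance).
    The prototype name is the human-readable 'because it looks like
    this' part of the answer."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, (proto, label) = min(
        PROTOTYPES.items(), key=lambda kv: dist(features, kv[1][0])
    )
    return label, name, dist(features, proto)

label, why, d = explain_prediction([0.9, 0.1, 0.35])
# -> classified as "sparrow" because it is closest to "sparrow_wing"
```

The contrast with a black-box classifier is that the output is not just a label but a comparison a biologist or radiologist can inspect: "this region resembles that learned prototype."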

How do you *feel* about that?


Much of today’s discussion around the future of artificial intelligence is focused on the possibility of achieving artificial general intelligence: essentially, an AI capable of tackling an array of random tasks and working out how to approach a new task on its own, much like a human. But at this stage in the game, the discussion around this kind of intelligence seems less about if and more about when. With the advent of neural networks and deep learning, the sky really is the limit, at least once other areas of technology overcome their remaining obstacles.

For deep learning to successfully support general intelligence, it’s going to need the ability to access and store much more information than any individual system currently does. It’s also going to need to process that information more quickly than current technology will allow. If these things can catch up with the advancements in neural networks and deep learning, we might end up with an intelligence capable of solving some major world problems. Of course, we will still need to spoon-feed it since it only has access to the digital world, for the most part.

If we desire an AGI that can gather its own information, there are a few more advancements in technology that only time can deliver. In addition to the increased volume of information and processing speed, before any AI will be much use as an automaton, it will need to possess fine motor skills. An AGI with control of its own faculties can move around the world and take in information through its various sensors. However, this is another case of just waiting; it’s another question of when, not if, these technologies will catch up to the others. Google has successfully experimented with fine motor skills technology, and Boston Dynamics has dog-like robots with stable motor skills that will only improve in the coming years. Who says our AGI automaton needs to stand erect?