
This March, we, a group of educators, scientists, and psychologists, started an educational non-profit, the Earthlings Hub (a 501(c)(3)), helping kids in refugee camps and evacuated orphanages. We are getting many requests for help and urgently need to raise funds. If you have any connections to educational or humanitarian charities, or if your university or company may be interested in providing financial support to our program, we would greatly appreciate it! Please share this with anyone who might be able to offer help or advice.

Our advisory board includes NASA astronaut Greg Chamitoff, Professor Uri Wilensky, early math educator Maria Droujkova, AI visionary Joscha Bach, and others.


Support Us: The Earthlings Hub works with a fiscal sponsor, Blue Marble Space. Donations are accepted by credit card and PayPal. Please contact us if you would like to give via other means, such as checks, stocks, cryptocurrency, or a Donor Advised Fund: [email protected]

“We’ve spent a lot of time in education to help people understand that just like an automobile extends your capabilities in the physical domain, artificial intelligence extends your abilities within the data domain and the information domain,” the general said Wednesday.

AI and its traces can be found across the Pentagon and its many enclaves and alcoves. The department has recognized its value for years, describing the tech in a 2018 strategy as rapidly changing businesses, industries, and military threats. More can be done, Groen said.

“Implementation in the department, of course, is always a challenge, as new technology meets legacy processes, legacy organizations and legacy technology,” he said, later adding: “We believe that a lot of the rules have to change, a lot of the thought processes have been rendered obsolete, and, maybe, the cores of how our organizational processes work have to be reevaluated through the lens of artificial intelligence and data.”

A new training approach yields artificial intelligence that adapts to diverse play-styles in a cooperative game, in what could be a win for human-AI teaming.

As artificial intelligence gets better at performing tasks once solely in the hands of humans, like driving cars, many see teaming intelligence as a next frontier. In this future, humans and AI are true partners in high-stakes jobs, such as performing complex surgery or defending from missiles. But before teaming intelligence can take off, researchers must overcome a problem that corrodes cooperation: humans often do not like or trust their AI partners.

Now, new research points to diversity as a key parameter for making AI a better team player.

The speed of operations leaves manual inspectors just seconds to decide whether a product is actually defective.

That’s where Microsoft’s Project Brainwave could come in. Project Brainwave is a hardware architecture designed to accelerate real-time AI calculations. It is deployed on a type of computer chip from Intel called a field-programmable gate array, or FPGA, to perform real-time AI calculations at competitive cost and with what Microsoft says is the industry’s lowest latency, or lag time, based on internal performance measurements and comparisons with other organizations’ publicly posted information.

At Microsoft’s Build developers conference in Seattle this week, the company is announcing a preview of Project Brainwave integrated with Azure Machine Learning, which the company says will make Azure the most efficient cloud computing platform for AI.

For instance, continuous-variable (CV) QKD has distinct advantages at metropolitan distances36,37 because it uses common components of coherent optical communication technology. In addition, the homodyne38 or heterodyne39 measurements used by CV-QKD have inherently strong spectral filtering capabilities, which allow crosstalk in wavelength-division multiplexing (WDM) channels to be effectively suppressed. Hundreds of QKD channels may therefore be integrated into a single optical fiber and co-transmitted with classical data channels, which allows QKD channels to be integrated more effectively into existing communication networks. In CV-QKD, discrete modulation has attracted much attention31,40,41,42,43,44,45,46,47,48,49,50 because it reduces the requirements on modulation devices. However, owing to the lack of symmetry, the security proof of discrete-modulation CV-QKD still relies mainly on numerical methods43,44,45,46,47,48,51.

Unfortunately, calculating a secure key rate by numerical methods requires minimizing a convex function over all eavesdropping attacks consistent with the experimental data52,53. The efficiency of this optimization depends on the number of parameters of the QKD protocol. For example, in discrete-modulation CV-QKD the number of parameters is generally 1000–3000, depending on the choice of cutoff photon number44, so the corresponding optimization can take minutes or even hours51. It is therefore especially important to develop tools for calculating the key rate that are more efficient than numerical methods.
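As a rough illustration of why iterative convex minimization over many parameters is slow (a deliberately simplified sketch: the quadratic objective below is invented and nothing like the actual constrained security analysis, which optimizes over density-matrix parameters), every gradient step must touch every parameter:

```python
# Toy sketch only: stands in for the convex minimization behind numerical
# key-rate bounds. The objective f(x) = sum((x_i - b_i)^2) is invented;
# real protocols minimize a far costlier function over ~1000-3000 parameters.
import random

def minimize(dim, steps=200, lr=0.01, seed=0):
    rng = random.Random(seed)
    b = [rng.random() for _ in range(dim)]   # stand-in for experimental constraints
    x = [0.0] * dim                          # initial guess
    for _ in range(steps):
        # gradient of (x_i - b_i)^2 is 2*(x_i - b_i); plain gradient descent
        x = [xi - lr * 2.0 * (xi - bi) for xi, bi in zip(x, b)]
    return sum((xi - bi) ** 2 for xi, bi in zip(x, b))

start = sum(bi ** 2 for bi in [random.Random(0).random() for _ in range(1000)])
residual = minimize(1000)   # 1000 parameters, like the low end quoted above
```

Even in this trivial case the cost is (parameters × iterations); the real objective is vastly more expensive per step, which is where the minutes-to-hours come from.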

In this work, we take homodyne-detection discrete-modulated CV-QKD44 as an example and construct a neural network capable of predicting the secure key rate, with the aim of saving time and resources. We apply our neural network to a test set obtained at different excess noises and distances, and observe excellent accuracy and time savings after adjusting the hyperparameters. Importantly, the predicted key rates are highly likely to be secure. Note that our method is versatile and can be extended to quickly calculate the complex secure key rates of various other unstructured quantum key distribution protocols. Through open-source deep learning frameworks for on-device inference, such as TensorFlow Lite54, our model can also be easily deployed on edge devices such as mobile devices, embedded Linux, or microcontrollers.
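The idea can be caricatured in a few dozen lines. This is a toy stand-in, not the authors' model: the "key rates" below come from an invented linear formula, whereas the real network is trained on key rates precomputed by the slow numerical method; the architecture, features, and normalizations here are all assumptions for illustration.

```python
# Toy sketch: fit a tiny MLP to map (distance, excess noise) -> log key rate,
# so later queries are a single cheap forward pass. All data is synthetic.
import math, random

rng = random.Random(1)

def fake_log_rate(dist_km, noise):
    # Invented stand-in for precomputed key rates (linear fall-off).
    return -2.0 * (dist_km / 100.0) - 1.0 * (noise / 0.05)

data = [((d / 100.0, n / 0.05), fake_log_rate(d, n))
        for d in range(0, 101, 10) for n in (0.01, 0.02, 0.03, 0.04, 0.05)]

H = 8  # hidden units
W1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [rng.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2, h

def train(epochs=3000, lr=0.05):
    global b2
    n = len(data)
    for _ in range(epochs):
        gW1 = [[0.0, 0.0] for _ in range(H)]
        gb1, gW2, gb2 = [0.0] * H, [0.0] * H, 0.0
        for x, y in data:
            yhat, h = forward(x)
            d = 2.0 * (yhat - y) / n                 # dMSE/dyhat
            gb2 += d
            for j in range(H):
                gW2[j] += d * h[j]
                dpre = d * W2[j] * (1.0 - h[j] ** 2)  # back through tanh
                gb1[j] += dpre
                gW1[j][0] += dpre * x[0]
                gW1[j][1] += dpre * x[1]
        for j in range(H):
            W2[j] -= lr * gW2[j]
            b1[j] -= lr * gb1[j]
            W1[j][0] -= lr * gW1[j][0]
            W1[j][1] -= lr * gW1[j][1]
        b2 -= lr * gb2

train()
mse = sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)
```

The training cost is paid once, offline; afterwards each query is one forward pass, versus re-running the full optimizer for every new (distance, noise) point.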



Amazon and Max Planck Society (also known as Max-Planck-Gesellschaft or MPG) today announced the formation of a Science Hub. The collaboration marks the first Amazon Science Hub to exist outside the United States and will focus on advancing artificial intelligence research and development throughout Germany.

The hub’s goal is to advance the frontiers of AI, computer vision, and machine learning research to ensure that research is creating solutions whose benefits are shared broadly across all sectors of society. To achieve that end, the collaboration will include sponsored research; open research; industrial fellowships co-supervised by Max Planck and Amazon; and community events funding to enrich the MPG and Amazon research communities.

The hub opens doors to further scientific collaboration with Max Planck Institutes (MPI), including the MPI for Intelligent Systems, the MPI for Software Systems, the MPI for Informatics, and the MPI for Biological Cybernetics.

(2022). Journal of Experimental & Theoretical Artificial Intelligence. Ahead of Print.


AI has for decades attempted to code commonsense concepts, e.g., in knowledge bases, but struggled to generalise the coded concepts to all the situations a human would naturally generalise them to, and struggled to understand the natural and obvious consequences of what it has been told. This led to brittle systems that did not cope well with situations beyond what their designers envisaged. John McCarthy (1968) said ‘a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows’; that is a problem that has still not been solved. Dreifus (1998) estimated that ‘Common sense is knowing maybe 30 or 50 million things about the world and having them represented so that when something happens, you can make analogies with others’. Minsky presciently noted that common sense would require the capability to make analogical matches between knowledge and events in the world, and furthermore that a special representation of knowledge would be required to facilitate those analogies. We can see the importance of analogies for common sense in the way that basic concepts are borrowed, e.g., the tail of an animal, or the tail of a capital ‘Q’, or the tail-end of a temporally extended event (see also examples of ‘contain’, ‘on’, in Sec. 5.3.1). More than this, for known facts, such as ‘a string can pull but not push an object’, an AI system needs to automatically deduce (by analogy) that a cloth, sheet, or ribbon, can behave analogously to the string. For the fact ‘a stone can break a window’, the system must deduce that any similarly heavy and hard object is likely to break any similarly fragile material. Using the language of Sec. 5.2.1, each of these known facts needs to be treated as a schema,14 and then applied by analogy to new cases.
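A minimal sketch of that last point (all property names, objects, and structures below are invented for illustration, not taken from any cited system): a fact like "a string can pull but not push" is stored as a schema over properties rather than attached to the word "string", so it transfers automatically to any relevantly similar object.

```python
# Toy sketch: schemas keyed on properties, not object names, so known facts
# transfer by analogy to unfamiliar objects. All names are invented.
SCHEMAS = [
    {"requires": {"flexible", "bears-tension"},
     "conclusion": "can pull but not push an object"},
    {"requires": {"hard", "heavy"},
     "conclusion": "can break a fragile object on impact"},
]

OBJECTS = {
    "string":  {"flexible", "bears-tension"},
    "cloth":   {"flexible", "bears-tension"},
    "stone":   {"hard", "heavy"},
    "hammer":  {"hard", "heavy"},
    "balloon": {"flexible"},
}

def deduce(obj):
    """Return every conclusion whose schema's required properties hold."""
    props = OBJECTS[obj]
    return [s["conclusion"] for s in SCHEMAS if s["requires"] <= props]
```

Here the fact learned about strings applies unchanged to cloth, and the stone/window fact to any hard, heavy object; a balloon, being flexible but not tension-bearing, matches neither schema.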

Projection is a mechanism that can find analogies (see Sec. 5.3.1) and hence could bridge the gap between models of commonsense concepts (i.e., not the entangled knowledge in word embeddings learnt from language corpora) and text or visual or sensorimotor input. To facilitate this, concepts should be represented by hierarchical compositional models, with higher levels describing relations among elements in the lower-level components (for reasons discussed in Sec. 6.1). There needs to be an explicit symbolic handle on these subcomponents; i.e., they cannot be entangled in a complex network. For visual object recognition, a concept can simply be a set of spatial relations among component features, but higher concepts require a complex model involving multiple types of relations, partial physics theories, and causality. Secs. 5.2 and 5.3 give a hint of what these concepts may look like, but a full example requires a further paper.
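For the visual case, the simplest reading of "a set of spatial relations among component features" can be sketched as follows. This is a toy with invented part and relation names, and it assumes the hard part (binding scene parts to concept roles) has already been done; it only shows the explicit symbolic handles on subcomponents and the relation check.

```python
# Toy sketch of a two-level compositional concept: named parts plus required
# spatial relations among them. Part and relation names are invented.
CONCEPT_T = {
    "parts": {"crossbar": "horizontal-stroke", "stem": "vertical-stroke"},
    "relations": [("crossbar", "on-top-of", "stem"),
                  ("crossbar", "centered-on", "stem")],
}

def matches(concept, scene_parts, scene_relations):
    """scene_parts: a candidate binding {role: part type}; scene_relations:
    the set of (part, relation, part) triples observed in the scene."""
    for role, kind in concept["parts"].items():
        if scene_parts.get(role) != kind:
            return False
    return all(r in scene_relations for r in concept["relations"])

scene = {"crossbar": "horizontal-stroke", "stem": "vertical-stroke"}
rels = {("crossbar", "on-top-of", "stem"), ("crossbar", "centered-on", "stem")}
```

Because each subcomponent has an explicit name, higher-level concepts can refer to it directly, which is exactly what an entangled distributed representation does not allow.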

Moving beyond the recognition of individual concepts, a complete cognitive system needs to represent and simulate what is happening in a situation, based on some input, e.g., text or vision. This means instantiating concepts in some workspace to flesh out relevant details of a scenario. Sometimes very little data is available for part of a scenario, and it must be imagined. For example, suppose some machine in a wooden casing moves smoothly across a surface, but the viewer cannot see what mechanism is on the underside; the viewer may conjecture that it rolls on wheels, and if it gets stuck, may imagine a wheel hitting a small stone. This type of imagination is another projection: assuming a prior model of a wheeled vehicle is available, its parts can be projected to positions in the simulation (parts unseen in the actual scenario). Similarly for a wheel hitting a stone: a schema abstracted from a previously experienced episode of such an occurrence can serve as a model. Simulation and projection must work together to imagine scenarios, because an unfolding simulation may trigger new projections. If the simulation is of something happening in the present, then sensor data can enter to constrain the possibilities for the simulation. The importance of analogy for this kind of reasoning in a human-level cognitive agent has also been recognised by other AI researchers (K. D. Forbus & Hinrichs, 2006; Forbus et al., 2008).
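The wheeled-machine example can be caricatured as follows (a toy: the model library, part names, and behaviour labels are all invented, and real projection would involve positions, physics, and graded matching rather than set operations): observed parts and behaviour select a compatible prior model, and projection copies the model's unseen parts into the workspace as conjectures.

```python
# Toy sketch of projection: a prior model is selected by what was observed,
# and its unobserved parts are conjectured into the simulation.
PRIOR_MODELS = {
    "wheeled-vehicle": {"parts": {"casing", "wheels", "axle"},
                        "explains": {"moves-smoothly"}},
    "rocket-sled":     {"parts": {"casing", "thruster"},
                        "explains": {"moves-fast"}},
}

def project(observed_parts, observed_behaviour):
    """Select a prior model whose parts cover the observation and which
    explains the behaviour; return its unseen parts as conjectures."""
    for name, m in PRIOR_MODELS.items():
        if observed_parts <= m["parts"] and observed_behaviour in m["explains"]:
            return name, m["parts"] - observed_parts
    return None, set()

model, imagined = project({"casing"}, "moves-smoothly")
```

The conjectured parts (here, wheels and an axle) can then seed further simulation, e.g., imagining a wheel hitting a stone when the machine gets stuck.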