Archive for the ‘information science’ category: Page 156

Sep 21, 2020

Neuroscience study finds ‘hidden’ thoughts in visual part of brain

Posted in categories: information science, neuroscience

How much control do you have over your thoughts? What if you were specifically told not to think of something—like a pink elephant?

A recent study led by UNSW psychologists has mapped what happens in the brain when a person tries to suppress a thought. The neuroscientists managed to ‘decode’ the complex brain activity using functional magnetic resonance imaging (fMRI) and an imaging algorithm.

The findings suggest that even when a person succeeds in ignoring a thought, like the pink elephant, it can still exist in another part of the brain—without them being aware of it.
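The decoding step in studies like this is typically a pattern classifier trained on voxel activity. Below is a minimal, hypothetical sketch of that general approach (multivoxel pattern classification on synthetic data); it is not the UNSW team’s actual pipeline.

```python
# Minimal sketch of fMRI "thought decoding" via multivoxel pattern
# classification. This illustrates the general technique only, not the
# UNSW study's actual pipeline; the data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500
# Synthetic voxel activity: two thought conditions (e.g. suppressed vs.
# freely imagined) with a weak signal buried in noise.
labels = rng.integers(0, 2, n_trials)
signal = np.outer(labels, rng.normal(0, 0.3, n_voxels))
X = signal + rng.normal(0, 1.0, (n_trials, n_voxels))

# If the classifier predicts the condition above chance, the voxel
# patterns still carry information about the "suppressed" thought.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```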

Sep 20, 2020

Using Machine Learning to Convert Your Image to Vaporwave or Other Artistic Styles

Posted in categories: information science, robotics/AI

TL;DR: This article walks through the mechanism of a popular machine learning algorithm called neural style transfer (NST), which is able…
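For a flavor of the mechanism, here is a minimal sketch of NST’s core optimization loop, assuming PyTorch and torchvision’s pretrained VGG19. The layer choices and loss weights are illustrative defaults, not the article’s exact recipe.

```python
# Minimal sketch of neural style transfer: optimize an image so its VGG
# features match a content image while its Gram matrices match a style
# image. Layer indices and weights are illustrative choices.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(1, 6, 11, 20, 29)):
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):
    # Gram matrix of feature maps captures style (texture statistics).
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = torch.rand(1, 3, 256, 256)   # stand-ins for real images
style = torch.rand(1, 3, 256, 256)
image = content.clone().requires_grad_(True)

opt = torch.optim.Adam([image], lr=0.02)
c_feats = [f.detach() for f in features(content)]
s_grams = [gram(f).detach() for f in features(style)]

for step in range(200):
    opt.zero_grad()
    feats = features(image)
    content_loss = torch.nn.functional.mse_loss(feats[-1], c_feats[-1])
    style_loss = sum(torch.nn.functional.mse_loss(gram(f), g)
                     for f, g in zip(feats, s_grams))
    (content_loss + 1e3 * style_loss).backward()
    opt.step()
```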

Sep 18, 2020

NASA to test precision automated landing system designed for the moon and Mars on upcoming Blue Origin mission

Posted in categories: information science, robotics/AI, space travel

NASA is going to test a new precision landing system, designed for use on the tough terrain of the moon and Mars, for the first time during an upcoming mission of Blue Origin’s New Shepard reusable suborbital rocket. The “Safe and Precise Landing – Integrated Capabilities Evolution” (SPLICE) system is made up of a number of lasers, an optical camera and a computer that processes all the sensor data using advanced algorithms. It works by spotting potential hazards and adjusting landing parameters on the fly to ensure a safe touchdown.

SPLICE will get a real-world test of three of its four primary subsystems during a New Shepard mission to be flown relatively soon. The Jeff Bezos-founded company typically returns its first-stage booster to Earth after making its trip to the very edge of space, but on this test of SPLICE, NASA’s automated landing technology will be operating on board the vehicle the same way it would when approaching the surface of the moon or Mars. The elements tested will include “terrain relative navigation,” Doppler radar and SPLICE’s descent and landing computer, while a fourth major system — lidar-based hazard detection — will be tested on future planned flights.

NASA already uses automated landing for its robotic exploration craft on the surfaces of other planets, including the Perseverance rover headed to Mars. But a lot of work goes into selecting a landing zone with a large area of unobstructed ground, free of any potential hazards, to ensure a safe touchdown. Existing systems can make some adjustments, but they’re relatively limited in that regard.
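To make the hazard-detection idea concrete, here is a toy, hypothetical sketch of scanning an elevation map for the flattest, smoothest landing patch. SPLICE’s real algorithms fuse lidar, radar and camera data in ways not described in this post; nothing below is NASA’s code.

```python
# Toy illustration of hazard-aware landing-site selection: scan a terrain
# elevation grid for the smoothest, flattest patch. Purely conceptual.
import numpy as np

rng = np.random.default_rng(42)
terrain = rng.normal(0, 0.5, (100, 100))   # elevation map (meters)
terrain[30:40, 55:70] += 3.0               # a boulder field

def hazard_score(patch):
    # High local slope or roughness means an unsafe touchdown zone.
    gy, gx = np.gradient(patch)
    slope = np.hypot(gx, gy).max()
    roughness = patch.std()
    return slope + roughness

size = 10  # candidate landing pad, in grid cells
best, best_site = np.inf, None
for i in range(0, 100 - size):
    for j in range(0, 100 - size):
        s = hazard_score(terrain[i:i + size, j:j + size])
        if s < best:
            best, best_site = s, (i, j)

print(f"safest {size}x{size} site at {best_site}, hazard score {best:.2f}")
```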

Sep 14, 2020

Playing with Realistic Neural Talking Head Models

Posted in categories: information science, robotics/AI

Researchers at the Samsung AI Center in Moscow have presented interesting work called “living portraits”: they brought the Mona Lisa and other subjects of photos and paintings to life using video of real people. They introduced a framework for meta-learning of adversarial generative models called “Few-Shot Adversarial Learning”.

You can read more about the details in the original paper.

Here we review a great PyTorch implementation of the algorithm by Vincent Thévenin, a researcher at the De Vinci Innovation Center.
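As a rough, hypothetical skeleton of the few-shot idea (not Thévenin’s implementation): an embedder averages the embeddings of a handful of reference frames into a single identity vector, which then conditions a generator driven by the target pose’s facial landmarks. The modules below are drastically simplified placeholders.

```python
# Skeleton of the few-shot conditioning idea. Architectures here are
# drastically simplified placeholders, not the paper's networks.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, frames):                 # (K, 3, H, W)
        return self.net(frames).mean(dim=0)    # average over the K shots

class Generator(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.fc = nn.Linear(dim + 68 * 2, 3 * 64 * 64)

    def forward(self, identity, landmarks):
        # Identity embedding + target pose landmarks -> synthesized frame.
        x = torch.cat([identity, landmarks.flatten()])
        return self.fc(x).view(3, 64, 64).tanh()

embedder, generator = Embedder(), Generator()
shots = torch.rand(8, 3, 64, 64)   # 8 reference frames ("few shots")
pose = torch.rand(68, 2)           # facial landmarks of the target pose
frame = generator(embedder(shots), pose)
print(frame.shape)                 # torch.Size([3, 64, 64])
```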

Sep 14, 2020

C-MIMI: Use of AI in radiology is evolving

Posted in categories: biotech/medical, information science, robotics/AI

September 14, 2020 — The use of artificial intelligence (AI) in radiology to aid in image interpretation tasks is evolving, but many of the old factors and concepts from the computer-aided detection (CAD) era still remain, according to a Sunday talk at the Conference on Machine Intelligence in Medical Imaging (C-MIMI).

A lot has changed as the new era of AI has emerged, such as faster computers, larger image datasets, and more advanced algorithms — including deep learning. Another thing that’s changed is the realization of additional reasons and means to incorporate AI into clinical practice, according to Maryellen Giger, PhD, of the University of Chicago. What’s more, AI is also being developed for a broader range of clinical questions, more imaging modalities, and more diseases, she said.

At the same time, many of the issues are the same as those faced in the era of CAD. There are the same clinical tasks of detection, diagnosis, and response assessment, as well as the same concern of “garbage in, garbage out,” she said. What’s more, there’s the same potential for off-label use of the software, and the same methods for statistical evaluations.

Sep 11, 2020

More laser power allows faster production of ultra-precise polymeric parts across 12 orders of magnitude

Posted in categories: 3D printing, information science, nanotechnology

A high-power laser, an optimized optical pathway, a patented adaptive-resolution technology, and smart algorithms for laser scanning have enabled UpNano, a Vienna-based high-tech company, to achieve high-resolution 3D printing as never seen before.

“Parts with nano- and microscale features can now be printed across 12 orders of magnitude—within times never achieved previously. This has been accomplished by UpNano, a spin-out of the TU Wien, which developed a high-end two-photon polymerization (2PP) 3D-printing system that can produce polymeric parts with a volume ranging from 10⁰ to 10¹² cubic micrometers. At the same time the printer allows for nano- and microscale resolution,” the company said in a statement.

Recently the company demonstrated this remarkable capability by printing four models of the Eiffel Tower ranging from 200 micrometers to 4 centimeters, with perfect representation of all minuscule structures, within 30 to 540 minutes. With this, 2PP 3D-printing is ready for applications in R&D and industry that previously seemed impossible.

Sep 10, 2020

The World’s First Living Machines

Posted in categories: biotech/medical, information science, robotics/AI

Teeny-tiny living robots made their world debut earlier this year. These microscopic organisms are composed entirely of frog stem cells, and, thanks to a special computer algorithm, they can take on different shapes and perform simple functions: crawling, traveling in circles, moving small objects — or even joining with other organic bots to collectively perform tasks.


The world’s first living robots may one day clean up our oceans.
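The “special computer algorithm” behind these shapes is an evolutionary search: candidate cell layouts are scored in a physics simulator and the best designs are mutated to seed the next generation. Here is a toy, hypothetical sketch of that loop, with a stand-in fitness function in place of a real simulator.

```python
# Toy sketch of an evolutionary design loop: propose body layouts, score
# them, keep and mutate the fittest. The real work used a voxel-based
# physics simulator; this fitness function is only a stand-in.
import numpy as np

rng = np.random.default_rng(7)

def fitness(body):
    # Placeholder for a physics simulation that would measure how far a
    # candidate layout of (passive, contractile) cells can crawl.
    contractile = body.sum()
    return contractile * (1 - abs(contractile / body.size - 0.5))

def mutate(body, rate=0.05):
    flips = rng.random(body.shape) < rate
    return np.where(flips, 1 - body, body)

population = [rng.integers(0, 2, (8, 8)) for _ in range(20)]
for generation in range(50):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]   # keep the fittest designs
    population = parents + [mutate(p) for p in parents for _ in range(3)]

print("best fitness:", fitness(max(population, key=fitness)))
```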

Sep 10, 2020

Joscha Bach — GPT-3: Is AI Deepfaking Understanding?

Posted in categories: existential risks, information science, mathematics, media & arts, particle physics, quantum physics, robotics/AI, singularity

On GPT-3, achieving AGI, machine understanding and lots more… Will GPT-3 or an equivalent be used to deepfake human understanding?


Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more
02:40 What’s missing in AI atm? Unified coherent model of reality
04:14 AI systems like GPT-3 behave as if they understand — what’s missing?
08:35 Symbol grounding — does GPT-3 have it?
09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
11:13 GPT-3 temperature parameter. Strange output? (see the temperature-sampling sketch after this outline)
13:09 GPT-3 a powerful tool for idea generation
14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
16:32 Increasing GPT-3 input context may have a high impact
16:59 Identifying grammatical structure & language
19:46 What is the GPT-3 transformer network doing?
21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
24:07 GPT-3 can’t write a good novel
25:09 GPT-3 needs to become sensitive to multi-modal sense data — video, audio, text etc
26:00 GPT-3 a universal chat-bot — conversations with God & Johann Wolfgang von Goethe
30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
38:06 Deep-faking understanding
40:06 The metaphor of the Golem applied to civilization
42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
44:32 GPT-3 babbling at the level of non-experts
45:14 Our civilization lacks sentience — it can’t plan ahead
46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters?
47:24 GPT-3: scaling up a simple idea. Clever hacks to formulate the inputs
47:41 Google GShard with 600 billion parameters — Amazon may be doing something similar — future experiments
49:12 Ideal grounding in machines
51:13 We live inside a story we generate about the world — no reason why GPT-3 can’t be extended to do this
52:56 Tracking the real world
54:51 MicroPsi
57:25 What is computationalism? What is its relationship to mathematics?
59:30 Stateless systems vs step-by-step computation — Gödel, Turing, the halting problem & the notion of truth
1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
1:03:54 Infinities can’t describe a consistent reality without contradictions
1:06:04 Stevan Harnad’s understanding of computation
1:08:32 Causation / answering ‘why’ questions
1:11:12 Causation through brute forcing correlation
1:13:22 Deep learning vs shallow learning
1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain — would it wake up?
1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
1:19:56 Software/OS as spirit — spiritualism vs superstition. Empirically informed spiritualism
1:23:53 Can we build AI that shares our purposes?
1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
1:31:29 Intelligent design
1:33:09 Category learning & categorical perception: Models — parameters constrain each other
1:35:06 Surprise minimization & hidden states; abstraction & continuous features — predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
1:37:29 ‘Category’ is a useful concept — gradients are often hard to compute — so compressing away gradients to focus on signals (categories) when needed
1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
1:44:10 Are the g factor & understanding two sides of the same coin? What is intelligence?
1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
1:47:47 Solving the Turing test: asking the AI to explain intelligence. If response is an intelligible & testable implementation plan then it passes?
1:49:18 The term ‘general intelligence’ inherits its essence from behavioral psychology; a behaviorist black box approach to measuring capability
1:52:15 How we perceive color — natural synesthesia & induced synesthesia
1:56:37 The g factor vs understanding
1:59:24 Understanding as a mechanism to achieve goals
2:01:42 The end of science?
2:03:54 Exciting, currently untestable theories/ideas (that may be testable by science once we develop precise enough instruments). Can fundamental physics be solved by computational physics?
2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level of the universe?
2:10:05 The Fermi paradox
2:12:19 Existence, death and identity construction.
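As a footnote to the temperature discussion at 11:13: sampling temperature divides the logits before the softmax, so low temperatures sharpen the distribution (predictable output) and high temperatures flatten it (stranger output). A minimal sketch with made-up logits and vocabulary:

```python
# Minimal sketch of sampling with a temperature parameter. Logits are
# divided by T before the softmax: low T sharpens the distribution,
# high T flattens it and yields stranger samples. Logits and vocabulary
# here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature):
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())   # softmax, numerically stabilized
    p /= p.sum()
    return rng.choice(len(p), p=p), p

vocab = ["cat", "dog", "teapot", "galaxy"]
logits = [2.0, 1.5, 0.2, -1.0]
for t in (0.2, 1.0, 2.0):
    idx, p = sample(logits, t)
    print(f"T={t}: probs={np.round(p, 2)} -> sampled {vocab[idx]!r}")
```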

Sep 9, 2020

Blockchain-powered ‘Smart Brain’ to govern China’s new ‘Aerospace City’

Posted in categories: bitcoin, governance, government, information science, robotics/AI, satellites

Singapore-based blockchain data firm CyberVein has become one of 12 firms participating in the construction of China’s Hainan Wenchang International Aerospace City. Construction commenced last month, with the site previously hosting a satellite launch center. Described as “China’s first aerospace cultural and tourism city,” it will be a hub for the development of aerospace products and support services intended for use in Chinese spacecraft and satellite launch missions.

The 12-million-square-meter facility will host the country’s first aerospace super-computing center, and will focus on developing 40 technological areas including big data, satellite remote sensing and high precision positioning technology. CyberVein will work alongside major Chinese firms, including Fortune 500 companies Huawei and Kingsoft Cloud, and will leverage its blockchain, artificial intelligence and big data technologies to support the development of the city’s Smart Brain Planning and Design Institute.


Blockchain firm CyberVein is partnering with the Chinese government to build a blockchain-powered governance system for its aerospace ‘smart city.’


Sep 9, 2020

Researchers design system to visualize objects through clouds and fog

Posted in categories: biotech/medical, information science, robotics/AI

Like a comic book come to life, researchers at Stanford University have developed a kind of X-ray vision—only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.
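As a generic, hypothetical illustration of the underlying idea (recovering a hidden signal once the scattering blur is modeled), here is a one-dimensional Wiener-deconvolution toy. The Stanford system actually inverts a physical model of photon scattering over time-of-flight measurements; none of its specifics appear below.

```python
# Generic 1D toy: undo a known scattering blur by regularized (Wiener)
# deconvolution. This only shows how a hidden signal can be recovered
# once the blur is modeled; it is not the Stanford algorithm.
import numpy as np

rng = np.random.default_rng(1)
n = 256
hidden = np.zeros(n)
hidden[[60, 61, 130]] = 1.0                  # hidden scene (two objects)

t = np.arange(n)
blur = np.exp(-((t - n // 2) ** 2) / 200.0)  # stand-in for foam scattering
blur /= blur.sum()

# Measurement = hidden scene convolved with the blur, plus photon noise.
H = np.fft.fft(np.fft.ifftshift(blur))
measured = np.real(np.fft.ifft(np.fft.fft(hidden) * H))
measured += rng.normal(0, 0.01, n)

# Wiener filter: regularized inverse of the blur's frequency response.
wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-3)
recovered = np.real(np.fft.ifft(np.fft.fft(measured) * wiener))
print("strongest recovered positions:", np.argsort(recovered)[-3:])
```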

“A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”

This technique complements other vision systems that can see through barriers on the microscopic scale—for applications in medicine—because it’s more focused on large-scale situations, such as navigating self-driving cars in fog or heavy rain and satellite imaging of the surface of Earth and other planets through hazy atmospheres.