robotics – Lifeboat News: The Blog https://lifeboat.com/blog Safeguarding Humanity

Gender and Smart Learning Technologies https://lifeboat.com/blog/2020/01/gender-and-smart-learning-technologies Mon, 13 Jan 2020 18:43:17 +0000 https://lifeboat.com/blog/?p=100779

How can we tackle gender imbalance in the personalities of AI learning tools?

The Gendering of AI

The expected growth in the use of artificial intelligence (AI) in learning applications is raising concerns about both the potential gendering of these tools and the risk that they will display the inherent biases of their developers. Why the concern? Well, to make it easier for us to integrate AI tools and chatbots into our lives, designers often give them human attributes. For example, applications and robots are often given a personality and a gender. Unfortunately, in many cases, gender stereotypes are being perpetuated. The types of roles robots are designed to perform usually reflect gendered overgeneralizations of feminine or masculine attributes.

Feminine personalities in AI tools such as chatbots and consumer devices like Amazon’s Alexa are often designed to have sympathetic features and perform tasks related to caregiving, assistance, or service. Many of these applications have been created to work as personal assistants, in customer service, or in teaching. Examples include Emma the floor-cleaning robot and Apple’s Siri, the personal iPhone assistant. Conversely, male robots are usually designed as strong, intelligent, and able to perform “dirty jobs”. They typically work in analytical roles, logistics, and security. Examples include Ross the legal researcher, Stan the robotic parking valet, and Leo the airport luggage porter.

Gendering of technology is problematic because it perpetuates the stereotypes and struggles present in society today. It can also help reinforce the inequality of opportunities between genders. These stereotypes aren’t beneficial for either males or females, as they can limit a person’s possibilities and polarize personalities with artificial boundaries.

Response Strategies

We propose four strategies to help tackle this issue at different stages of the problem:

  • Mix it up – Developers of AI learning solutions can experiment with allocating different genders and personality traits to their tools.
  • Gender-based testing – New tools can be tested on different audiences to assess the impact of, say, a quantum mechanics teaching aid with a female voice but a quite masculine persona.
  • Incentives for women in technology – By the time people reach the developer stage, the biases may already have set in. So, given the likely growth in demand for AI-based applications in learning and other domains, organizations and universities could sponsor women to undertake technology degrees and qualifications that emphasize a more gender-balanced approach across all that they do, from the make-up of faculty to the language used.
  • Gender-neutral schooling – The challenge here is to provide gender-neutral experiences from the start, as the early experiences offered to children usually perpetuate stereotypes. How many opportunities do boys have to play with dolls at school without being bullied? Teachers’ interactions are crucial in role modeling and addressing “appropriate” or “inappropriate” behavior. For example, some studies show that teachers give boys more opportunities to expand ideas orally and reward them more for doing so than girls. Conversely, girls can be punished more severely for the use of bad language.

A version of this article originally appeared in Training Journal.

Image: https://pixabay.com/images/id-3950719/ by John Hain


Author Bios

The authors are futurists with Fast Future — a professional foresight firm specializing in delivering keynote speeches, executive education, research, and consulting on the emerging future and the impacts of change for global clients. Fast Future publishes books from leading future thinkers around the world, exploring how developments such as AI, robotics, exponential technologies, and disruptive thinking could impact individuals, societies, businesses, and governments and create the trillion-dollar sectors of the future. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com

Rohit Talwar is a global futurist, award-winning keynote speaker, author, and the CEO of Fast Future. His prime focus is on helping clients understand and shape the emerging future by putting people at the center of the agenda. Rohit is the co-author of Designing Your Future, lead editor and a contributing author for The Future of Business, and editor of Technology vs. Humanity. He is a co-editor and contributor for the recently published Beyond Genuine Stupidity – Ensuring AI Serves Humanity and The Future Reinvented – Reimagining Life, Society, and Business, and two forthcoming books — Unleashing Human Potential – The Future of AI in Business, and 50:50 – Scenarios for the Next 50 Years.

Helena Calle is a researcher at Fast Future.  She is a recent graduate from the MSc. program in Educational Neuroscience at Birkbeck, University of London, and has eight years of international experience as a teacher, teacher trainer, pedagogic coordinator, and education consultant. Helena coordinates Fast Future’s research on the future of learning.



Tilly Lockey, the “Real Alita” / Bionic Girl on ideaXme https://lifeboat.com/blog/2019/09/tilly-lockey-the-real-alita-bionic-girl-on-ideaxme Mon, 02 Sep 2019 11:24:13 +0000 https://lifeboat.com/blog/?p=95675

Fembots vs. HAL: Who are the people of AI? https://lifeboat.com/blog/2019/05/fembots-vs-hal-who-are-the-people-of-ai Fri, 17 May 2019 16:00:34 +0000 https://lifeboat.com/blog/?p=90840

From Watson to Sophia, who are the artificially intelligent robot personas of today, and what can they tell us about our future?

Siri.  Alexa.  Cortana.  These familiar names are the modern-day Girl Fridays making everyone’s life easier.  These virtual assistants powered by artificial intelligence (AI) bring to life the digital tools of the information age.  One of the subtle strategies designers use to make it easier for us to integrate AI into our lives is “anthropomorphism” – the attribution of human-like traits to non-human objects.  However, the rise of AI with distinct personalities, voices, and physical forms is not as benign as it might seem. As futurists who are interested in the impacts of technology on society, we wonder what role human-like technologies play in achieving human-centred futures.

For example, do anthropomorphized machines enable a future wherein humanity can thrive?  Or, do human-like AIs foreshadow a darker prognosis, particularly in relation to gender roles and work?  This article looks at a continuum of human-like personas that give a face to AI technology.  We ask: what does it mean for our collective future that technology is increasingly human-like and gendered?  And, what does it tell us about our capacity to create a very human future?

The Women of AI

One of the most important observations we want to convey is that the typical consumer-facing AI persona is highly feminine and feminized.  There are several robots and AI that take a female form.  The examples below show the sheer breadth of applications where a feminine persona and voice are deliberately used to help us feel comfortable with increasingly invasive technology:

  • Emma:  Brain Corp’s autonomous floor cleaner Emma (Enabling Mobile Machine Automation) is no chatty fembot.  She is designed to clean large spaces like schools and hospitals. Currently, Emma is being piloted at various Wal-Mart locations, where the human cleaning crew is being asked to embrace a robot-supporting role – even though the robot may ultimately replace some of them.  Emma washes floors independently using AI, lidar (a light-based remote-sensing method), and smart sensors.
  • Alexa:  Amazon’s Alexa is the disembodied feminine AI that lives inside a smart device.  As a personal assistant, Alexa does it all.  There are versions of Alexa for hotels, some that act as your DJ, and those that provide medical advice.  There is another side to Alexa, however; one that secretly records your private conversations.  This is a great example of how companion AIs embody the surveillance of Big Brother with the compassion of Big Mother rolled into one.
  • Siri:  Like Alexa, Apple’s Siri is an AI-powered woman’s voice.  The iPhone assistant is helpful and direct.  You can find information, get where you need to go, and organize your schedule.  Lately, Siri is attempting to learn jokes and develop more of a natural rapport with users.  Can brushing up on social skills help virtual assistant AIs shed their reputation for being both nosy and dull?
  • Cara:  In the legal industry Casetext’s Cara (Case Analysis Research Assistant) is an algorithmic legal assistant that uses machine-learning to conduct research.  Cara is widely available to attorneys and judges, a great example of AI replacing professional jobs with a powerfully smart feminine figure.  With Cara, we have to wonder if there are too many outdated assumptions about gender involved—why is Cara a legal assistant, and not an attorney like Ross, the world’s first robot lawyer?
  • Kate:  This specialized travel robot from SITA is an AI mobile passenger check-in kiosk.  Kate uses big data related to airport passenger flow to move autonomously about the airport, going where she is most needed to reduce lines and wait times.  Kate, like many AI programs, uses big data predictively, perhaps displaying something similar to women’s intuition.
  • Sophia:  This humanoid robot from Hanson Robotics gained notoriety as the first robot to claim a form of citizenship.  Debuted in 2017, Sophia is a recognized citizen of Saudi Arabia and the first robot with legal personhood.  Sophia can carry on conversations and answer interesting questions.  But with her quirky personality and exaggerated female features, we would categorize Sophia as a great example of AI as hype over substance.
  • Ava:  As one of the newest female AIs, Autodesk’s Ava seems to take extreme feminization a step further.  A “digital human”, Ava is a beautiful and helpful AI chatbot avatar that can read people’s body language.  Ava is programmed to be emotionally expressive.  Her customer service job is to support engineering and architectural software product users in real time.  Being able to detect emotions puts Ava in an entirely new league of female virtual assistants. So do her looks:  Ava’s appearance is literally based on a stunning actress from New Zealand.

The Men of AI

What about the male personas?  Probably the most well-known AI is Watson, the IBM machine that matched its immense wits against human champions on the trivia game show Jeopardy!  Watson has also been used in cancer diagnosis and has a regular role in many more industries, including transportation, financial services, and education. When it comes to the masculine, it seems both brain and brawn are required.  In many cases, male robots do the literal heavy lifting.  Here are some examples of the jobs male-personified AIs do.

  • Botler:  A chatbot called Botler seems enlightened.  He provides legal information and services for immigrants and victims of sexual harassment.  Botler wears a smile and tuxedo with bowtie, appearing to be a helpful proto-butler-like gentleman.
  • Stan:  Stanley Robotics’ robotic valet Stan parks your car.  An autonomous forklift, Stan is able to strategically fill parking garages to capacity.  Does Stan reinforce gender-based stereotypes about cars and driving?
  • FRAnky:  At Frankfurt Airport you can meet FRAnky, a Facebook Messenger-based chatbot that can search for flights and give information about restaurants, shops and airport wifi service.
  • Leo:  Another travel pro, SITA’s Leo is a luggage-drop robot who prints a bag tag, checks your suitcase, then prints a baggage receipt.  The curbside helper is strong and smart.
  • Ross:  The world’s first robo-lawyer.  The phenomenal computational power Ross applies to legal research saves attorneys time and effort and reduces mistakes.  The proliferation of data is the main rationale for the rise of the robo-lawyer.  Human attorneys are expensive and slow when it comes to the drudge work of digging up information; proponents of Ross say the AI saves 20–30 hours of research time per case.
  • DaVinci:  Intuitive Surgical’s DaVinci surgical assistant is one of the most established names in the robotics field.  Named after the artist Leonardo da Vinci, this robot is reported to be cutting hospital stay times, improving patient outcomes, and reducing medical mistakes.  Like Ross, DaVinci suggests a future where even highly skilled professional roles could be at risk from robots, which could impact the large proportion of men in these jobs.

These examples raise the question of how much technology shapes reality.  The personal computer and the mobile phone, for instance, have had immeasurable impacts across society and changed everything from work and healthcare to politics and education.  Think about all the things that didn’t exist before the rise of the iPhone: texting while driving, selfies, dating apps, Uber – these are just some of the new normal.  The ways we work, live, and play have all been transformed by the rise of the information age.  Hence, as we scan the next horizon, there is a strong sense that AI will form the basis of the near-future evolution of society.

Overall, we find it interesting to ponder the human-like manifestations among AI companions.  A close look at the people of AI raises many questions:  What is the role of human intelligence in an AI world?  What will the relationship between robots and people be like in the workplace and in the home?  How might humanity be re-defined as more AI computers gain citizenship, emotional intelligence, and possibly even legal rights?  How can we avoid reinforcing unhealthy gender stereotypes through technology?  We don’t expect to get at the answers.  Rather, we use these questions to start meaningful conversations about how to construct a very human future.

About the Authors

The authors are futurists with Fast Future — a professional foresight firm specializing in delivering keynote speeches, executive education, research, and consulting on the emerging future and the impacts of change for global clients. Fast Future publishes books from leading future thinkers around the world, exploring how developments such as AI, robotics, exponential technologies, and disruptive thinking could impact individuals, societies, businesses, and governments and create the trillion-dollar sectors of the future. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com

Rohit Talwar is a global futurist, award-winning keynote speaker, author, and the CEO of Fast Future. His prime focus is on helping clients understand and shape the emerging future by putting people at the center of the agenda. Rohit is the co-author of Designing Your Future, lead editor and a contributing author for The Future of Business, and editor of Technology vs. Humanity. He is a co-editor and contributor for the recently published Beyond Genuine Stupidity – Ensuring AI Serves Humanity and The Future Reinvented – Reimagining Life, Society, and Business, and two forthcoming books — Unleashing Human Potential – The Future of AI in Business, and 50:50 – Scenarios for the Next 50 Years.

Steve Wells is an experienced strategist, keynote speaker, futures analyst, partnership working practitioner, and the COO of Fast Future. He has a particular interest in helping clients anticipate and respond to the disruptive bursts of technological possibility that are shaping the emerging future. Steve is a contributor to the recently published Beyond Genuine Stupidity – Ensuring AI Serves Humanity and The Future Reinvented – Reimagining Life, Society, and Business, co-editor of The Future of Business, and Technology vs. Humanity. He is a co-editor and contributor to two forthcoming books on Unleashing Human Potential – The Future of AI in Business, and 50:50 – Scenarios for the Next 50 Years.

Alexandra Whittington is a futurist, writer, foresight director of Fast Future, and a faculty member on the Futures program at the University of Houston. She has a particular expertise in future visioning and scenario planning. Alexandra is a contributor to The Future of Business, the recently published Beyond Genuine Stupidity – Ensuring AI Serves Humanity and The Future Reinvented – Reimagining Life, Society, and Business. She is also a co-editor and contributor for forthcoming books on Unleashing Human Potential – The Future of AI in Business, and 50:50 – Scenarios for the Next 50 Years.

Helena Calle is a researcher at Fast Future.  She is a recent graduate from the MSc. program in Educational Neuroscience at Birkbeck, University of London, and has eight years of international experience as a teacher, teacher trainer, pedagogic coordinator, and education consultant. Helena coordinates Fast Future’s growing research on the future of learning.

Ekaterina Bereziy, CEO of ExoAtlet, a Russian company developing medical exoskeletons to enable people to walk again — IdeaXme — Ira Pastor https://lifeboat.com/blog/2019/05/ekaterina-bereziy-ceo-of-exoatlet-a-russian-company-developing-medical-exoskeletons-to-enable-people-walk-again-ideaxme-ira-pastor Tue, 07 May 2019 10:50:23 +0000 https://lifeboat.com/blog/?p=90368

First Robotics- Who Are the Celebrities of the Future? https://lifeboat.com/blog/2015/01/first-robotics-who-are-the-celebrities-of-the-future-2 https://lifeboat.com/blog/2015/01/first-robotics-who-are-the-celebrities-of-the-future-2#comments Thu, 22 Jan 2015 20:30:23 +0000 http://lifeboat.com/blog/?p=12804

At the most basic level, the FIRST Robotics Competition, founded by inventor Dean Kamen, looks to the future by developing the next generation of the world’s engineers. Many FIRST students go on to work at influential technology companies or at future-oriented organizations such as NASA. This documentary on FIRST Robotics is the eighth main piece in our Galactic Public Archives series, in which we explore compelling visions of our future from influential individuals. So far, we’ve covered an interesting collection of viewpoints and topics regarding our possible future, ranging from the future of longevity, to the future of search, and even the future of democracy. FIRST seemed like a natural opportunity to explore another ‘puzzle-piece’ of what the future might look like. And of course, the competition features robots, which are an integral piece of any self-respecting utopian or dystopian future. What we did not realize as we started our exploration of the program was that FIRST is not attempting to be a humble building block towards the future. Although only time will tell to what degree it succeeds, it aspires to be a catalyst for much more far-reaching change.

In a society that praises competitive spirit in all the wrong ways, inventor Dean Kamen noticed fewer and fewer young people directing that spirit toward opportunities in math and science, aspiring instead to become celebrities or sports superstars. In response, he provided a way to make kids excited about changing the world through technology. Kamen’s endeavor, FIRST Robotics, offers teens a chance, in competition form, to use their skills and teamwork to bring a piece of machinery to life.

FIRST was modeled on the allure of professional sports leagues but without – hopefully – the dog-eat-dog spirit. David Lavery, FIRST Robotics mentor and NASA engineer, grew up during the Cold War, when competition through technology meant joining the race to the moon. An interesting aspect of FIRST’s philosophy is that, as much as it embraces competition, students are also forced to realize that their greatest competitor could – in the future – become one of their greatest collaborators. This generation may be bombarded with news about Kardashians rather than scientists, astronauts, and cosmonauts — but what FIRST aims to cultivate is a hunger to make a difference – made possible now more than ever by widespread access to information.

Directly and tangentially, the experiment of FIRST both tackles and raises an entire swath of deeper questions about our future. What values will our culture celebrate in the future? What will be the repercussions of the values we celebrate today? How much time do we have to solve some of the great challenges looming on the horizon? Will there be enough individuals with the skills required to tackle those problems? To what degree will the ‘fixes’ be technological vs. cultural? How will the longstanding ideological struggle of competition vs. cooperation evolve as the next generations take over? What is the future of education? What is the proper role of a teacher? A mentor? Where does cultural change come from? Where should it come from? It’s an impressive list of questions to be raised by a competition involving robots shooting frisbees. We hope you find it as compelling as we did.

FIRST Robotics- Who are the celebrities of the future? https://lifeboat.com/blog/2014/12/first-robotics-who-are-the-celebrities-of-the-future Thu, 04 Dec 2014 20:30:52 +0000 http://lifeboat.com/blog/?p=12782

“You get what you celebrate.” In 1989 Dean Kamen created FIRST Robotics to change the culture from one that idolizes entertainment celebrities and sports stars to one that celebrates scientists, engineers and visionaries. Over 20 years later, how much has changed?

Verne, Wells, and the Obvious Future Part 3 https://lifeboat.com/blog/2012/09/verne-wells-and-the-obvious-future-part-3 https://lifeboat.com/blog/2012/09/verne-wells-and-the-obvious-future-part-3#comments Sun, 02 Sep 2012 22:02:22 +0000 http://lifeboat.com/blog/?p=4763

A secret agent travels to a secret underground desert base, used to develop space weapons, to investigate a series of mysterious murders. The agent finds that a secret transmitter was built into the supercomputer that controls the base, and that a stealth plane flying overhead is controlling the computer and causing the deaths. The agent does battle with two powerful robots in the climax of the story.

Gog is a great story worthy of a sci-fi action epic today – and it was originally made in 1954. Why can’t they just remake these movies word for word and scene for scene, with as few changes as possible? The terrible job done on so many remade sci-fi classics is a real mystery. How can such great special effects and actors be used to murder a perfect story that had already been told well once? Amazing.

In contrast to Gog we have the fairly recent Stealth, released in 2005, which has the talent and the special effects – and probably the worst story ever conceived. An artificially intelligent fighter plane going off the reservation? The rip-off of HAL from 2001 is ridiculous.

Fantastic Voyage (1966) was a not-so-good story that succeeded in spite of stretching suspension of disbelief beyond the limit. It was a great movie, and it might succeed today if, instead of being miniaturized and injected into a human body, the submarine were exploring a giant organism under the ice of a moon in the outer solar system. Just an idea.

And then there is one of the great sci-fi movies of all time, if one can just forget the ending. The Abyss (1989) was truly a great film in that aquanauts and submarines were portrayed in an almost believable way.

From Wikipedia: The cast and crew endured over six months of grueling six-day, 70-hour weeks on an isolated set. At one point, Mary Elizabeth Mastrantonio had a physical and emotional breakdown on the set, and on another occasion, Ed Harris burst into spontaneous sobbing while driving home. Cameron himself admitted, “I knew this was going to be a hard shoot, but even I had no idea just how hard. I don’t ever want to go through this again.”

Again, The Abyss, like Fantastic Voyage, brings to mind those oceans under the icy surface of several moons in the outer solar system.

I recently watched Lockout, with Guy Pearce, and was as disappointed as I thought I would be. Great actors and expensive special effects just cannot make up for a bad story. When will they learn? It is sad to think they could have just remade Gog and had a hit.

The obvious futures represented by these different movies are worthy of consideration in that, even in 1954, the technology to come was being portrayed accurately. In 2005 we got a box office bomb that, as a waste of money, parallels the military-industrial complex and its too-good-to-be-true wonder weapons that rarely work as advertised. In Fantastic Voyage and The Abyss we see scenarios that point to space missions to the subsurface oceans of the outer-planet moons.

And in Lockout we find a prison in space where the prisoners are the victims of cryogenic experimentation and are going insane as a result. Being an advocate of cryopreservation for deep space travel, I found the story line… extremely disappointing.

Artilects Soon to Come https://lifeboat.com/blog/2012/08/artilects-soon-to-come Mon, 20 Aug 2012 02:53:42 +0000 http://lifeboat.com/blog/?p=4576 Whether via spintronics or some quantum breakthrough, artificial intelligence and the bizarre idea of intellects far greater than ours will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

The Electric Septic Spintronic Artilect https://lifeboat.com/blog/2012/08/the-electric-septic-spintronic-artilect Tue, 14 Aug 2012 00:17:44 +0000 http://lifeboat.com/blog/?p=4497 AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega-mind coming into existence within the next few decades. I am actually not intentionally trying to write anything bizarre – it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

The Nature of Identity Part 3 https://lifeboat.com/blog/2011/08/the-nature-of-identity-part-3 https://lifeboat.com/blog/2011/08/the-nature-of-identity-part-3#comments Sat, 20 Aug 2011 13:18:45 +0000 http://lifeboat.com/blog/?p=2044

The Nature of Identity Part 3
(Drawings not reproduced here — contact the author for copies)
We have seen how the identity is defined by the 0,0 point – the centroid or locus of perception.

The main problem we have is finding out how neural signals translate into sensory signals – how neural information is translated into the language we understand, that of perception. How does one neural pattern become Red and another the Scent of coffee? Neurons emit neither color nor scent.

As in physics, so in cognitive science, some long cherished theories and explanations are having to change.

Perception, and the concept of an Observer (the 0,0 point), are intimately related to the idea of Identity.

Many years ago I was a member of what was called the Artorga Research Group – a group including some of the early cyberneticists – who were focused on Artificial Organisms.

One of the main areas of concern was, of course, Memory.

One of our group was a young German engineer who suggested that perhaps memories were in fact re-synthesised in accordance with remembered rules, as opposed to storing huge amounts of data.

Since then similar ideas have arisen in such areas as computer graphics.

Here is an example.

It shows a simple picture on a computer screen. We want to store (memorize) this information.

One way is to store the information about each pixel on the screen – is it white or is it black? With a typical screen resolution, that could mean over 2.5 million bits of information.

But there is another way….

In this process one simply specifies the start point (A) in terms of its co-ordinates (300 Vertically, 100 Horizontally); and its end point (B) (600 Vertically, 800 Horizontally); and simply instructs – “Draw a line of thickness w between them”.

The whole picture is specified in just a few bits..

The first method, specifying the picture bit by bit – the bit-mapped approach used in .BMP files – uses up lots of memory space.

The other method, based on re-synthesising according to stored instructions, is used in some data reduction formats; and is, essentially, just what that young engineer suggested, many years before.
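The contrast between the two methods can be made concrete with a back-of-the-envelope sketch. The resolution and bit widths below are illustrative assumptions, not figures from the original:

```python
# Bit-mapped storage vs. instruction-based re-synthesis.
# Resolution and bit widths are illustrative assumptions.

WIDTH, HEIGHT = 1920, 1440        # a typical screen, one bit per pixel
bitmap_bits = WIDTH * HEIGHT      # store every pixel: is it white or black?

# Instruction method: "draw a line of thickness w from A to B".
A = (300, 100)                    # start point (vertical, horizontal)
B = (600, 800)                    # end point
COORD_BITS = 11                   # enough for coordinate values up to 2047
THICKNESS_BITS = 8
instruction_bits = 4 * COORD_BITS + THICKNESS_BITS

print(bitmap_bits)                # over 2.5 million bits
print(instruction_bits)           # just a few dozen bits
```

The ratio is roughly 50,000 to 1: storing the rule that re-synthesises the picture is vastly cheaper than storing the picture itself.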

On your computer you will have a screen saver – almost certainly a colorful scene – and of course that is stored, so that if you are away from the computer for a time it can automatically come on to replace what was showing, and in this way “save” your screen.

So – where are those colors in your screensaver stored, where are the shapes shown in it stored? Is there in the computer a Color Storage Place? Is there a Shape Storage Place?

Of course not.

Yet these are the sort of old, sodden concepts that are sometimes still applied in thinking about the brain and memories.

Patterned streams of binary bits, not unlike neural signals (but about 70 times larger), are fed to a computer screen. The screen then takes these patterns of bits as instructions to re-synthesise glowing colors and shapes.

We cannot actually perceive the binary signals, and so they are translated by the screen into a language that we can understand. The screen is a translator – that is its sole function.

This is exactly analogous to the point made earlier about perception and neural signals.

The main point here, though, is that what is stored in the computer memory are not colors and shapes but instructions.

And inherent in these instructions as a whole, there must exist a “map”.

Each instruction must not only tell its bit of the screen what color to glow – but it must also specify the co-ordinates of that bit. If the picture is the head of a black panther with green eyes, we don’t want to see a green head and black eyes. The map has to be right. It is important.

Looking at it in another way the map can be seen as a connectivity table – specifying what goes where. Just two different ways of describing the same thing.

As well as simple perception there are derivatives of what has been perceived that have to be taken into account, for example, the factor called movement.

Movement is not in itself perceptible (as we shall presently show); it is a computation.

Take for example, the following two pictures shown side-by-side.

I would like to suggest that one of these balls is moving. And to ask — which one is moving?

If movement had a visual attribute then one could see which one it was – but movement has no visual attributes – it is a computation.

To determine the speed of something, one has to observe its current position, compare that with the record (memory) of its previous position; check the clock to determine the interval between the two observations; and then divide the distance between the two positions, s; by the elapsed time, t; to determine the speed, v,

s/t = v.
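The computation above can be sketched directly; the positions (in metres) and timestamps (in seconds) are illustrative values:

```python
# Speed as a computation: observe two positions, note the elapsed
# time between the observations, and divide.

def speed(prev_pos, prev_time, curr_pos, curr_time):
    """v = s / t: distance between observations over elapsed time."""
    s = abs(curr_pos - prev_pos)   # distance between the two positions
    t = curr_time - prev_time      # interval between the two observations
    return s / t

# A ball seen at 2.0 m, then at 3.5 m forty milliseconds later:
v = speed(2.0, 0.000, 3.5, 0.040)
print(v)  # about 37.5 m/s
```

Nothing in either observation by itself contains the speed; it exists only in the comparison, which is the point being made about movement.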

This process is carried out automatically, (subconsciously), in more elaborate organisms by having two eyes spaced apart by a known distance and having light receptors – the retina – where each has a fast turn-on and a slow (about 40 ms) turn off, all followed by a bit of straightforward neural circuitry.

Because of this system, one can look at a TV screen and see someone in a position A, near the left hand edge, and then very rapidly, a series of other still pictures in which the person is seen being closer and closer to B, at the right hand edge.

If the stills are shown fast enough – more than 25 a second – then we will see the person walking across the screen from left to right. What you see is movement – except you don’t actually see anything extra on the screen. Being aware of movement as an aid to survival is very old in evolutionary terms. Even the incredibly old fish, the coelacanth, has two eyes.

The information provided is a derivative of the information provided by the receptors.

And now we ought to look at information in a more mathematical way – as in the concept of Information Space (I-space).

For those who are familiar with the term, it is a Hilbert Space.

Information Space is not “real” space – it is not distance space – it is not measurable in metres and centimetres.

As an example, consider Temperature Space. Take the temperature of the air going in to an air-conditioning (a/c) system; the temperature of the air coming out of the a/c system; and the temperature of the room. These three provide the three dimensions of a Temperature Space. Every point in that space correlates to an outside air temperature, an a/c output temperature and the temperature of the room. No distances are involved – just temperatures.

This is an illustration of what it would look like if we re-mapped it into a drawing.

The drawing shows the concept of a 3-dimensional Temperature Space (T-space). The darkly outlined loop is shown here as a way of indicating the “mapping” of a part of T-space.

But what we are interested in here is I-space. And I-space will have many more dimensions than T-space.

In I-space each location is a different item of information, and the fundamental rule of I-space – indeed of any Hilbert space – is,

Similarity equals Proximity.

This would mean that the region concerned with Taste, for example, would be close to the area concerned with Smell, since the two are closely related.

Pale Red would be closer to Medium Red than to Dark Red.
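The similarity-equals-proximity rule can be sketched with invented coordinates; here three shades of red are placed along a lightness dimension, with Euclidean distance standing in for dissimilarity:

```python
# "Similarity equals proximity": items as points in an information
# space, with Euclidean distance as dissimilarity. The coordinates
# are invented for illustration.
import math

def distance(p, q):
    """Euclidean distance between two points in the space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Three shades of red on a (hue, lightness) plane:
pale_red   = (1.0, 0.8)
medium_red = (1.0, 0.5)
dark_red   = (1.0, 0.2)

# Pale red sits closer to medium red than to dark red:
print(distance(pale_red, medium_red) < distance(pale_red, dark_red))  # True
```

The same rule would place the Taste region near the Smell region: related items occupy neighboring locations in I-space.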

Perception then would be a matter of connectivity.

An interconnected group we could refer to as a Composition or Feature.

Connect 4 legs & fur & tail & bark & the word dog & the sound of the word dog – and we have a familiar feature.

Features are patterns of interconnections; and it is these features that determine what a thing or person is seen as. What they are seen as is taken as their identity. It is the identity as seen from outside.
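One way to sketch a feature as a pattern of interconnections is as a set of linked attributes; the attribute names and the overlap-based matching rule below are illustrative assumptions, not a claim about how the brain does it:

```python
# A "feature" as a set of interconnected attributes that together
# determine what a thing is seen as. Names and the matching rule
# are illustrative assumptions.

features = {
    "dog": {"4 legs", "fur", "tail", "bark", "word dog", "sound of word dog"},
    "cat": {"4 legs", "fur", "tail", "meow", "whiskers"},
}

def identify(observed, features):
    """Return the feature whose attributes overlap most with what is observed."""
    return max(features, key=lambda name: len(features[name] & observed))

print(identify({"4 legs", "fur", "tail", "bark"}, features))  # dog
```

Seen from outside, the identity of a thing is just whichever interconnected pattern its observed attributes best activate.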

To oneself one is here and now, a 0,0 reference point. To someone else one is not the 0,0 point – one is there — not here, and to that person it is they who are the 0,0 point.

This 0,0 or reference point is crucially important. One could upload a huge mass of data, but if there was no 0,0 point that is all it would be – a huge mass of data.

The way forward towards this evolutionary goal is not to concentrate on being able to upload more and more data, faster and faster – but instead to concentrate on being able to identify the 0,0 point, and to be able to translate from neural code to the language of perception.
