As far as we know, the human being is the most “intelligent” organism. The human brain comprises functions so complex and sophisticated that, when the psychologist Howard Gardner tried to map its structure in his Frames of Mind, he came up with seven sub-structures: Linguistic Intelligence, Logical-Mathematical Intelligence, Musical Intelligence, Bodily-Kinesthetic Intelligence, Spatial Intelligence, Interpersonal Intelligence and Intrapersonal Intelligence.

Surely thanks to a very refined biological evolution of its computational abilities, the human race has been able to reach an equally high level of social and technological evolution. What we are about to enter now can be called the automaton era. Nevertheless, some enlightened artists among us (such as M. Shelley, G. Orwell, P. K. Dick) glimpsed where we were heading and tried to imagine what it would be like, in order to prevent the ethical and moral decadence of our species and our societies.

It was Isaac Asimov in particular who formulated the famous Three Laws of Robotics: 1) A robot may not injure a human being nor, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.


While the artists anticipated the philosophical principles of the future relation between human beings and machines, some scientists, on the other hand, started to define actual principles. Among them, it is worth remembering A. Turing, who defined, through the famous Turing Test, an operational meaning for Artificial Intelligence: a machine can be called “intelligent” if a human judge, engaged in a conversation with both a machine and another human, is not able to tell the machine’s answers from the human’s.

Those two conceptual paradigms were crucial to everything that followed and guided the development of AI and robotics. Yet, as has often happened in the history of ideas, what has been a point of reference and inspiration for decades can suddenly turn into a hindrance to further development. Probably both Asimov’s Laws and the Turing Test (as well as other authors’ rules) are still too influential, even though they are by now largely obsolete.

Since then things have changed: both the goals and the very definition of AI have changed. As a matter of fact, the Turing Test has been completely superseded, and AI is moving fast towards the idea of Collective Intelligence – take, for example, Swarms and Boids – while the robot’s shape itself is turning into a multidimensional mechanical structure able to adapt – see the Atrons – thus losing its canonical anthropomorphic aspect and its old static, mono-corporal structure. Furthermore, AI has started widening its scope and is aiming at interfacing with biology, leading us to consider the actual beginning of a Multiple, Manifold and Polymorphic Intelligence, which implies a deep interaction at different levels. That is what happens in virtual worlds (such as Second Life), as well as in real worlds (such as MipTiles) and mixed realities (Stelarc, Talkers, Ambient Addiction).
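The Boids model mentioned above illustrates how collective behaviour can emerge without any central controller: each agent follows only three local rules (separation, alignment, cohesion) with respect to its neighbours. A minimal sketch in Python, where the class names and parameter values are illustrative choices, not part of the original model:

```python
import math
import random

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def step(boids, radius=10.0, sep_w=0.05, ali_w=0.05, coh_w=0.005):
    """One synchronous update of the three local rules."""
    updates = []
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        dvx = dvy = 0.0
        if neighbors:
            n = len(neighbors)
            # Cohesion: steer towards the local centre of mass.
            cx = sum(o.x for o in neighbors) / n
            cy = sum(o.y for o in neighbors) / n
            dvx += (cx - b.x) * coh_w
            dvy += (cy - b.y) * coh_w
            # Alignment: match the neighbours' average velocity.
            avx = sum(o.vx for o in neighbors) / n
            avy = sum(o.vy for o in neighbors) / n
            dvx += (avx - b.vx) * ali_w
            dvy += (avy - b.vy) * ali_w
            # Separation: move away from boids that are too close.
            for o in neighbors:
                d = math.hypot(o.x - b.x, o.y - b.y)
                if 0 < d < radius / 3:
                    dvx += (b.x - o.x) / d * sep_w
                    dvy += (b.y - o.y) / d * sep_w
        updates.append((b.vx + dvx, b.vy + dvy))
    for b, (nvx, nvy) in zip(boids, updates):
        b.vx, b.vy = nvx, nvy
        b.x += b.vx
        b.y += b.vy

random.seed(0)
flock = [Boid(random.uniform(0, 20), random.uniform(0, 20),
              random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(50):
    step(flock)
```

No boid sees the whole flock, yet ordered group motion emerges from the purely local interactions — exactly the point being made about Collective Intelligence: the interesting behaviour belongs to the system, not to any single agent.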


In other words, it is no longer possible to look for a single linear artificial process within AI; on the contrary, the new conception implies a multidimensional, non-linear process which cannot be easily manipulated and cannot be totally controlled. Things already get complicated if we consider that, instead of the old conception of interactivity (activate/deactivate/ignore), we now begin a real-time, multisensorial, dynamic interaction (that is to say, an interrelation) with a single “species” of artifact (or AI algorithm). But if we consider the possibility of interacting with a multitude of artifacts (or AI algorithms) at the same time, then things get truly “elusive”.

The obvious conclusion is that Asimov’s Laws become inconsistent with such assumptions. On the one hand, machines can no longer be controlled, since the problems we ask them to solve are non-linear and mathematically incomplete; on the other hand, being widely interconnected with one another, they are no longer solely and directly responsible for the general output of the system.


With this new point of view, we need to renew our methodology and abandon the idea of Human-Machine Interaction (or Interaction Design) in order to move towards the idea of Bio-Tech Interrelation, according to which interaction with machines is simply more uncertain and, more importantly, completely different from the one we knew in the past.

As a matter of fact, the interaction itself moves from a one-directional stream of intelligence (man => machine) to a bidirectional one (man <=> machine) as well as a many-directional one (biological intelligences <=> artificial intelligences). Actually, there is a factor that we may call here Imitative Intelligence, not considered by Gardner yet indirectly confirmed by Rizzolatti’s latest experiments and, more generally, by common sense: the intellectual feedback we trigger with artificial intelligence has a real “boomerang” effect, an intellectual “Larsen effect” which has to be taken seriously, since it will surely be crucial for future theories on man-machine relations (whether with AIs, Robots, Cyborgs, or Androids).

Those theories will inevitably lead us to conceive of intelligence no longer as an exclusively biological faculty, but rather as a domain crossbred with machines: a Polymorphic Intelligence.