In a context where institutions and researchers must fight hard for scarce research funds, the European Union's recent decision to include the Human Brain Project among its "flagship projects" for the next decade, the so-called FET programme[i], has caused quite a stir in the scientific community. The size of the allocation added to the surprise: one billion euros over ten years to create a complete simulation of the human brain on a network of supercomputers.

Many have seen all of this as an unrealistic target, or even an ill-posed problem[ii], but this and other similar projects do not come out of nowhere[iii]: they have about fifty years of history behind them, gathered under the discipline known as Artificial Intelligence (A.I.).

The beginning of A.I. is almost unanimously set in 1956, the year in which the American mathematician John McCarthy organised a workshop at Dartmouth College in Hanover, New Hampshire (USA). The meeting was attended by scientists such as Claude Shannon and Marvin Minsky, both from the world of engineering and the logical-mathematical sciences, but also by Herbert Simon, an economist and future Nobel laureate, and the computer scientist Allen Newell.

The agenda consisted of one deceptively "simple" topic: to proceed from the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

The years from 1956 to the mid-1960s (conventionally up to 1966, when the federal government of the United States cut funding for A.I., judging the results produced until then mostly disappointing[iv]) are identified as the early stage of Artificial Intelligence.

Attention in this period focused on software implementations of parsers and of finite state automata for handling strings (i.e., sequences of symbols). The main results concerned the automatic solution of mathematical problems: above all, the program Logic Theorist by Herbert Simon and Allen Newell, which was able to prove most of the theorems in Russell and Whitehead's "Principia Mathematica".
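To give the flavour of that string-handling machinery: a deterministic finite state automaton is essentially a transition table plus a loop over the input symbols. A minimal sketch in Python (the states, alphabet and accepting condition are invented for the example):

```python
# A tiny deterministic finite state automaton (DFA) that accepts
# binary strings containing an even number of 1s. States, alphabet
# and accepting set are invented for illustration.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(string, start="even", accepting=("even",)):
    """Run the DFA over the input, one symbol at a time."""
    state = start
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in accepting

print(accepts("1011"))  # False: three 1s
print(accepts("1001"))  # True: two 1s
```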

Natural language processing applications did not hit equally important targets. Nevertheless, remarkable results were achieved with ELIZA, developed by the computer scientist Joseph Weizenbaum, and with SHRDLU, by Terry Winograd.

From the ashes of the first approach a second wave was born: the A.I. of Expert Systems[v]. Like its predecessor, this was still about software, but the basic concept was genuinely innovative: it abandoned the idea that the programmer had to encode every possible strategy explicitly.

For this purpose it was necessary to emulate what happens with human experts, who possess not only a good number of rules but also the cognitive schemas needed to extrapolate from them. The new software therefore included two key components. The first was a set of structures such as if-then-else propositions or decision trees: the so-called "knowledge base".

The second was an "inference engine", a mechanism able to turn an input into a solution (the output) through processing flows not explicitly encoded in the program. In those years the primary language for A.I. projects was shifting from LISP (invented by McCarthy himself) to PROLOG, which favoured two previously little-explored strategies: recursion and backtracking.
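A minimal sketch of this pairing in Python: an invented knowledge base of if-then rules and a naive forward-chaining inference engine. The medical-sounding facts and rules are made up for the example:

```python
# Knowledge base: each rule is (set of premises, conclusion).
# Rules and facts are invented for illustration.
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "order_blood_test"),
]

def infer(facts):
    """Forward chaining: fire every rule whose premises already hold,
    adding its conclusion, until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Derives suspect_measles, then order_blood_test, neither of which
# was explicitly requested: the chain emerges from the rules.
print(infer({"has_fever", "has_rash", "not_vaccinated"}))
```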

Among the fundamental systems we may cite MYCIN, MOLGEN and TEIRESIAS. Applications ranged from the analysis of noisy systems (diagnosis, data interpretation) and from forecasting, planning and scheduling (e.g., in manufacturing, but not only there) to design (machines that create other machines).

Even though Expert Systems proved effective in several areas, it later became clear that their purely computational (and associative) nature could be an insurmountable limit to further progress. Humans demonstrate intelligence not only by solving mathematical equations and engineering problems or by learning languages, but also by moving, recognising shapes and colours and, above all, enacting behaviours.

In addition, studies during the 1980s and 1990s began to highlight some interesting facts about the physical functioning of human and animal brains. It gradually became evident that this system processes electrical signals, which propagate through a network of about 10¹⁰ neurones.

Neurones are connected to one another (though not each to all the others) in a more or less stable way, through biological transmission lines (axons) and interface points (synapses) that are more or less "open" to the passage of signals, the degree of openness being shaped by individual experience.

It had also been shown that neurones can activate electrically[vi], depending on their input, and transmit their state to the neurones they are linked to. In the attempt to capture all of this in a single model[vii], sensory data could be associated with the input of the overall system, while the electrical signals further downstream were mapped to the commands sent to the actuators (the output).

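In software terms, the model just described reduces each neurone to a weighted sum of its inputs followed by an activation function (a sigmoid is one common choice). A minimal sketch in Python, with invented inputs and synaptic weights:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neurone: a weighted sum of the inputs, then a
    sigmoid 'activation' mimicking the firing of a real cell."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # activation in (0, 1)

# Invented sensory inputs and weights for illustration; in a trained
# network the weights are the 'distributed memory' the text describes,
# adjusted through experience rather than set by hand.
print(neuron([0.9, 0.1, 0.4], [1.5, -2.0, 0.7], bias=-0.5))
```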

The transition from these considerations to a software implementation was therefore immediate, and the resulting programs were called Artificial Neural Networks (ANN). Another idea influenced the A.I. landscape in those years: the theory of Genetic Algorithms[viii]. It built on the shared heritage of the concept of natural selection in biology, developed from the observations of Charles Darwin.

The basic idea was that, starting from an initial population and from a set of rules for obtaining new genes from old ones, it would be possible to trigger the selection mechanisms by which individuals with certain characteristics become prevalent within a species. Was there some intelligence in this selection?

In part, yes: those left after several generations were better suited to living in their environment than their predecessors were. The same reasoning holds if, instead of genes, we focus on variables that influence the observable properties of objects (e.g., shape, colour, size).
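A minimal sketch of the idea in Python, where an invented target bit pattern stands in for the environment and random mutation supplies the new genes:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]   # invented 'environment'

def fitness(genes):
    """How many positions of the genome match the environment."""
    return sum(g == t for g, t in zip(genes, TARGET))

def mutate(genes, rate=0.1):
    """New genes from old ones: flip each bit with small probability."""
    return [1 - g if random.random() < rate else g for g in genes]

# Start from a random population, then repeat: keep the fittest half
# (selection) and refill with mutated copies of survivors (variation).
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))  # typically converges towards TARGET
```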

Evolution, therefore, could be interpreted as the transformation of an initial pattern of the whole population into a final one. The famous "Game of Life" by John Conway provides a powerful graphical representation of exactly this, as the sketch below shows.
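A minimal sketch of one Game of Life generation in Python, using the standard rules; the "glider" below is the classic example of a pattern that transforms into a shifted copy of itself:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded grid:
    a live cell survives with 2 or 3 live neighbours; a dead cell
    becomes alive with exactly 3."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# After four generations the glider reappears as the same shape,
# shifted diagonally by one cell: a pattern transformed into a pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))
```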

Artificial neural networks and genetic algorithms opened the third phase of A.I., whose keyword was "black box system". The implementations of this stage did not merely create a transfer function (from input to output) through a well-defined set of rules, as parsers or expert systems did. They also encoded the corresponding knowledge in a distributed way, that is, across many elementary memory units (for example, the synaptic weights of an ANN), none of which holds any complete piece of information on its own.

The developers, in other words, let the programs build and use their own intelligence (meaning the ability to solve as broad a class of problems as possible) without interference. The fact that such intelligence could be considered an emergent property (something not obtainable as the simple sum of the properties of the elementary constituents) coupled this phase to the theory of complex systems, which was quickly gaining importance in those years.

It also led to the emergence of a school of thought within A.I., so-called connectionism, which for the first time found its most prominent representatives outside computer science and the logical-mathematical sciences (e.g., the psychologists David Rumelhart and James McClelland, and the biologist Gerald Edelman).

The most successful applications of these systems have been in electronics, biology and economics for genetic algorithms, and in audio/video recognition, data mining and forecasting (e.g., meteorology) for ANNs. These results were considered relevant, in some ways extraordinary, yet none of them led the scientific community to agree that the first human-made intelligence had been born.

We find ourselves, therefore, in the fourth phase of A.I., trying to move beyond the earlier failures, at least as they appear through the so-called maximalist lens. But why should a general A.I., one that emulates human intelligence, be impossible?


Some scholars, such as John Lucas and Roger Penrose, have located the central issue in the limitations of every formal system that follow from Gödel's incompleteness and undecidability theorems. According to them, the critical success factor of the human mind would lie in reasoning that is neither symbolic nor algorithmic.

Other researchers have instead identified the problem in the absence of any exact definition of intelligence. What they have put on trial is not A.I. itself but the so-called Turing Test, the method for establishing, at least at a conceptual level, whether a piece of software is intelligent or not.

The last widely held view rests on the apparently one-way interaction in A.I. systems (software to hardware), which contrasts with the observed bi-directionality of the mind-body relationship, where a decisive role is played not only by physical stimuli (the environment) but also by cultural ones (society)[ix].

Given these doubts, and given that the best results have come only from vertical applications, researchers' interest is now mostly focused on integrating different intelligent programs, each built for extremely limited purposes (weak A.I.). Almost nobody, at present, is trying to build machines or programs that can pass the Turing test or otherwise demonstrate command of a broad set of behaviours.

So we might ask ourselves: where did strong A.I. go? In some ways it survived, although it has changed a great deal from its original form[x]. First of all, it changed its name: probably to distance themselves from the earlier failures, some supporters of strong A.I. have redefined themselves under the new acronym AGI (Artificial General Intelligence)[xi] or under the label WBE (Whole Brain Emulation)[xii].


Today it is difficult to pin down its distinctive features. Certainly there is a genuine will to break decisively with the past, devising deeper and more complete measures of intelligence than the Turing test[xiii]. Another primary goal is to move beyond traditional approaches, considered too centred on the information sciences: the core of AGI is, in fact, the integration of IT, biology, cognitive science and nanotechnology.

Of course, in contrast with this impressive deployment of scientific knowledge, some of the statements made by those who still believe in strong A.I. today may seem quite absurd or unscientific, if not vaguely disturbing.

Take the idea of the singularity, for example: nearly all current supporters of AGI and WBE agree with Ray Kurzweil[xiv] that the first significant results in the emulation of brain function will emerge between 2015 and 2045. This, however, would be only the first step of an ever-accelerating progress in which humans, thanks to technology, would keep increasing their potential until they become something that some have called "Man 2.0".

Questionable ideas? Oversimplification and overconfidence in technology? Disregard for the ethical and safety implications for the human race? Maybe. One thing is certain: to find out whether we are getting closer to what Kurzweil expects, we will not have to wait long.


[i] “Human Brain Project”, 2013, URL: http://www.humanbrainproject.eu/

[ii] Vaughan Bell, The human brain is not as simple as we think, The Observer, 2013, URL: http://www.rawstory.com/rs/2013/03/03/the-human-brain-is-not-as-simple-as-we-think/

[iv] Pietro Greco, Einstein e il ciabattino, Dizionario asimmetrico dei concetti scientifici di interesse filosofico, Editori Riuniti, 2002

[v] Paola Mello, Lucidi sui Sistemi Esperti, Laboratorio di Informatica Avanzata, Università di Bologna, URL: http://www.lia.deis.unibo.it/Courses/AI/fundamentalsAI2011-12/lucidi/SistemiEsperti2011.pdf

[vi] Paul Churchland, Il motore della ragione, la sede dell'anima, Il Saggiatore, Milano, 1998

[vii] Giovanni Martinelli, Reti neurali e neurofuzzy, Euroma La Goliardica, 2000

[viii] John H. Holland, Genetic Algorithms: Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand, URL: http://www2.econ.iastate.edu/tesfatsi/holland.GAIntro.htm

[ix] Gerald Edelman, La materia della mente, Adelphi, 1999

[x] Stuart Russell, Peter Norvig, Intelligenza Artificiale: Un Approccio Moderno, Pearson, 2010

[xi] Pei Wang and Ben Goertzel, Introduction: Aspects of Artificial General Intelligence, 2006, URL: https://a316de03-a-62cb3a1a-s-sites.googlegroups.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf

[xii] Anders Sandberg, Nick Bostrom, Whole Brain Emulation: A Roadmap, Technical Report #2008-3, Future of Humanity Institute, Oxford University, 2008, URL: www.fhi.ox.ac.uk/reports/2008-3.pdf

[xiii] Itamar Arel and Scott Livingston, Beyond the Turing Test, IEEE Computer Society, 2009, URL: http://web.eecs.utk.edu/~itamar/Papers/IEEE_Comp_Turing.pdf

[xiv] Ray Kurzweil, La singolarità è vicina, Apogeo, 2008