One high-level ability of the human brain is to understand what it has learned. This seems to be its crucial advantage over the brain activity of other primates. At present we are technologically almost ready to artificially reproduce human brain tissue, but we still do not fully understand the information processing and the related biological mechanisms underlying this ability. Thus an electronic clone of the human brain is still far from realizable. At the same time, some twenty years after the revival of the connectionist paradigm, we are not yet satisfied with the typical subsymbolic attitude of devices such as neural networks: we can make them learn to solve even difficult problems, but without a clear explanation of why a solution works. Indeed, to use these devices widely in a reliable and non-elementary way, we need formal and understandable expressions of the learnt functions. These must be susceptible of being tested, manipulated and composed with other similar expressions to build more structured functions as solutions to complex problems via the usual deductive methods of Artificial Intelligence. Much effort has been directed along these lines in recent years, constructing hybrid artificial systems in which the subsymbolic processing of neural networks merges in various ways with symbolic algorithms. In parallel, neurobiology research keeps supplying ever more detailed explanations of the low-level phenomena responsible for mental processes.
I: The Theoretical Bases Of Learning.
1. The Statistical Bases Of Learning; B. Apolloni, S. Bassis, S. Gaito, D. Malchiodi.
2. PAC Meditation On Boolean Formulas; B. Apolloni, S. Baraghini, G. Palmas.
3. Learning Regression Functions; B. Apolloni, S. Gaito, D. Iannizzi, D. Malchiodi.
4. Cooperative Games In A Stochastic Environment; B. Apolloni, S. Bassis, S. Gaito, D. Malchiodi.
5. If-Then-Else And Rule Extraction From Two Sets Of Rules; D. Mundici.
6. Extracting Interpretable Fuzzy Knowledge From Data; C. Mencar.
7. Fuzzy Methods For Simplifying A Boolean Formula Inferred From Examples; B. Apolloni, D. Malchiodi, C. Orovas, A.M. Zanaboni.
II: Physical Aspects Of Learning.
8. On Mapping And Maps In The Central Nervous System; G.E.M. Biella.
9. Molecular Basis Of Learning And Memory: Modelling Based On Receptor Mosaics; L.F. Agnati, L.M. Santarossa, F. Benfenati, M. Ferri, A. Morpurgo, B. Apolloni, K. Fuxe.
10. Physiological And Logical Brain Functionalities: A Hypothesis For A Self-Referential Brain Activity; B. Apolloni, A. Morpurgo, L.F. Agnati.
11. Modeling Of Spontaneous Bursting Activity Observed In In-Vitro Neural Networks; M. Marinaro, S. Scarpetta.
12. The Importance Of Data For Training Intelligent Devices; A. Esposito.
13. Learning And Checking Confidence Regions For The Hazard Function Of Biomedical Data; B. Apolloni, S. Gaito, D. Malchiodi.
III: Systems That Bridge The Gap.
14. Integrating Symbol-Oriented And Sub-Symbolic Reasoning Methods Into Hybrid Systems; F.J. Kurfess.
15. From The Unconscious To The Conscious; Ron Sun.
16. On Neural Networks, Connectionism And Brain-Like Learning; A. Roy.
17. Adaptive Computation In Data Structures And Webs; M. Gori.
18. IUANT: An Updating Method For Supervised Neural Structures; S. Gentili.
Index.