Symbol grounding
The symbol grounding problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness: how is it that it feels like something (to the symbol system) to mean something?
A symbol is an arbitrary object, an element of a code or formal notational system. It is interpretable as referring to something, but its shape is arbitrary in relation to its meaning: it neither resembles nor is causally connected to its referent (see Saussure's L'arbitraire du signe). Its meaning is agreed upon and shared by convention.
An object on its own is not a symbol; symbols are elements of symbol systems. The meanings of the symbols in a symbol system are systematically interrelated and systematically interpretable. Symbols are combined and manipulated on the basis of formal rules that operate on their (arbitrary) shapes, not their meanings; i.e., the rules are syntactic, not semantic. Yet the syntactically well-formed combinations of symbols are semantically interpretable. (Think of words, combined and recombined to form sentences that all make coherent sense, locally and globally.)
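This shape-based character of symbol manipulation can be made concrete with a small sketch. The rule, string encoding, and example premises below are invented for illustration, not drawn from any cited formalism; the point is only that the derivation inspects the shapes of strings, never their interpretations.

```python
# Illustrative sketch of a formal symbol system: the rule below
# manipulates symbol strings purely by their shapes. Nothing in the
# code consults what "P" or "Q" stand for; any interpretation
# (e.g., P = "it rains", Q = "the street is wet") is assigned from
# outside the system.

def modus_ponens(premises):
    """Derive new strings from pairs shaped like 'X' and 'X -> Y'."""
    derived = set()
    for p in premises:
        for q in premises:
            # Purely syntactic test: does q look like "<p> -> <something>"?
            prefix = p + " -> "
            if q.startswith(prefix):
                derived.add(q[len(prefix):])
    return derived

premises = {"P", "P -> Q", "Q -> R"}
print(modus_ponens(premises))  # {'Q'}: derived without consulting any meanings
```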
There is no symbol grounding problem for symbols in external symbol systems, such as those in a mathematical formula or the words in a spoken or written sentence. The problem of symbol grounding arises only with internal symbols, symbols in the head – the symbols in what some have called the language of thought. External symbols get their meaning from the thoughts going on in the minds of their users and interpreters. But the internal symbols inside those users and interpreters need to be meaningful on their own, autonomously. Their meaning cannot come from a definition alone, because a definition is just another string of symbols: it is meaningful only if its component symbols are already meaningful. What, then, can give those component symbols their meaning?
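The difficulty can be made vivid with a toy sketch. The dictionary entries below are invented for illustration; the point is that chasing definitions inside a symbol system only ever yields more symbols.

```python
# Toy illustration (invented entries) of the definitional regress:
# looking up meanings inside the system never bottoms out in
# anything non-symbolic.

toy_dictionary = {
    "zebra": ["horse", "with", "stripes"],
    "horse": ["large", "hoofed", "animal"],
    "stripes": ["long", "narrow", "bands"],
    # ...every defining word would itself need an entry, and so on
}

def chase(word, depth=0, max_depth=3):
    """Expand a word into its definition, then expand those words in turn."""
    definition = toy_dictionary.get(word)
    if definition is None or depth == max_depth:
        return word  # still just a symbol, never a meaning
    return [chase(w, depth + 1, max_depth) for w in definition]

print(chase("zebra"))  # nested lists of words all the way down
```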
If it were formal definitions all the way down, the result would be an infinite regress. If the meaning depended on an external interpreter, the system would not be autonomous. (It makes no sense to say that the meanings of the symbols in my head depend on the interpretation of someone outside my head.) Thus the meaning of a symbol inside an autonomous sensorimotor system (a robot) is (1) whatever internal structures and processes give the system the ability to detect, identify, and interact with that symbol's external referent, plus (2) whatever makes it feel like something for the system to mean what it means when it means it. Without the functional capacities in (1), the robot's symbols are ungrounded; but without feeling, they are still meaningless.
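Component (1) can be sketched in code. The feature space, category prototypes, and sensor readings below are invented for illustration; real sensorimotor grounding would also involve action, and nothing in the sketch touches component (2), feeling.

```python
# Illustrative sketch of grounding as in (1): a symbol is connected
# to its referent via internal structures that classify sensor data.
# A nearest-centroid classifier stands in for those structures here.

import math

# Hypothetical category prototypes, as if learned from sensory
# experience: each symbol is tied to a centroid in a toy 2-D
# feature space.
centroids = {
    "apple":  (0.9, 0.2),   # e.g., (redness, elongation)
    "banana": (0.3, 0.9),
}

def ground(sensor_reading):
    """Map raw sensor features to the symbol of the nearest category."""
    return min(
        centroids,
        key=lambda symbol: math.dist(sensor_reading, centroids[symbol]),
    )

print(ground((0.85, 0.25)))  # "apple": the symbol is tokened via sensor contact
```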
Designing a human-scale grounded symbol system that can pass the Turing Test (an agent whose performance capacities are functionally equivalent to, and indistinguishable from, those of humans) is one of the methodological and empirical goals of cognitive science. But would a Turing-scale robot also feel? Turing's test can confirm grounding, but the other-minds problem remains an obstacle to confirming meaning.
References
- Harnad, Stevan (1990). The symbol grounding problem. Physica D, 42, 335–346.
- MacDorman, Karl F. (1999). Grounding symbols through sensorimotor integration. Journal of the Robotics Society of Japan, 17(1), 20–24.
- Harnad, Stevan (2003). Symbol grounding problem. Encyclopedia of Cognitive Science. Macmillan, Nature Publishing Group.
- Taddeo, Mariarosaria & Floridi, Luciano (2005). The symbol grounding problem: A critical review of fifteen years of research. Journal of Experimental and Theoretical Artificial Intelligence, 17(4), 419–445.
- MacDorman, Karl F. (2007). Life after the symbol system metaphor. Interaction Studies, 8(1), 143–158.