Fuzzy Computational Semantics1

Burghard B. Rieger
Department of Computational Linguistics - University of Trier
P.O.Box 3825 - D-5500 TRIER - Germany

Abstract

Unlike the clear-cut symbolic representational formats employed so far in natural language processing by machine, it is argued here that fuzzy distributional representations correspond directly to the way word meanings are constituted and understood by (natural and artificial) information processing systems. Based upon such systems' theoretical performance in general and the pragmatics of communicative interaction by real language users in particular, the notions of situation and language game, as introduced by Barwise/Perry and Wittgenstein respectively, are combined to allow for a numerical reconstruction of processes that simulate the constitution of meaning and the interpretation of signs. This is achieved by modelling the linear or syntagmatic and selective or paradigmatic constraints which natural language structure imposes on the formation of (strings of) linguistic entities. A formalism with related algorithms and test results of their implementation are presented in order to substantiate the claim for a model of a semiotic cognitive information processing system (SCIPS) that operates in a linguistic environment as a meaning acquisition and understanding device.

1  The semiotic background

For researchers in knowledge representation and natural language semantics, graph-theoretical formats have become the standard representational means to deal with meaning, both as relational structure and as denotational reference. Relating arc-and-node structures with sign-and-term labels in formats like trees and nets is but another aspect of the traditional mind-matter duality, according to which a realm of meanings is presupposed very much like the pregiven structures of the real world that can be related to signs. Accepting this duality explains neither where the structures nor where the labels come from, whose relation is understood to represent meanings. Their emergence, therefore, never appeared to be in need of explanatory modelling, because the existence of objects, signs, and meanings was taken for granted and hence seemed to be beyond all scrutiny. Under this presupposition, fundamental semiotic questions of semantics simply did not come up; they have hardly been asked yet, and are still far from being solved.

1.1  

A recent attempt to classify approaches in cognitive science2 discerns three categories of modelling cognition:
  • the cognitivistic approach presupposes the existence of the external world, structured by given objects and properties the internal representations of which are to be used by cognitive systems in order to act and react;
  • the emergent approach is described as based on the model concept of self-organization, with the cognitive system constantly adapting to changing environmental conditions by modifying its internal representation of them. Whereas both these approaches appear to be based on the traditional rationalistic paradigm of mind-matter duality (static the former, dynamic the latter), the third category, or
  • the enactive approach, is characterized as being based upon the notion of structural coupling. It dispenses with the assumed distinction between an external world and an internal representation of it, believed to constitute the individuated self, and instead considers mutual structural coupling the fundamental condition prior to and underlying any discernment between self and world, subject and object, the cognitive system and its environment, etc. Hence, the process of cognition and its procedural results appear to become indistinguishable (enaction), allowing meaning to emerge spontaneously, variable and vague, according to the history of constraints embodied by structured connections between an organism and its environment.
According to these categories of cognitive modelling, fuzzy computational semantics tries to model meaning enactively, reconstructing procedurally both the significance of entities and the meanings of signs as a function of a first- and second-order semiotic embedding relation of situations (or contexts) and of language games (or cotexts).

There is some chance of doing so because in linguistics we do not have to start cognitively ab ovo. Taking human beings as the most efficient natural SCIPS, with high-performance symbol manipulation and understanding capabilities, natural language provides a cognitively interesting meaning representation system whose outstanding structuredness, in the aggregated form of texts in discourse situations, may serve as a guideline rarely observed yet. In doing so, however, it is necessary to pass from traditional approaches in linguistics proper, which introspectively analyse the propositional contents of singular sentences as conceived by idealized speakers, to approaches based upon the empirically well-founded observation and rigorous mathematical description of global regularities in masses of texts produced by real speakers in actual situations of either performed or intended communication.

    1.2  

As long as the concept of meaning was conceived as some independent, pre-existing, and stable entity, very much like that of objects in a presupposed real world, such meanings could be analyzed and represented accordingly, i.e. as entries to a knowledge base built up of structured sets of elements whose semantics were signalled symbolically by linguistic labels attached to them. However, the fundamental question of how a label may be associated with a node in order to let this node be understood to stand for the entity (meaning or object) it is meant to represent in a knowledge base has to be realized, explored, and eventually answered:
  • it has to be realized that there are certain entities in the world which are (or become) signs and have (or acquire) interpretable meaning in the sense of signifying something else they stand for, beyond their own physical existence (whereas other entities do not).
  • it has to be explored how these (semiotic) entities may be constituted and how the meaning relation is established: on the basis of what regularities of observables (uniformities), controlled by what constraints, and under which boundary conditions of pragmatic configurations of communicative interaction (situations).
  • it has to be answered why some entities may signify others by serving as labels for them (or rather by the meanings these labels purport), instead of being signified semiotically by way of positions, load values, and/or states distributed over a system of semiotic/non-semiotic entities. These allow distinctions between different distributional patterns to be made, not, however, for the patterns to be represented by different (symbolic) labels.
In doing so, a semiotic paradigm will have to be followed which may hopefully allow us to avoid (if not to solve) a number of spin-off problems that originate in the traditional distinction and/or methodological separation of the meaning of a language's terms from the way they are employed in discourse. It appears that, for failing to mediate between these two sides of natural language semantics, phenomena like creativity, dynamism, efficiency, vagueness, and variability of meaning (to name only the most salient) have so far fallen in between, stayed (or been kept) out of the focus of interest, or been overlooked altogether. Moreover, the classical approach in formal semantics, which is confined to the sentence boundary of propositional constructions, is badly in want of operational tools to bridge the gap between the formal theory of language description (competence) and the empirical analysis of language usage (performance), a gap increasingly felt to be responsible for the unwarranted abstractions of fundamental properties of natural languages.

In a rather sharp departure from more traditional ways of introspective analysis, our empirical approach in quantitative linguistics (QL) accepts the complex ontological status of natural language in its aggregated form of texts as compiled in large, pragmatically homogeneous linguistic corpora. Accordingly, the textual analyses will be concerned with entities whose first-order situational significance appears to be identical with their being signs, aggregates, and structures thereof, and whose second-order situational significance allows for their semantic interpretability as constituted by their being an instantiation of some language game. Therefore, word meaning may well be reconstructable as a function of the elastic constraints which these two levels of semiotic embedding impose on natural language texts, constituting the structural coupling between the language users and the meanings they understand.

    2  The situational setting

From the semiotic point of view, any identification and interpretation of external structures has to be conceived as some form of information processing which (natural/artificial) systems, due to their own structuredness, are able to perform. These processes, or the structures underlying them, ought to be derivable from, rather than presupposed by, procedural models of meaning. Unlike the so-called knowledge-based approaches to cognitive tasks and natural language understanding employed so far in the information processing systems that artificial intelligence research (AI) or computational linguistics (CL) have advanced, it is argued here that meaning need not be introduced as a presupposition of semantics but may instead be derived as a result of procedural modelling3. The present approach is based upon a phenomenological (re)interpretation of the formal concept of situation and the analytical notion of language game. The combination of both lends itself easily to operational extensions in the empirical analysis and procedural simulation of associative meaning constitution which grasps essential parts of what Peirce named semiosis 4.

    2.1  

According to Situation Semantics 5, any language expression is tied to reality in two ways: by the discourse situation, which allows an expression's meaning to be interpreted, and by the described situation, which allows its interpretation to be evaluated truth-functionally. Within this relational model of semantics, meaning may be considered the derivative of information processing which (natural/artificial) systems, due to their own structuredness, perform by recognizing similarities or invariants between situations that structure their surrounding realities (or fragments thereof).

    By recognizing these invariants and by mapping them as uniformities across situations, cognitive systems properly attuned to them are able to identify and understand those bits of information which appear to be essential to form these systems' particular view of reality: a flow of types of situations related by uniformities like e.g. individuals, relations, and time-space-locations. These uniformities constrain a system's external world to become its reality as its specific fragments of persistent courses of events whose expectability renders them interpretable.

In semiotic sign systems like natural languages, such uniformities also appear to be signalled more basically by word-types whose employment as word-tokens in texts exhibits a special form of structurally conditioned constraints. Not only does their use allow speakers/hearers to convey/understand meanings differently in different discourse situations (efficiency), but at the same time the discourses' total vocabulary and word usages also provide an empirically accessible basis for the analysis of structural (as opposed to referential) aspects of event-types and of how these are related by virtue of word-uniformities across phrases, sentences, and texts uttered. Thus, as a means for the intensional (as opposed to extensional) description of (abstract, real, and actual) situations, the regularities of word usage may serve as an access to, and a representational format for, those elastic constraints which underlie and condition any word's linguistic meaning, the interpretations it allows within possible contexts of use, and the information its actual employment on a particular occasion may convey.

Owing to Barwise's/Perry's new approach, and notwithstanding its traditional (mis)conception as a duality (i.e. the independent sign-meaning view) of an information-processing system on the one hand, confronted on the other hand with a prefixed external reality whose accessible fragments are to be recognized as its environment, this notion of situation proves to be pivotal for an empirical extension of their theory of semantics. Not only can it be employed to devise a procedural model for the situational embeddedness of cognitive systems as their primary means of mutual accessibility6, but it also allows one to capture the semiotic unity as specified by Wittgenstein in his notion of language games 7, or the contextual (i.e. the usage-meaning) view.

    2.2  

Modelling language game performance along traditional lines of cybernetics by way of, say, an information processing subject, a set of objects surrounding it to provide the informatory environment's input, and some positive and/or negative feedback relations8 between them would hardly capture the cognitive dynamism that self-organizing systems of knowledge acquisition and meaning understanding are capable of.

In the sequel we will outline a feasible approach to computing the meaning function's range as a result of exactly those cognitive procedures by way of which structuredness emerges and understanding is produced from observing and analyzing the domain's regular constraints, as imposed on the linear ordering (syntagmatics) and the selective combination (paradigmatics) of natural language items in texts produced in communicative performance. It will be modelled as a multi-level dynamic description which reconstructs the structural connections (couplings) of possible expressions to the semiotic cognitive information processing systems (which may both intend/produce and realize/understand them) with respect to their situational settings, as specified by the expressions' pragmatics.

Based upon the fundamentals of semiotics, the philosophical concept of communicative language games, as specified by the formal notion of situations, not only allows for the formal identification of both the (internal) structure of the cognitive subject and the (external) structure of its environment, but, being tied to the observables of actual language performance, also opens up an empirical approach to procedural word semantics. Whatever can formally be analyzed as uniformities in Barwiseian discourse situations may be specified by word-type regularities as determined by co-occurring word-tokens in pragmatically homogeneous samples of language games. Going back to the fundamentals of structuralist descriptions of regularities of syntagmatic linearity and paradigmatic selectivity of language items, the correlational analysis of discourse will allow for a two-level word meaning and world knowledge representation whose dynamism is a direct function of elastic constraints established and/or modified in communicative interaction by the use of linguistic signs in language performance.

Implemented, such a system will eventually lead to something like machine-simulated cognition, letting information be processed as a means of perceiving a (virtual) reality from its (textual) environment which is accessible through and structured by world-revealing (linguistic) elements of communicative sign usage. For natural language semantics this is tantamount to (re)presenting a term's meaning potential by a fuzzy distributional pattern of the modelled system's state changes rather than by a single symbol whose structural relations are to represent the system's interpretation of its environment. Whereas the latter has to exclude, the former will automatically include, the (linguistically) structured pragmatic components which the system will both embody and employ as its (linguistic) import to identify and to interpret its environmental structures by means of its own structuredness.

    3  The linguistic solution

In linguistic semantics, cognitive psychology, and knowledge representation, most of the necessary data concerning lexical, semantic, and external world information is still provided introspectively. Researchers explore (or make test persons explore) their own linguistic or cognitive capacities and memory structures in order to depict their findings (or to let hypotheses about them be tested) in static representational graphs. By definition, these approaches will map only what these representations are meant to depict, i.e. what is known to the analysts, not, however, what of the world's fragments under investigation might be conveyed in texts unknown to them. Being knowledge-based in the sense that, automatic procedures of knowledge acquisition being unavailable, human knowledge engineers have to fill and modify predefined shell structures, these representations will not only be restricted to predicative and propositional expressions which can be mapped in well established (concept-hierarchical, logically deductive) formats, but they will also lack the flexibility and dynamics of more re-constructive model structures designed for the automatic analysis and representation of meanings from texts. Such devices have meanwhile been recognized to be essential9 for any simulative model capable of setting up and modifying a system's own knowledge structure, however shallow and vague its semantic knowledge and inferencing capacity may appear compared to human understanding.

    3.1  

Unlike introspective data acquisition and in contrast to classical formalisms for knowledge representation, the present approach focusses on the structuredness which the communicative use of language in discourse by speakers/hearers will both constitute and modify, taking it as a paradigm of cognition and a model of the emergence of meaning, or semiosis. Under the notions of lexical relevance and semantic disposition 10, dynamic meaning representations have been operationally defined which may automatically be derived and filled from natural language texts.

Operationalizing the Wittgensteinian notion of language games and drawing on his assumption that a great number of texts analysed for the terms' usage regularities will reveal essential parts of the concepts and hence the meanings conveyed11, such a description turns out to be identical with an analytical procedure. Starting from the universal constraints known to be valid for all natural languages, the present approach captures and operationalizes the restrictions which hold both for the syntagmatic and the paradigmatic relations of the linguistic units observed.

    These constraints may be formalized as sets of fuzzy subsets of the vocabulary employed. Represented as a set-theoretical system of meaning points, the regularities detected will depict the distributional character of word meanings in an elastic mode of mutual constraints. Being composed of a number of operationally defined elements whose varying contributions can be identified with values of the respective membership functions, these can be derived from and specified by the differing usage regularities that the corresponding lexical items have produced in discourse. This translates the Wittgensteinian notion of meaning into an operation that may be applied empirically to any corpus of pragmatically homogeneous texts constituting a language game.
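To make the set-theoretical formulation concrete, here is a minimal Python sketch that represents a word meaning as a fuzzy subset of the vocabulary, i.e. as a mapping from word-types to membership values; the item names and membership values are purely illustrative assumptions, not taken from the analyses reported below.

    from typing import Dict

    # A fuzzy subset of the vocabulary: word-type -> membership value in [0, 1].
    FuzzySubset = Dict[str, float]

    def support(mu: FuzzySubset, threshold: float = 0.0) -> set:
        """Return the word-types whose membership in mu exceeds the threshold."""
        return {w for w, m in mu.items() if m > threshold}

    # Hypothetical membership values for the item 'computer' (illustrative only).
    computer: FuzzySubset = {"system": 0.83, "program": 0.71, "industry": 0.42, "poetry": 0.05}
    print(sorted(support(computer, 0.4)))      # -> ['industry', 'program', 'system']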

Based upon the distinction between the syntagmatic and the paradigmatic relatedness of language items in discourse, the core of the representational formalism can be characterized as a two-level process of abstraction (called α- and δ-abstraction) providing the set of usage regularities and the set of meaning points of those word-types which are instantiated by word-tokens as employed in natural language texts. The resultant structure of these constraints renders the set of potential interpretations which will be modelled in the sequel as the semantic hyperspace structure (SHS).

    3.2  

The statistics used so far for the analysis of syntagmatic and paradigmatic relations on the level of words in discourse are basically descriptive. Developed from and centred around a correlational measure that specifies the intensities of co-occurring lexical items, these analysing algorithms allow for the systematic modelling of a fragment of the lexical structure constituted by the vocabulary employed in the texts, as part of the concomitantly conveyed world knowledge.

A modified correlation coefficient has been used as a first mapping function α. It allows the relational interdependence of any two lexical items to be computed from their textual frequencies. For a text corpus

\[ K=\{k_t\};\quad t=1,\ldots,T \]

of pragmatically homogeneous discourse, having an overall length

\[ L=\sum_{t=1}^{T} l_t \]

measured by the number l_t of word-tokens per text, and a vocabulary

\[ V=\{x_n\};\quad n=1,\ldots,N \]

of N word-types of different identity i, j whose frequencies per text are denoted by h_{it}, the modified correlation coefficient α(x_i, x_j) allows the pairwise relatedness of word-types (x_i, x_j) ∈ V × V to be expressed in numerical values ranging from -1 to +1 by calculating co-occurring word-token frequencies in the following way:

\[
\alpha(x_i,x_j)=\frac{\sum_{t=1}^{T}(h_{it}-e_{it})(h_{jt}-e_{jt})}
{\Bigl(\sum_{t=1}^{T}(h_{it}-e_{it})^{2}\,\sum_{t=1}^{T}(h_{jt}-e_{jt})^{2}\Bigr)^{1/2}},
\qquad e_{it}=\frac{H_i}{L}\,l_t,\quad H_i=\sum_{t=1}^{T}h_{it}
\tag{5}
\]

Evidently, pairs of items which frequently either co-occur in, or are both absent from, a number of texts will be positively correlated and hence are called affined; those of which only one (and not the other) frequently occurs in a number of texts will be negatively correlated and hence are called repugnant.
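As a reading aid, the following Python sketch computes the modified correlation coefficient of eq. (5) from a term-by-text frequency matrix; the variable names and the toy matrix are illustrative choices of this presentation, not part of the original implementation.

    import numpy as np

    def alpha(freq: np.ndarray) -> np.ndarray:
        """Modified correlation coefficient of eq. (5).
        freq[i, t] = number of tokens of word-type x_i in text t.
        Returns an N x N matrix of alpha-values in [-1, +1]."""
        lengths = freq.sum(axis=0)                 # l_t: word-tokens per text
        L = lengths.sum()                          # L: overall corpus length
        H = freq.sum(axis=1)                       # H_i: total frequency of x_i
        expected = np.outer(H / L, lengths)        # e_it = (H_i / L) * l_t
        dev = freq - expected                      # h_it - e_it
        sq = (dev ** 2).sum(axis=1)                # sum_t (h_it - e_it)^2
        return (dev @ dev.T) / np.sqrt(np.outer(sq, sq))

    # Toy term-by-text frequency matrix (3 word-types, 4 texts), illustrative only.
    freq = np.array([[2, 0, 3, 1],
                     [1, 0, 2, 1],
                     [0, 4, 0, 0]], dtype=float)
    print(np.round(alpha(freq), 2))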

As a fuzzy binary relation, α̃ ⊂ V × V can be conditioned on x_n ∈ V, which yields a crisp mapping

\[
\tilde\alpha\,|\,x_i : V \rightarrow C
\tag{6}
\]

where the tuples ⟨(x_{n,1}, α̃(n,1)), …, (x_{n,N}, α̃(n,N))⟩ represent the numerically specified syntagmatic usage regularities that have been observed for each word-type x_i against all other x_n ∈ V and can therefore be abstracted over one of the components in each ordered pair, thus, by α-abstraction, defining an element y_i ∈ C:

\[
y_i := \langle \tilde\alpha(i,1), \ldots, \tilde\alpha(i,N) \rangle ; \quad i=1,\ldots,N
\tag{7}
\]

Hence, the regularities of usage of any lexical item are determined by the tuple of its affinity/repugnancy values towards every other item of the vocabulary, which, interpreted as coordinates, can be represented by points in a vector space C spanned by N axes, each of which corresponds to an entry in the vocabulary.
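A brief sketch of the α-abstraction as described above, under the assumption that the α-values are available as a matrix: each word-type's row of α-values is read off as the coordinates of its point y_i in the corpus space C; the vocabulary and the values are hypothetical.

    import numpy as np

    # Hypothetical alpha-matrix for a three-word vocabulary (values illustrative only).
    vocabulary = ["computer", "system", "poetry"]
    A = np.array([[ 1.00,  0.85, -0.60],
                  [ 0.85,  1.00, -0.55],
                  [-0.60, -0.55,  1.00]])

    # alpha-abstraction: the usage regularities of x_i are the tuple of its
    # alpha-values against every vocabulary entry, i.e. row i of A, read as the
    # coordinates of a point y_i in the corpus space C.
    corpus_space = {x: A[i] for i, x in enumerate(vocabulary)}
    print(corpus_space["computer"])                # -> [ 1.    0.85 -0.6 ]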

    3.3  

Considering C as a representational structure of abstract entities constituted by the syntagmatic regularities of word-token occurrences in pragmatically homogeneous discourse, the similarities and/or dissimilarities between these abstract entities will capture the paradigmatic regularities of the corresponding word-types. These can be modelled by the δ-abstraction, which is based on a numerically specified evaluation of the differences between any two such points y_i, y_j ∈ C. They will be the more adjacent to each other, the less the usages (tokens) of their corresponding lexical items x_i, x_j ∈ V (types) differ. These differences may be calculated by a distance measure δ of, say, Euclidean metric:

\[
\delta(y_i,y_j)=\Bigl(\sum_{n=1}^{N}\bigl(\tilde\alpha(i,n)-\tilde\alpha(j,n)\bigr)^{2}\Bigr)^{1/2}
\tag{8}
\]

Thus, δ may serve as a second mapping function to represent any item's differences of usage regularities measured against those of all other items. As a fuzzy binary relation, δ̃ ⊂ C × C can also be conditioned on y_n ∈ C, which again yields a crisp mapping

\[
\tilde\delta\,|\,y_j : C \rightarrow S
\tag{9}
\]

where the tuples ⟨(y_{n,1}, δ̃(n,1)), …, (y_{n,N}, δ̃(n,N))⟩ represent the numerically specified paradigmatic structure that has been derived for each abstract syntagmatic usage regularity y_j against all other y_n ∈ C. The distance values can therefore be abstracted again as in (7), this time, however, over the other of the components in each ordered pair, thus defining an element z_j ∈ S, called a meaning point, by

\[
z_j := \langle \tilde\delta(j,1), \ldots, \tilde\delta(j,N) \rangle ; \quad j=1,\ldots,N
\tag{10}
\]

By identifying z_n ∈ S with the numerically specified elements of potential paradigms, the set of possible combinations S × S may be structurally constrained and evaluated without (direct or indirect) recourse to any pre-existent external world. Introducing a Euclidean metric

\[
\zeta(z_i,z_j)=\Bigl(\sum_{n=1}^{N}\bigl(\tilde\delta(i,n)-\tilde\delta(j,n)\bigr)^{2}\Bigr)^{1/2}
\tag{11}
\]

the hyperstructure ⟨S, ζ⟩, or semantic hyperspace (SHS), is constituted, providing the meaning points according to which the stereotypes of associated lexical items may be generated as part of the semantic paradigms concerned.
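The second abstraction step can be sketched in the same vein, assuming the Euclidean metric named in the text on both levels: the δ-values of each y_i against all others form the meaning point z_i, and applying the same construction once more yields the metric ζ of the semantic hyperspace; the matrix below is again illustrative only.

    import numpy as np

    def pairwise_euclid(M: np.ndarray) -> np.ndarray:
        """Euclidean distances between all pairs of row vectors of M."""
        diff = M[:, None, :] - M[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    # Hypothetical alpha-matrix; its rows are the points y_i of the corpus space C.
    A = np.array([[ 1.00,  0.85, -0.60],
                  [ 0.85,  1.00, -0.55],
                  [-0.60, -0.55,  1.00]])

    D = pairwise_euclid(A)       # delta(y_i, y_j): differences of usage regularities
    Z = D                        # delta-abstraction: z_i is the tuple of delta-values, i.e. row i of D
    zeta = pairwise_euclid(Z)    # metric of the semantic hyperspace <S, zeta>
    print(np.round(zeta, 2))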

    Table 1

As a result of the two consecutive mappings (Tab. 1), any meaning point's position in SHS is determined by all the differences (δ- or distance-values) of all the regularities of usage (α- or correlation-values) which each lexical item shows against all others in the discourse analysed. Thus, it is the basic analyzing algorithm which, by processing natural language texts, provides the processing system with the ability to recognize and represent, and to employ and modify, the structural information available to the system's performance, constituting its understanding.

    Figure 1

This answers the question of where the labels in our representation come from: put into a discourse environment, the system's text analyzing algorithm provides the means by which the topological position of any meaning point z ∈ ⟨S, ζ⟩ is identified and labeled by a vocabulary item x ∈ V according to the two consecutive mappings, which can formally be stated as a composition of the two restricted relations δ̃|y and α̃|x (Fig. 1). This is achieved without recourse to any investigator's or his test persons' word or world knowledge (semantic competence), but solely on the basis of the usage regularities of lexical items in discourse as produced by real speakers/hearers in actual or intended acts of communication (communicative performance).

    3.4  

So far the system of word meanings (lexical knowledge) has been represented as a relational data structure whose linguistically labeled elements (meaning points) and their mutual distances (meaning differences) form a system of potential stereotypes. Although these representations by labeled points appear to be symbolic, it is worth mentioning that in fact each such point is determined by a fuzzy distribution of word-type/value pairs which allows it to be interpreted as a point in SHS whose very position is analogous to its symbolic meaning. Accordingly, based upon the SHS structure, the meaning of a lexical item may be described either as a fuzzy subset of the vocabulary, or as a meaning point vector, or as a meaning point's topological environment. The latter is determined by those points which are found to be most adjacent and which hence delimit the central point's meaning indirectly as its stereotype (Tab. 2).

    Table 2
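To illustrate how a stereotype can be read off the SHS data, the following sketch returns the topological environment of a meaning point, i.e. its nearest neighbours under the hyperspace metric; the distance matrix and labels are assumed for illustration and are not the values of Tab. 2.

    import numpy as np

    def stereotype(zeta: np.ndarray, labels: list, item: str, k: int = 3):
        """Return the k meaning points most adjacent to `item` under the SHS metric."""
        i = labels.index(item)
        order = np.argsort(zeta[i])                # ascending distance; item itself comes first
        return [(labels[j], float(zeta[i, j])) for j in order[1:k + 1]]

    # Hypothetical SHS distance matrix and labels (not the values of Tab. 2).
    labels = ["computer", "system", "program", "industry", "poetry"]
    zeta = np.array([[0.0, 1.2, 1.5, 2.8, 6.1],
                     [1.2, 0.0, 1.1, 2.5, 5.9],
                     [1.5, 1.1, 0.0, 2.9, 6.3],
                     [2.8, 2.5, 2.9, 0.0, 5.0],
                     [6.1, 5.9, 6.3, 5.0, 0.0]])
    print(stereotype(zeta, labels, "computer"))    # -> [('system', 1.2), ('program', 1.5), ('industry', 2.8)]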

    Following the semiotic notion of understanding and meaning constitution, the SHS-structure may be considered the core of a two-level conceptual knowledge representation system12. Essentially, it separates the format of a basic (stereotype) word meaning representation from its latent (dependency) relational concept organization. Whereas the former is a rather static, topologically structured (associative) memory, the latter can be characterized as a collection of dynamic and flexible structuring procedures to re-organize the memory data by semiotic principles under various aspects13.

SHS being a distance-relational data structure, well-known algorithmic search strategies cannot immediately be made to work on it. They are mostly based upon some non-symmetric relational structure, e.g. the directed graphs of traditional meaning and knowledge representation formats. To convert the SHS format into such a node-pointer-type structure, the SHS model has to be considered as conceptual raw data or an associative base structure on which particular procedures may operate in order to reorganize it. This is achieved by a recursively defined procedure that produces tree-structured hierarchies of meaning points under given aspects, according to and in dependence on their meanings' relevancy.

    3.5  

Other than in pre-defined semantic networks and predicative knowledge bases, and unlike conceptual representations that link nodes to one another according to what cognitive scientists believe to know, or supposedly have found out, about the way conceptual information is structured in memory, an algorithm has been devised which operates on the SHS data to induce dispositional dependencies between its elements, i.e. among subsets of meaning points related by their positions. The procedure detects fragments of SHS according to different perspectives, as specified by the meaning point it is started with, and it (re-)organizes the relevant meaning points according to the constraints of semantic similarity encountered during operation. Stop conditions may deliberately be formulated either qualitatively (i.e. naming a target point) or quantitatively (i.e. the number of points, or the range of distance or criteriality, to be processed).

This so-called Δ-operation has been conceived as a modified derivative of a minimal spanning tree algorithm14. The procedure is recursively defined to operate on the semantic hyperspace data z_n ∈ ⟨S, ζ⟩. Given one meaning point's position as a start, the algorithm will work its way through all labeled points, unless stopped under the conditions of a given target node, number of nodes to be processed, or threshold of maximal distance, transforming prevailing similarities of paradigms, as represented by the adjacency of points, into a binary, non-symmetric, and transitive relation of lexical relevance between them. This relation allows for the hierarchical reorganization of meaning points as nodes under a primed head in an n-ary tree called a dispositional dependency structure (DDS)15. It is tantamount to a numerical assessment (criteriality)16 and a hierarchical re-structuring (tree) of elements under a head point's aspect, according to the dependency relation between descendant points along which activation might spread in case of the head point's stimulation.
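The following Python sketch illustrates the kind of Prim-style tree generation described above, under simplifying assumptions: the head is taken as root, each not-yet-included meaning point is attached to the nearest already-included point, and the only stop condition is a maximum number of nodes; it is an illustration of the principle, not the implementation used for Fig. 2.

    import numpy as np

    def dds(zeta: np.ndarray, labels: list, head: str, max_nodes: int = 5):
        """Prim-style sketch of a dispositional dependency tree: starting from the
        head, repeatedly attach the not-yet-included meaning point that is least
        distant from any already-included point, as a child of that nearest
        included point. Returns a dict mapping child -> parent."""
        h = labels.index(head)
        in_tree = {h}
        parent = {head: None}
        while len(in_tree) < min(max_nodes, len(labels)):
            best = None                                   # (distance, new_node, attach_to)
            for i in in_tree:
                for j in range(len(labels)):
                    if j not in in_tree:
                        d = zeta[i, j]
                        if best is None or d < best[0]:
                            best = (d, j, i)
            d, j, i = best
            parent[labels[j]] = labels[i]
            in_tree.add(j)
        return parent

    # Hypothetical SHS distance matrix and labels (illustrative only).
    labels = ["computer", "system", "program", "industry", "poetry"]
    zeta = np.array([[0.0, 1.2, 1.5, 2.8, 6.1],
                     [1.2, 0.0, 1.1, 2.5, 5.9],
                     [1.5, 1.1, 0.0, 2.9, 6.3],
                     [2.8, 2.5, 2.9, 0.0, 5.0],
                     [6.1, 5.9, 6.3, 5.0, 0.0]])
    print(dds(zeta, labels, "computer"))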

To illustrate the feasibility of the Δ-operation's generative procedure, a subset of the relevant linguistic constraints triggered by the lexical item x_i = COMPUTER/computer is given in the format of a weighted semantic DDS (Fig. 2). It has been generated by the procedure described from the SHS data as computed from a corpus of German newspaper texts17.

    Figure 2

Weighted numerically as a function of an element's distance values and of its associated node's level and position in the tree, DDS(z_i) either expresses the head node z_i's meaning dependencies on the daughter nodes z_n or, inversely, expresses their meaning criterialities, which add up to an aspect's interpretation as determined by that head. For a wide range of purposes in processing DDS trees, the differing criterialities of nodes can be used to estimate which paths are more likely to be taken than others under priming activated by certain meaning points.
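The criteriality formula itself is not given in the text (see Rieger 1990 for the published definition); purely to illustrate the idea that node weights decrease with distance and with depth in the tree, the following hypothetical weighting discounts each node's weight by its distance to its parent.

    import numpy as np

    def criterialities(parent: dict, zeta: np.ndarray, labels: list) -> dict:
        """Hypothetical node weighting for a DDS tree (not the published
        criteriality formula): the head gets 1.0, and every descendant inherits
        its parent's weight discounted by 1 / (1 + distance to the parent), so
        weights decrease with both distance and tree depth."""
        weights = {}
        def weight(node: str) -> float:
            if node in weights:
                return weights[node]
            p = parent[node]
            if p is None:                              # head node
                w = 1.0
            else:
                d = zeta[labels.index(p), labels.index(node)]
                w = weight(p) / (1.0 + d)
            weights[node] = w
            return w
        for node in parent:
            weight(node)
        return weights

    # Hypothetical DDS (child -> parent) and distance matrix, illustrative only.
    labels = ["computer", "system", "program", "industry"]
    zeta = np.array([[0.0, 1.2, 1.5, 2.8],
                     [1.2, 0.0, 1.1, 2.5],
                     [1.5, 1.1, 0.0, 2.9],
                     [2.8, 2.5, 2.9, 0.0]])
    parent = {"computer": None, "system": "computer",
              "program": "system", "industry": "system"}
    print(criterialities(parent, zeta, labels))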

    4  The need for SCIPS

From the communicative point of view, natural language texts, whether stored electronically or written conventionally, will in the foreseeable future provide the major source of scientifically, historically, and socially relevant information. Due to the new technologies, the amount of such textual information continues to grow beyond manageable quantities. Rapid access and availability of data, therefore, no longer serve to solve an assumed lack of information that would fill an obvious knowledge gap in a given instance, but instead create, and will do so even more in the future, a new problem which arises from the abundance of information we are confronted with.

Thus, actual and potential (human) problem solvers feel an increasing need to employ computers more effectively than hitherto for informational searches through masses of natural language material. Although the demand is high for intelligent machinery to assist in, or even provide, the speedy and reliable selection of relevant information under individual aspects of interest within specifiable subject domains, such systems are not yet available.

    4.1  

Development of earlier proposals18 has resulted in some promising advances19 towards an artificial semiotic cognitive information processing system (SCIPS) which is capable of learning to understand (identify and interpret) the meanings in natural language texts by generating dynamic conceptual dependencies (for inferencing).

Suppose we have an information processing system with an initial structure of constraints modelled as SHS. Provided the system is exposed to natural language discourse and capable of the basic structural processing postulated, then its (rudimentary) interpretations generated from given texts will not change its subsequent interpretations via altered input cycles; rather, the system will come up with differing interpretations due to its modified old and/or newly established constraints as structural properties of processing. Thus, it is the structure that determines the system's interpretations and that, being subject to changes according to the system's changing environments, constitutes its autopoietic space 20.

Considering a text understanding system as a SCIPS and letting its environment consist of texts, i.e. sequences of words, the system will not only identify these words but, according to its own capacity for α- and δ-abstraction together with its Δ-operation, will at the same time realize the semantic connectedness between their meanings, i.e. the system's state changes or dispositional dependencies that these words invoke. They will, however, not only trigger DDS but will at the same time, because the prototypical or distributed representational SHS format is separated from the dynamic DDS organization of meaning points, modify the underlying data according to the recurrent syntagmatic and paradigmatic structures detected in the textual environment21.
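Such a processing cycle might be sketched as follows, under the assumption that the underlying frequency data are simply accumulated text by text and the α/δ structures recomputed afterwards; this is a schematic illustration of the loop described above, not the TESKI implementation.

    import numpy as np
    from collections import Counter

    class SCIPSSketch:
        """Schematic processing loop: accumulate term-by-text frequencies from
        incoming texts; the resulting matrix is the input to eq. (5) and to the
        subsequent alpha-/delta-abstractions."""

        def __init__(self):
            self.texts = []                          # one Counter of word-tokens per text

        def read(self, text: str):
            self.texts.append(Counter(text.lower().split()))

        def frequency_matrix(self):
            vocab = sorted(set(w for t in self.texts for w in t))
            F = np.array([[t[w] for t in self.texts] for w in vocab], dtype=float)
            return vocab, F                          # F[i, t] = h_it

    scips = SCIPSSketch()
    scips.read("computer systems process texts")
    scips.read("poetry resists systems")
    vocab, F = scips.frequency_matrix()
    print(vocab, F.shape)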

    4.2  

In view of a text skimming system under development22, a basic cognitive algorithm will detect, from the textual environment the system is exposed to, that structural information which the system is able to collect due to the two-level structure of its linguistic information processing and knowledge acquisition mechanisms. These allow for the automatic generation of a pre-predicative and formal representation of fuzzy lexical knowledge which the system will both gather from, and modify according to, the input texts processed. The system's internal knowledge representation will be made accessible by a front-end which allows system users to make the system skim masses of texts for them and display its acquired knowledge graphically in dynamic structures of semantic dispositional dependencies (DDS). These provide variable constraints for the procedural modelling of conceptual connectedness and non-propositional inferencing, both of which are based on the algorithmic induction of an aspect-dependent relevance relation connecting lexical meanings according to differing conceptual perspectives. Thus, the display of DDSs or their resultant graphs may serve the user to acquire an overall idea of what the texts processed are roughly about, or along what general lines of conceptual dependencies they deal with a topic. DDSs may as well be employed in a knowledge processing environment to provide the user with relevant new keywords for an optimized recall-precision ratio in intelligent retrieval tasks, helping for instance to avoid the unnecessary reading of texts irrelevant to the topics the searcher is looking for.

Dispositional dependencies appear to be a prerequisite not only for source-oriented, contents-driven search and retrieval procedures, which may thus be performed effectively on any SHS structure. Due to its procedural definition, the approach also allows the detection of varying dependencies of identically labeled nodes under different aspects, which might change dynamically and could therefore be employed in conceptual, pre-predicative, and semantic inferencing, as opposed to propositional, predicative, and logical deduction.

    Table 3

For this purpose a procedure was designed to operate simultaneously on two (or more) DDS trees by way of (simulated) parallel processing. The algorithm is started from two (or more) meaning points which may be considered to represent conceptual premises. Their DDS can be generated while the actual inferencing procedure begins to work its way (breadth-first, depth-first, or according to highest criteriality) through both (or all) trees, tagging each encountered node. When the first node is met that has previously been tagged by activation from another premise, the search procedure stops and activates the dependency paths leading from this concluding common node back to the premises, listing the intermediate nodes which mediate (as illustrated in Tab. 3) the semantic inference paths as parts of the dispositional dependency structures (DDS) concerned.
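A sketch of this parallel inferencing procedure, under the assumption that each DDS is given as a child-to-parent map and that traversal proceeds breadth-first: nodes are tagged per premise, the search stops at the first node reached from both premises, and the two dependency paths back to the premises are returned; the example trees are hypothetical.

    from collections import deque

    def infer(dds_a: dict, dds_b: dict, premise_a: str, premise_b: str):
        """Breadth-first traversal of two DDS trees (child -> parent maps),
        tagging nodes per premise and stopping at the first node reached from
        both; returns the two dependency paths from that common node back to
        the premises."""
        def children(parent_map):
            kids = {}
            for c, p in parent_map.items():
                if p is not None:
                    kids.setdefault(p, []).append(c)
            return kids

        def path_to_root(parent_map, node):
            path = [node]
            while parent_map[path[-1]] is not None:
                path.append(parent_map[path[-1]])
            return path

        kids = {"a": children(dds_a), "b": children(dds_b)}
        queues = {"a": deque([premise_a]), "b": deque([premise_b])}
        tagged = {"a": set(), "b": set()}
        # Alternate single breadth-first steps on both trees (simulated parallelism).
        while queues["a"] or queues["b"]:
            for side, other in (("a", "b"), ("b", "a")):
                if not queues[side]:
                    continue
                node = queues[side].popleft()
                tagged[side].add(node)
                if node in tagged[other]:              # first common node: conclude here
                    return path_to_root(dds_a, node), path_to_root(dds_b, node)
                queues[side].extend(kids[side].get(node, []))
        return None

    # Hypothetical DDS trees (child -> parent) for two premises.
    dds_a = {"computer": None, "system": "computer", "program": "system"}
    dds_b = {"industry": None, "system": "industry", "market": "industry"}
    print(infer(dds_a, dds_b, "computer", "industry"))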

It is hoped that our system will prove to provide a flexible, source-oriented, contents-driven method for the multi-perspective induction of dynamic conceptual dependencies among stereotypically represented concepts which, being linguistically conveyed by natural language discourse on specified subject domains, may empirically be detected, formally be presented, and continuously be modified in order to promote the learning and understanding of meaning by semiotic cognitive information processing systems in fuzzy computational semantics.

    References

    Barwise, J./ Perry, J.(1983):
    Situations and Attitudes. Cambridge, MA (MIT)
    Lorch, R.F. (1982):
    Priming and Search Processes in Semantic Memory: a test of three models of Spreading Activation. Journal of Verbal Learning and Verbal Behavior 21, pp.468-492
    Maturana, H./ Varela, F. (1980):
    Autopoiesis and Cognition. The Realization of the Living. Dordrecht (Reidel)
    Peirce, C.S. (1906):
    Pragmatics in Retrospect: a last formulation. (CP 5.11 - 5.13), in: The Philosophical Writings of Peirce. Ed. by J. Buchler, New York (Dover), pp. 269-289
    Prim, R.C. (1957):
    Shortest connection networks and some generalizations, Bell Systems Technical Journal 36, pp. 1389-1401
    Rieger, B. (1981):
    Feasible Fuzzy Semantics. In: Eikmeyer, H.J./ Rieser, H. (Eds): Words, Worlds, and Contexts. New Approaches in Word Semantics. Berlin/ New York (de Gruyter), pp. 193-209
    Rieger, B.B. (1985a):
Lexical Relevance and Semantic Disposition. On stereotype word meaning representation in procedural semantics. In: Hoppenbrouwers, G./ Seuren, P./ Weijters, T. (Eds.): Meaning and the Lexicon. Dordrecht (Foris), pp. 387-400
    Rieger, B. (1985b):
    On Generating Semantic Dispositions in a Given Subject Domain. in: Agrawal, J.C./ Zunde, P. (Eds.): Empirical Foundation of Information and Software Science. New York/ London (Plenum Press), pp. 273-291
    Rieger, B.B. (1988a):
    TESKI - A natural language TExt-SKImmer for shallow understanding and conceptual structuring of textually conveyed knowledge. LDV/CL-Report 10/88, Dept. of Computational Linguistics, University of Trier
    Rieger, B. (1988b):
    Definition of Terms, Word Meaning, and Knowledge Structure. On some problems of semantics from a computational view of linguistics. in: Czap, H./ Galinski, C. (Eds.): Terminology and Knowledge Engineering (Supplement). Frankfurt (Indeks Verlag), pp. 25-41
    Rieger, B. (1990):
    Situations and Dispositions. Some formal and empirical tools for semantic analysis. in: Bahner, W./ Schildt, J./ Viehweger, D. (Eds.): Proceedings of the XIV. International Congress of Linguists (CIPL), Vol.II, Berlin (Akademie Verlag) 1990, pp. 1233-1235
    Rieger, B. (1989):
    Unscharfe Semantik. Die empirische Analyse, quantitative Beschreibung, formale Repräsentation und prozedurale Modellierung vager Wortbedeutungen in Texten. Frankfurt/ Bern/ New York (P. Lang)
    Rieger, B. (1991a):
    Reconstructing Meaning from Texts. A Computational View of Natural Language Understanding. in: Raubold, E. (Ed.): Innovative Development and Applications of Microelectronics and Information Technology. (Proceedings of the 2nd German-Chinese Electronics Week (GCEW 91), Berlin/ Offenbach (VDE Verlag) 1991, pp. 193-200
    Rieger, B. (1991b):
    Distributed Semantic Representation of Word Meanings. in: Becker, J.D./ Eisele,I./ Mündemann, F.W.(Eds.): Parallelism, Learning, Evolution. [Lecture Notes in Artificial Intelligence 565 ], Berlin/ Heidelberg/ New York (Springer) 1991, pp. 243-273
    Rieger, B. (1991c):
On Distributed Representation and Word Semantics. [ICSI-Report TR-91-012], International Computer Science Institute, UC Berkeley, CA. 1991
    Rieger, B.B./ Thiopoulos, C. (1989):
    Situations, Topoi, and Dispositions. On the phenomenological modelling of meaning. in: Retti, J./ Leidlmair, K. (Eds.): 5th Austrian Artificial Intelligence Conference. (ÖGAI 89) Innsbruck; (KI-Informatik-Fachberichte Bd.208) Berlin/ Heidelberg/ New York (Springer), pp. 365-375
    Varela, F. (1979):
    Principles of Biological Autonomy. New York (North Holland)
    Varela, F.J./ Thompson, E./ Rosch, E. (1991):
    The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA. (MIT Press)
    Wiener, N. (1956):
    The Human Use of Human Beings. Cybernetics and Society. New York (Doubleday Anchor)
    Winograd, T. (1983):
    Language as a Cognitive Process. Vol.1 Syntax. Reading, MA (Addison-Wesley)
    Winograd,T./ Flores, F. (1986):
    Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ (Ablex)
    Wittgenstein, L. (1958):
    The Blue and Brown Books. Ed. by R. Rhees, Oxford (Blackwell)
    Wittgenstein, L. (1969):
    Über Gewißheit - On Certainty. New York/ San Francisco/ London (Harper & Row), [No.61-65], p.10e
    Zadeh,L.A. (1965):
    Fuzzy sets. Information and Control 8(1965), pp. 338-353
    Zadeh,L.A. (1975):
    Fuzzy logic and approximate reasoning. Synthese 30, pp. 407-428
    Zadeh,L.A. (1981):
Test-score semantics for natural languages and meaning representation via PRUF. in: Rieger, B. (Ed.): Empirical Semantics, Vol. I [Quantitative Linguistics 13], Bochum (Brockmeyer), pp. 281-349

    Footnotes:

    1Published in: Japanese-German Center Berlin (Eds.): Joint Japanese-European Symposium on Fuzzy Systems 1992 [Publications of the JGCB: Series 3 Vol. 8], Berlin (JDZB) 1994, pp. 197-217

    2Varela/Thompson/Rosch (1991)

    3Procedural models denote a class of models whose interpretation is not derived from the semantics of an underlying theory or its representation but consists in the processes that these procedures instantiate when implemented in the computer. The lack of an abstract (theoretical) level of representation for these processes (and their results) other than the notation of their underlying procedures (in some formal language) is one of the reasons why fuzzy set theory - Zadeh (1965), (1975), (1981) - and its derivates may provide a representational format for computational approaches to natural language semantics.

    4By semiosis I mean [...] an action, or influence, which is, or involves, a coöperation of three subjects, such as sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs. (Peirce 1906, p. 282)

    5Barwise/Perry (1983)

    6Rieger/Thiopoulos (1989); Rieger (1991a), (1991b)

    7''There are ways of using signs simpler than those in which we use the signs of our highly complicated everyday language. Language games are the forms of language with which a child begins to make use of words. [ ... ] We are not, however, regarding the language games which we describe as incomplete parts of a language, but as languages complete in themselves, as complete systems of human communication.'' (Wittgenstein 1958, pp. 17 and 81; [my italics ])

    8"[...] feedback is a method of controlling a system by reinserting into it the results of its past performance. If these results are merely used as numerical data for the criticism of the system and its regulations, we have the simple feedback of control engineers. If, however, the information which proceeds backward from the performance is able to change the general method and pattern of perfomance, we have a process which may well be called learning." (Wiener 1958, p. 60)

    9Winograd (1986)

    10Rieger 1985a

    11Wittgenstein (1969)

    12Rieger (1989)

    13This corroborates and extends ideas expressed within the theories of priming and spreading activation (Lorch 1982) allowing for the dynamic generation of paths (along which activation might spread) being a function of priming instead of its presupposed condition.

    14Prim (1957)

    15Rieger (1985b)

    16Rieger (1990)

17Randomly assembled from the first two pages of the daily DIE WELT, 1964 volume, Berlin edition.

    18Rieger (1984)

    19Rieger (1991a), (1991b)

    20"[...] an outopoetic organization constitutes a closed domain of relations specified with respect to the autopoetic organization that these relations constitute, and thus it defines a space in which it can be realized as a concrete system, a space whose dimensions are the relations of production of the components that realize it." (Maturana/Varela 1980, p. 135)

21Autopoietic principles of such a semiotic system were also modelled as mathematical topoi and were successfully implemented within a dynamic interpreter for PROLOG facts by C. Thiopoulos in his PhD thesis (1991), completed at the Department of Computational Linguistics, University of Trier.