Burghard B. Rieger


Definition of Terms, Word Meaning, and Knowledge Structure
On some problems of semantics from a computational view of linguistics1

Es gibt kein größeres Hindernis des Fortgangs in den Wissenschaften, als das Verlangen, den Erfolg davon zu früh verspüren zu wollen ... [There is no greater obstacle to progress in the sciences than the desire to feel its success too early ...] (LICHTENBERG)


Outline

  1. Introduction
  2. Linguistics
  3. Computational View
  4. Problems of Semantics
  5. Knowledge Structure
  6. Word Meaning
  7. Definition of Terms
  8. References

1  Introduction

The title of my paper may suggest that a lot of things ought to be dealt with to cover the topic (or rather, the topics) concerned, and that this can hardly be achieved within the limited space available.

The sub-title therefore serves to constrain the topic, so that general issues of the past, present, and future development of data and information processing will not be dealt with.

I will, however, try to convey some linguistically motivated ideas concerning problems which nearly everyone dealing with natural language and the processing of its meanings has either run into already or will encounter in one way or another, whether or not forced to solve them under the particular aspects of his discipline, approach, and/or needs.

In what follows, therefore, be prepared to get acquainted with just ''another special view through the old holes'' - to quote LICHTENBERG's ironical aphorism [1] on the progress of science - and let us hope that there will be something new for some of you.

2  Linguistics

I can be very brief on linguistics in that - as all disciplines - it may concisely be characterized by stating its object of research, its methods of investigation, and the objectives aimed at. The object of research is Natural Language, the methods employed are those of formal-theoretic and/or quantitative-empirical analysis, description or modelling, and the aims or objectives pursued by such activities are to understand how Natural Languages serve the function they accomplish, i.e. as a means to convey meaning and knowledge in an incredibly flexible and altogether quite reliable way.

3  Computational View

The computational view of linguistics is - as the term suggests - closely connected with possibilities which the advent of computers as not only number-crunching but symbol-processing machines has produced. Beyond rapid data manipulation, however, the symbol-processing automaton has added a highly significant extension to epistemology. Let me at least sketch this extension:

In information system theory the activity of intelligent organisms can be characterized as the endeavor to find in and/or to create from an abundant mass of environmental data some regularities in order to be able to cope with this environment. The system's detection of order and repetition and its representation as some structure allows for a rule-governed behaviour based on systematic expectations. Scientific activities may be characterized analogously with differing levels of analysing and representing order according to the paradigm available at a time.

According to PATRICK SUPPES [2], observables of any kind may - on a first level - be described and analysed as entities or systems thereof, constituting what has come to be known as the structural paradigm. Within the processual paradigm, however, these very structures may be understood as results of processes which can be modelled - on a second level - as different states of a system changing along a time axis. Abstracting from the temporal duration of processes, these can be transformed into what is called a procedure, which can - on a third level - be represented in a systematic way as an algorithm written in a formal language. Given an adequate (hardware and software) environment, such procedures can be made to operate on suitable data to become operational processes again, which will produce structures of observable entities of nearly any kind. This is - in short - what constitutes the procedural paradigm as an epistemological extension to structuralism.

Applied to linguistics (or at least parts of it), the procedural approach has already shed some new light (not only through the old holes) on problems whose solution now appears to be at hand, particularly so in ''computational linguistics''. Within the study of syntax the new paradigm has conquered the field, lacking only the automatic detection of the data structures necessary to serve as a syntactic knowledge base for a language. The study of semantics is (or at least ought to be) concerned mainly with the build-up and establishment of data structures of a particular kind which can serve as representational formats for world knowledge and word meanings. Although the dynamism and flexibility of knowledge and meaning have proved particularly hard to capture even within the new paradigm, I feel that procedural modelling might eventually offer the breakthrough in meaning and knowledge representation that some WITTGENSTEINian ideas seem to have asked for thirty years ago:

We constantly compare language with a calculus proceeding according to exact rules. This is a very one-sided way of looking at language. [...] For not only do we not think of the rules of usage - of definitions, etc. - while using language, but when we are asked to give such rules, in most cases we aren't able to do so. We are unable clearly to circumscribe the concepts we use; not because we don't know their real definition, but because there is no ''real'' definition to them. [...] Why then do we constantly compare our use of words with one following exact rules? The answer is that the puzzles which we try to remove always spring from just this attitude towards language. [3]

4  Problems of Semantics

Semantics, the study of meaning, has become focal within a number of hitherto specialized disciplines. Scholars engaged in these research activities converge on a common interest in natural and artificial language systems or the communicative property related to their use. They differ, however, in what from their points-of-departure constitutes semantics. The variety of aspects raised ranges from the analysis and representation of natural language meanings as conveyed by signs, words, sentences and texts, via conceptual knowledge, memory structure, and logical inferencing, to the modelling and simulation of processes of cognition and comprehension.

Whereas more philosophically oriented investigations of knowledge, thought, and language have had a long-standing tradition of posing practical questions and producing theoretical answers, the inverse has become characteristic, meanwhile, of more recent approaches in the domain of cognitive and information sciences. Relevant research currently being undertaken in different disciplines reveals a growing tendency - in spite of severe theoretical problems connected with the ontological status of meaning - to come up with limited solutions which are practical in the sense that they are applicable to or reconstructable within operational models of some sort.

It is this kind of distinction between ''theory'' and ''model'' - after having been made and practiced in the sciences and in engineering for some time now - that promises to become fertile for some computational approaches in linguistic semantics and the cognitive sciences. In these disciplines, general theories may still informally be assumed or heuristically be developed, but only in such a way that certain components of them may be specified and studied in small-scale models. Preferably implemented as computer programs, these allow for the algorithmic simulation of processes in order to study their properties and results, and thereby to test, evaluate, or modify assumptions made by the large-scale theory. In this way it is hoped that informal theories may gradually be turned into procedural models as envisaged by KAREN SPARCK-JONES only recently:

In particular, in bringing the philosophical and computational approaches together in practice, many mismatches have to be removed since each party tends to be subtle where the other is simple (for example in relation to parsing, or reference). Somewhat similarly, within the framework of artificial intelligence, while the connection between linguistic and world knowledge (as internally represented) is made and effectively exploited, taking on board the multiplicity of word senses we recognize would impose wholly new strains on systems primarily focussing on the relation between sentence and message structure and that of the reference world. [4]

The reason for the somewhat optimistic expectations in this respect is that semanticists from whatever discipline will probably agree on a common basis for the three major problems presented by the study of natural language meaning, namely the fundamental assumption that meaning is (or can be reconstructed as) some relational structure.

This assumption, however, may differently be addressed:

firstly, as the denotational aspect of how the signs, words, and sentences of a language are related to the entities (objects and/or processes) they refer to in the external world, constituting referential meaning as a system of extra-lingual relations;

secondly, what is known as the connotational aspect of how signs, words, and sentences of a language are related to one another, constituting structural meaning as a system of sub-systems of intra-lingual relations, and

thirdly, what is referred to as the dynamic aspect of how signs, words, and sentences of a language are related to functions which instantiate varying restrictions on possible choices of (referential and/or structural) meaning representations, constituting procedural meaning as a system of procedures that operate on and simultaneously reorganise the conceptual data of memory and/or knowledge.

To start with the denotational aspect of meaning, referential semantic theory has developed along the lines of FREGE [5], RUSSELL [6], the early WITTGENSTEIN [7], and CARNAP [8]. Their relevance to linguistics, which has only been recognized during recent years, has resulted in a number of approaches which employ formal logics as a representational notation for natural language expressions. These are assumed to have essentially declarative meaning which is analysable in propositional structures that are either `true' or `false', or have a third value such as `indeterminate'. Like the truth-conditions of formal predicates or propositions, those for natural language sentences are modelled and introduced in terms of classical set theory. Accordingly, the meaning of a word basically appears to be identifiable with its compositional function in the propositions it may constitute. These, in turn, are interpreted by their denotations, defined either extensionally as a set of points of reference, or intensionally as a set of satisfied properties in the universe of possible worlds, allowing truth-values to be assigned to any (declarative) natural language sentence represented in this way.
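
To make the set-theoretic machinery concrete, here is a minimal sketch (in Python, with assumed toy data; it is not drawn from any of the works cited) of an extensional interpretation: predicates denote sets over a domain of individuals, and the truth value of a simple proposition follows from set membership and inclusion.

```python
# Minimal sketch (assumed toy model) of extensional, model-theoretic interpretation.

domain = {"susy", "rex", "moby"}                 # universe of discourse
extension = {                                    # denotations of one-place predicates
    "cat":    {"susy"},
    "animal": {"susy", "rex", "moby"},
}

def holds(predicate, individual):
    """Truth value of 'predicate(individual)' in this model."""
    return individual in extension.get(predicate, set())

def all_are(pred_a, pred_b):
    """Truth value of 'every A is a B', i.e. extension(A) is a subset of extension(B)."""
    return extension.get(pred_a, set()) <= extension.get(pred_b, set())

print(holds("cat", "susy"))        # True
print(all_are("cat", "animal"))    # True
```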

Truth-value models of this kind tend to exhibit all the formalisms and idealizing abstractions that the logical rigour of binary formal systems calls for. They do so, however, at the price of a rather limited coverage of basic and very obvious characteristics of natural language meaning, like for example, indeterminacy, vagueness, variation, con- and co-textual dependency, etc. These phenomena cannot be accounted for adequately by model-theoretic approaches, which have considered them more or less negligible noise factors.

Unlike referential semantics, structural semantic theory has primarily been concerned with word meaning. Here, the phenomena underlying the noise factors just mentioned have been considered fundamental to the constitution of natural language meaning. Structuralists have therefore been concerned with the question of how the lexical meanings of words - other than being reconstructable from propositions relating language terms to extra-lingual entities - might be understood as being intra-lingually related to one another, constituting a (syntagmatically and/or paradigmatically) structured system of overlapping sub-systems of `lexical fields' which organize the world as a universe of potential discourse. According to structural theory, the meaning of each term depends on the position it occupies in that system. It is argued that - although a term's references may be indeterminate, varying with different contexts - the position of each term relative to the others in these sub-systems will nevertheless be defined with precision. This idea of `structural determinacy' as opposed to `referential vagueness' can be traced in the works of linguists like SAUSSURE [9], HJELMSLEV [10], and WEISGERBER [11], down to COSERIU [12], HALLIDAY [13] and LYONS [14], and it has inspired empirical research in non-linguistic disciplines like the ethno-sciences and cognitive and experimental psychology. Represented either as conceptual cores (prototypes) determined by sets of functional and/or perceptual descriptors (schemata), or as linguistically labeled concepts (nodes) and relations (links) between them, these model constructions of both the `memory'-type as developed in cognitive psychology and the `network'-type as advanced in artificial intelligence converge on the processual character of what constitutes meaning and, hence, cognition and comprehension.

Unlike referential and structural notions of meaning, procedural semantic theory is still primarily an instrumental approach, not an analytical or descriptive one, let alone a fully fledged theory. The relational but basically static structures established both by the truth-functional analysis of sentences and by the evaluative description of words are superseded by an essentially dynamic approach which identifies meaning with the execution of goal-oriented procedures. These can be formulated to operate on and use those structures which the other semantic approaches are able to provide as knowledge bases. Triggered by language terms, the procedures will not only allow for the activation of relevant sectors of these system structures but may simultaneously be used to model learning and forgetting as processes of reorganization and/or modification of knowledge systems. Complex formal deduction as well as contents-driven reasoning involved in these processes seem to be reconstructable by combining only a limited number of apparently fundamental operations. Some of these have been employed and tested by psychologists in concept processing experiments of sorting, matching, and attainment tasks, and some have been implemented and used by information scientists in database and knowledge representation systems as storage, identification, and retrieval procedures. Combining the features of the descriptor-type and the network-type models on the basis of works by scholars like JOHNSON-LAIRD [15], MILLER [16], and ELEANOR ROSCH [17] in psychology, and MINSKY [18], SCHANK [19], and WINOGRAD [20] in artificial intelligence, the procedural paradigm adopted for computational linguistics will hopefully bring these three disciplines more closely together, to the benefit also of `terminology' and `knowledge engineering'.

5  Knowledge Structure

What sort of systems have different disciplines developed to represent and structure the knowledge they consider essential or relevant in investigating natural language meaning?

In linguistics, the representation of word-meanings by tables of markers distinguishing e.g. human, male, unmarried adults as BACHELORs from human, female, married adults as WIVES is still part of introductory courses. A more sophisticated version of it employs a tree-structure to represent hierarchical dependencies of lexical meanings related by common markers and differing distinguishers, as the example from NIDA [21] shows (Fig. 1).

Figure 1

Fig. 1: Componential tree-representation of word meaning

Here, groups of lexical meanings of CHAIR are marked (in round brackets) according to object/role use on the first level, human/nonhuman use on the second, etc., ending with a final list of distinguishers (in square brackets) which are considered (non-compound) properties sufficient to characterize the identified meanings.
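
A minimal sketch (assumed notation and toy data, not NIDA's own) may illustrate the all-or-nothing character of such a marker representation: a lexical meaning either satisfies all of its markers or it does not, with no room for intermediate cases.

```python
# Sketch of a componential marker table (assumed toy data): each lexical meaning
# is a set of binary markers, and sense identification is a strict match.

markers = {
    "BACHELOR": {"human": True, "adult": True, "male": True,  "married": False},
    "WIFE":     {"human": True, "adult": True, "male": False, "married": True},
}

def matches(features, lexeme):
    """True only if every marker of the lexeme is satisfied exactly."""
    return all(features.get(m) == v for m, v in markers[lexeme].items())

sample = {"human": True, "adult": True, "male": True, "married": False}
print([w for w in markers if matches(sample, w)])   # ['BACHELOR']
```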

If, however, one compares the left-most branch of this tree and its bottom-line information with the puzzle that BLACK's [22] chair museum (Fig. 2) poses (at least to German speakers of Markerese), at least one of the problems of this representational device becomes obvious: it cannot depict gradual transitions.

Figure 2
Fig. 2: The chair museum (section)

In early artificial intelligence a different type of knowledge representation was developed for question-answering systems. A fragment of the most common schema of the semantic network type according to WINOGRAD [23] is shown in Figure 3.

Figure 3

Fig. 3: Semantic net representation

Here again we have labeled concept nodes linked to one another by pointers representing labeled relations, which form a network instead of a tree structure. This enables the system to answer questions like ''Is Susy a cat?'' correctly by identifying the SUSY-node, its ISA-relation pointer and the CAT-node. Moreover, the pointer structure allows for the processing of paths laid through the network, initiated by questions like ''Susy, cat?'', which will prompt the answer ''Susy isa cat. Cat eats fish. Cat is an animal. Fish is an animal.''
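
A small sketch (assumed toy data, loosely modelled on the WINOGRAD-type fragment of Fig. 3) may show how such ISA pointers are chased to answer yes/no questions, and how a path laid through the network yields the kind of answer quoted above.

```python
# Sketch of a semantic network (assumed toy data): labelled nodes linked by
# labelled relations, queried by following pointers.

network = {
    "SUSY": [("ISA", "CAT")],
    "CAT":  [("ISA", "ANIMAL"), ("EATS", "FISH")],
    "FISH": [("ISA", "ANIMAL")],
}

def isa(node, target):
    """Answer 'Is <node> a <target>?' by chasing ISA pointers."""
    if node == target:
        return True
    return any(isa(dest, target)
               for rel, dest in network.get(node, []) if rel == "ISA")

def answer_path(node):
    """Collect the relations reachable from a node as the system's answer path."""
    seen, agenda, facts = set(), [node], []
    while agenda:
        n = agenda.pop(0)
        if n in seen:
            continue
        seen.add(n)
        for rel, dest in network.get(n, []):
            facts.append(f"{n} {rel} {dest}")
            agenda.append(dest)
    return facts

print(isa("SUSY", "CAT"))     # True
print(answer_path("SUSY"))    # ['SUSY ISA CAT', 'CAT ISA ANIMAL', 'CAT EATS FISH', 'FISH ISA ANIMAL']
```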

Probably one of the most familiar forms of concept representation which experimental psychologists like e.g. COLLINS/QUILLIAN [24] and KLIX [25] have set up and tested in the course of their developments of memory models is again a tree-like graph (Fig. 4).

Figure 4

Fig. 4: Conceptual hierarchy representation

Here we have a hierarchy of labeled concept nodes with predicates and attributes linked to them, which are inherited by directly dependent nodes. The hypotheses formulated and tested in experiments predict that test persons will take more time to identify and decide upon given propositions as the number of node- and level-transitions to be processed in the course of interpretation increases. Evaluating a sentence like ''A canary can sing'' will take less time than deciding whether the sentence ''A robin can breathe'' is true or not. Thus, reaction-time serves as an indicator for the proposed model structure either being correct or in need of modification.
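
The following sketch (an assumed toy hierarchy, in the spirit of the COLLINS/QUILLIAN model) shows how a property is verified by inheritance along ISA links; the number of level transitions processed serves as a stand-in for the predicted reaction time.

```python
# Sketch of a conceptual hierarchy with property inheritance (assumed toy data).

superordinate = {"CANARY": "BIRD", "ROBIN": "BIRD", "BIRD": "ANIMAL"}
properties = {
    "CANARY": {"can sing"},
    "BIRD":   {"has wings"},
    "ANIMAL": {"can breathe"},
}

def verify(concept, prop):
    """Return (truth value, number of level transitions processed)."""
    steps, node = 0, concept
    while node is not None:
        if prop in properties.get(node, set()):
            return True, steps
        node = superordinate.get(node)    # inherit from the node one level up
        steps += 1
    return False, steps

print(verify("CANARY", "can sing"))     # (True, 0)  -- verified immediately
print(verify("ROBIN", "can breathe"))   # (True, 2)  -- two transitions, hence slower
```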

A schematic representation of concept relatedness as envisaged by cognitive theorists who work along more procedural lines of memory models, such as COLLINS and LOFTUS [26], is shown in Figure 5.

Figure 5

Fig. 5: Associative net representation

Their distance-relational conception lends itself readily to the notion of stereotype representation for concepts that do not have intersubjectively identifiable sharp boundaries. Instead of binarily decidable category membership, stereotypical concepts or prototypes are determined by way of their adjacency to other prototypes. Taken as a memory model, stimulation of a concept will initiate spreading activation to prime the more adjacent concepts more intensely than those farther away in the network structure, thus determining a realm of concepts related by their primed semantic affinity. In the given example, the stimulation of the concept-node MANAGEMENT will activate that of BUSINESS first, then INDUSTRY and ORGANISATION with about the same intensities, then ADMINISTRATION and so on, with the intensities decreasing as a function of the activated nodes' distances.
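
A simple sketch (assumed link distances, following the COLLINS/LOFTUS idea rather than any published implementation) may illustrate how activation spreads from a stimulated node, with intensities decreasing as a function of the activated nodes' distances.

```python
# Sketch of spreading activation over an associative net (assumed toy distances).

links = {   # associative links with distances (smaller = more adjacent)
    "MANAGEMENT":   {"BUSINESS": 1.0, "ORGANISATION": 2.0, "INDUSTRY": 2.0},
    "BUSINESS":     {"INDUSTRY": 1.5, "ADMINISTRATION": 2.5},
    "ORGANISATION": {"ADMINISTRATION": 1.5},
}

def spread(source, start=1.0, decay=0.5, floor=0.05):
    """Propagate activation outwards; each link attenuates it by decay/distance."""
    activation = {source: start}
    agenda = [source]
    while agenda:
        node = agenda.pop(0)
        for neighbour, dist in links.get(node, {}).items():
            primed = activation[node] * decay / dist
            if primed > activation.get(neighbour, 0.0) and primed > floor:
                activation[neighbour] = primed
                agenda.append(neighbour)
    return sorted(activation.items(), key=lambda kv: -kv[1])

print(spread("MANAGEMENT"))
# [('MANAGEMENT', 1.0), ('BUSINESS', 0.5), ('ORGANISATION', 0.25),
#  ('INDUSTRY', 0.25), ('ADMINISTRATION', 0.1)]
```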

These four schemata of lexical structures in linguistic semantics, memory models in cognitive psychology, and semantic networks in AI-research have in common that they use directed graphs as the basic format of their models. As model structures for semantic representation they are well designed to deal primarily with static aspects of meaning and knowledge. Thus, in interpreting input symbols/strings, pre-defined/stored meaning relations and constructions will be identified and their representations retrieved. Possibly distorted or modified instantiations of such relations, or relevant supplementary semantic information, however, can hardly be recognized or provided within such systems. As the basic data is not taken from natural language discourse in communicative environments but is elicited in experimental settings by exploring either one's own or a test person's linguistically relevant cognitive or semantic capacities, usage similarities of different language items, or contextual variations of identical ones, are difficult to ascertain.

This is rather unsatisfactory from a procedural linguist's point-of-view, because the structural format for word meaning as well as world knowledge ought to be based upon linguistic data produced by real speakers/writers in factual acts of communicative performance, in order to let new meaning representations (or fragments of them) replace (or improve) older ones, simultaneously changing and updating the knowledge base concerned and rendering its static model a dynamic one.

6  Word Meaning

Analysing directly how people use language terms in texts, instead of asking them how they think they use and understand these terms, will ensure that such an approach can replace (or improve) old meaning representations with newer ones as soon as people change the way they use terms. A system designed on the basis of such an empirical approach to meaning representation from the analysis of term usage in texts would be able to change and update its knowledge structure simultaneously and render its static model a dynamic one.

In a rather sharp departure from more traditional, introspective data acquisition in semantic and knowledge representation research, my research group, formerly at the Aachen Technical University, has been engaged in developing non-parsing, lexicon-independent strategies for the processing of natural language texts with a view to an automatic assessment of their contents. Based upon statistical means for the empirical analysis and the formal representation of vague word meanings in natural language texts, procedures have been devised which allow for the systematic modelling of a fragment of the lexical structure constituted by the vocabulary employed in the texts, as an essential part of the concomitantly conveyed world knowledge concerned.

In a first step [27], the statistical coefficients applied map lexical items onto fuzzy subsets of the vocabulary according to the numerically specified regularities with which these items have been used in the discourse analysed. The resulting system of sets of fuzzy subsets is a data structure which may be interpreted topologically as a hyperspace with a natural metric. Its elements are abstract meaning representations, and the distances between them represent their mutual meaning differences. They form discernible clouds and clusters which are determined by the mapping function, a composite of any one lexical item's usage regularity and its differences calculated against those of all other items occurring in the texts. Thus, the analysing algorithms take natural language texts as input and produce as output a distance-like data structure, called semantic space, of linguistically labelled elements, called meaning points, whose topological properties (position, adjacency, environment, etc.) reveal associative properties of the conceptual prototypes according to which their linguistic labels have been employed as lexical items in the texts processed.
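
A much simplified sketch may convey the general idea (the statistical coefficients actually employed are more elaborate than the plain frequency profiles and Euclidean distances assumed here): each lexical item receives a profile of usage regularities computed from the texts, and the differences between such profiles define the distances between the labelled meaning points of the semantic space.

```python
# Much simplified sketch of the first step (assumed toy corpus; the coefficients
# of the actual procedure are more elaborate than plain frequency profiles).

import math
from collections import Counter

texts = [
    "arbeit stelle industrie unternehmen".split(),
    "industrie computer system technik".split(),
    "arbeit stelle verwaltung organisation".split(),
]
vocabulary = sorted({w for t in texts for w in t})

def usage_profile(item):
    """Frequency profile of an item across the texts (stand-in for the
    usage-regularity coefficient)."""
    return [Counter(t)[item] for t in texts]

def distance(a, b):
    """Meaning difference as the Euclidean distance between usage profiles."""
    pa, pb = usage_profile(a), usage_profile(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

def environment(item):
    """Topological environment: all other items ordered by increasing distance."""
    return sorted((round(distance(item, w), 3), w) for w in vocabulary if w != item)

print(environment("arbeit")[:5])   # the nearest meaning points around 'arbeit'
```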

In a second step [28], procedures were developed to allow for a contents-driven reorganisation of the meaning points represented in the semantic space structure. This was achieved by separating the present format of a basically static, distance-relational representation of prototype word meanings from their latent dependency-relational organisation, which is induced algorithmically in our model. The procedure used is essentially based on an optimal-spanning-tree algorithm; it generates a hierarchy of meaning points organised according to their decreasing degree of relevance, depending on the semantic aspect or point-of-entry to the meaning representation system.

ARBEIT 0.000
ALLGEMEIN 8.332 ANBIET 8.756 AUSGAB 10.392
STADT 10.711 PERSON 11.075 LEHR 11.811
GEBIET 11.831 VERBAND 12.041 UNTERNEHM 12.130
VERKEHR 12.312 HERRSCH 12.362 VERANTWORT 12.543
EINSATZ 13.980 STELLE 14.120 WERB 15.561
ORGANIS 16.146 VERWALT 16.340 MODE 16.842
GESCHAEFT 16.873 UNTERRICHT 18.275 BITT 19.614
... ... ...
Fig. 6.1: Topologic environment of ARBEIT (labour)


INDUSTRI 0.000
SUCH 2.051 ELEKTRON 2.106 LEIT 2.369
BERUF 2.507 SCHUL 3.229 SCHREIB 3.329
WIRTSCHAFT 3.659 COMPUTER 3.667 FAEHIG 3.959
SYSTEM 4.040 ERFAHR 4.294 KENN 5.286
DIPLOM 5.504 TECHNI 5.882 UNTERRICHT 7.041
ORGANIS 8.355 WUNSCH 8.380 BITT 9.429
STELLE 11.708 UNTERNEHM 14.430 STADT 16.330
GEBIET 17.389 VERBAND 17.569 PERSON 18.938
AUSGAB 19.302 ANBIET 20.335 ALLGEMEIN 21.685
... ... ...
Fig. 6.2: Topologic environment of INDUSTRI (industry)

To give an idea of what the semantic space data looks like and what its dynamic reorganisation under certain semantic aspects will produce, here are some examples of topological environments of meaning points in the semantic hyperspace structure computed from a corpus of German newspaper texts [29] as centred around ARBEIT and INDUSTRI (Fig. 6.1 and 6.2).

Figure 7.1

Fig. 7.1: Dispositional dependency structure (DDS) of ARBEIT (labour)

Figure 7.2

Fig. 7.2: Dispositional dependency structure (DDS) of INDUSTRI (industry)

As I cannot comment in any detail on the dynamic re-structuring procedures operating on these environments, let me point to the fact that in the semantic space we have a basic knowledge structure whose symmetrical and reflexive, but non-transitive distance relation is transformed algorithmically into a transitive dependency relation, allowing specific sectors or fragments of the semantic space structure to be selected automatically according to the aspect chosen. The resulting hierarchy of meaning points, called the root-node's ''semantic'' or ''dispositional dependency structure'' (SDS or DDS), as given in Figures 7.1 and 7.2, has been employed not only to calculate degrees of relevance of meanings under certain aspects [30], but may also be used as a base structure for the simulation of contents-driven, semantic reasoning as opposed to deductive propositional inferencing [31].
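
A schematic sketch may indicate how this transformation can be carried out: starting from the chosen aspect (the root meaning point), each remaining point is attached to the nearest point already in the hierarchy. The plain Prim-style least-distance attachment and the toy distances below are assumptions for illustration; the relevance weighting of the actual procedure is richer.

```python
# Sketch of deriving a rooted dependency hierarchy (DDS) from a distance
# structure by least-distance attachment (assumed toy distances).

distances = {
    ("ARBEIT", "STELLE"): 14.1, ("ARBEIT", "UNTERNEHM"): 12.1,
    ("ARBEIT", "INDUSTRI"): 16.0, ("STELLE", "INDUSTRI"): 11.7,
    ("UNTERNEHM", "INDUSTRI"): 14.4, ("STELLE", "UNTERNEHM"): 9.8,
}

def dist(a, b):
    return distances.get((a, b), distances.get((b, a), float("inf")))

def dependency_structure(root, points):
    """Attach each remaining point to its nearest already-attached point,
    starting from the root chosen as aspect or point of entry."""
    attached, edges = [root], []
    remaining = [p for p in points if p != root]
    while remaining:
        d, child, parent = min((dist(p, q), p, q)
                               for p in remaining for q in attached)
        edges.append((parent, child, d))
        attached.append(child)
        remaining.remove(child)
    return edges

print(dependency_structure("ARBEIT", ["ARBEIT", "STELLE", "UNTERNEHM", "INDUSTRI"]))
# [('ARBEIT', 'UNTERNEHM', 12.1), ('UNTERNEHM', 'STELLE', 9.8), ('STELLE', 'INDUSTRI', 11.7)]
```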

7  Definition of Terms

Coming to my final and sixth point of term definition, I certainly will not venture to present this distinguished assembly of aggregated expertise in Terminology and Knowledge Engineering with just another illustration of LICHTENBERG's aphorism (new view through old holes).

What I would like to say, however, is that from a linguistic point-of-view, as term definition appears to be a matter of special purpose languages, or sub-languages, the terminologists come into the picture once the linguistic semanticists have done their job properly. Deplorably enough, this is not yet the case, although work is in progress. It appears, however, that in both fields, computational linguistics and knowledge representation, particular interest has been directed towards the investigation of more dynamic structures in modelling word meaning and world knowledge [32]. Results from this new perspective will hopefully soon be available to help in defining terms and engineering knowledge by way of a rule-governed, knowledge-based, systematic explication of the unambiguous use of language items for special communicative purposes in present and future environments.

8  References

1. LICHTENBERG, G.C.: Aphorismen. Ed. by Leitzmann, A. Nendeln, Liechtenstein: Kraus Reprint, 1968, F 871.

2. SUPPES, P.: Probabilistic grammars for natural languages. In: Davidson, D./Harman, G. (eds.). Semantics of natural language. Dordrecht: Reidel, 1972, pp. 741-762.

- Procedural semantics. In: Haller, R./Grass, L.W. (eds.). Proceedings of the 4th International Wittgenstein Symposion Kirchberg. Wien: Holder & Pichler, 1979, pp. 27-35.

3. WITTGENSTEIN, L.: The blue and brown books. Edited by Rhees, R. Oxford: Blackwell, 1958, pp. 24-26.

4. SPARCK-JONES, K.: Synonymy and classification. Edinburgh: University Press, 1986, p. 7.

5. FREGE, G.: Ueber Sinn und Bedeutung. In: Patzig, G. (ed.). Gottlob Frege. Funktion, Begriff, Bedeutung. Göttingen: Vandenhoeck & Ruprecht, 1969, pp. 40-65.

6. RUSSELL, B./WHITEHEAD, A.N.: Principia mathematica. Cambridge: University Press, 1973, Vol. 1-3.

7. WITTGENSTEIN, L.: Tractatus logico-philosophicus. Frankfurt: Suhrkamp, 1971.

8. CARNAP, R.: Logische Syntax der Sprache. Wien/New York: Springer, 1968.

9. SAUSSURE, F. DE: Cours de linguistique générale. Kritische Edition von R. Engler. Wiesbaden: Backhaus, 1967.

10. HJELMSLEV, L.: Für eine strukturelle Semantik. In: Hjelmslev, L. (ed.). Aufsätze zur Sprachwissenschaft. Stuttgart: Klett, 1974.

11. WEISGERBER, L.: Zur innersprachlichen Umgrenzung des Wortfeldes. Wirkendes Wort (1951/52), no. 2, pp. 138-143.

12. COSERIU, E.: Einführung in die strukturelle Betrachtung des Wortschatzes. Tübingen: Niemeyer, 1970.

13. HALLIDAY, M.A.K.: Language as social semiotic: The social interpretation of language and meaning. London: Arnold, 1978.

14. LYONS, J.: Structural semantics. Oxford: Blackwell, 1963.

15. JOHNSON-LAIRD, P.N.: Procedural semantics. Cognition (1977), no. 5, pp. 189-214.

16. MILLER, G.A./JOHNSON-LAIRD, P.N.: Language and perception. Cambridge, U.K.: CUP, 1976.

17. ROSCH, E.: Cognitive representations of semantic categories. Journal of Experimental Psychology: General (1977), vol. 104, no. 3, pp. 192-233.

18. MINSKY, M.L.: Semantic information processing. Cambridge/London: MIT Press, 1968.

19. SCHANK, R.C.: Conceptual dependency theory. In: Schank, R.C. (ed.). Conceptual information processing. Amsterdam: North Holland, 1975, pp. 22-82.

20. WINOGRAD, T.: Language as a cognitive process. Reading, Mass./London: Addison-Wesley, 1983, Vol. I: Syntax.

21. NIDA, E.A.: Toward a science of translating. Leiden: Brill, 1964.

22. BLACK, M.: Vagueness. An exercise in logical analysis. Philosophy of Science (1937), no. 4, pp. 427-455.

23. WINOGRAD, T.: Frame representation and the declarative/procedural controversy. In: Bobrow, D.G./Collins, A. (eds.). Representation and understanding. New York/San Francisco/London: Academic Press, 1975, pp. 185-210.

24. COLLINS, A.M./QUILLIAN, M.R.: Retrieval time from semantic memory. Journal of verbal learning and verbal behaviour (1969), no. 8, pp. 240-247.

25. KLIX, F.: Strukturelle und funktionelle Komponenten des menschlichen Gedächtnisses. In: Klix, F. (ed.). Psychologische Beiträge zur Analyse kognitiver Prozesse. Berlin: Akademie-Verlag, 1976, pp. 57-98.

26. COLLINS, A.M./LOFTUS, E.F.: A spreading activation theory of semantic processing. Psychological Review (1975), no. 6, pp. 407-428.

27. RIEGER, B.B.: Feasible fuzzy semantics: On some problems of how to handle word meaning empirically. In: Eikmeyer, H.J./Rieser, H. (eds.). Words, worlds and contexts: New approaches in word semantics. Berlin/New York: de Gruyter, 1981, pp. 193-209.

28. RIEGER, B.B.: Lexical relevance and semantic disposition: On stereotype word meaning representation in procedural semantics. In: Hoppenbrouwers, G./Seuren, P./Weijters, A. (eds.). Meaning and the lexicon. Dordrecht: Foris Publications, 1985, pp. 387-400.

29. DIE WELT, first two pages of 1964 editions.

30. RIEGER, B.B.: Semantic relevance and aspect dependency in a given subject domain. In: Walker, D.E. (ed.). COLING 84 - Proceedings of the 10th Intern. Conference on Computational Linguistics. Stanford: Stanford U.P., 1984, pp. 298-301.

31. RIEGER, B.B.: Stereotype representation and dynamic structuring of fuzzy word meanings for contents-driven semantic processing. In: Agrawal, J.C./Zunde, P. (eds.). Empirical foundations of information and software science. New York/London: Plenum Press, 1985, pp. 273-291.

32. WEISCHEDEL, R.M.: Knowledge representation and natural language processing. Proceedings of the IEEE, July 1986, pp. 905-920.


Footnotes:

1 Published in: Czap, H./Galinski, C. (Eds): Terminology and Knowledge Engineering (Volume 2), Frankfurt/M. (Indeks) 1988, pp. 25-41.