Tree-like Dispositional Dependency Structures
for non-propositional Semantic Inferencing.
On a SCIP approach to natural language understanding by
machine
Burghard B. Rieger
FB II: Department
of Computational Linguistics, University of Trier,
Germany
E-Mail: rieger@ldv.uni-trier.de
Abstract
Arguing for the semiotic modeling of natural
language understanding by machine is to follow a procedural stance
of approach focusing on processes of meaning constitution.
These can be typified in pragmatic situations of
performative language games which may be analyzed
empirically, described formally, and simulated computationally. In
doing so, graph theoretical tools have been employed and new tree
structures developed which allow both to restrict the relational
manifold in high-dimensional vector space structures computed as
fuzzy word meaning representations, and to visualize semantically
motivated relevancies emerging from such restrictions as
reflexive, non-symmetric, and (weakly) transitive dependency
relations among them. As a basal, context-sensitive form of
reorganizing distributionally represented fuzzy entities, the
tree-like dispositional dependency structures (DDS) serve as a
non-propositional format for conceptual associations and semantic
inferencing by machine, as opposed to propositional reasoning
based on truth-functional constraints. After a short
introduction into semiotic cognitive information processing
(SCIP) and the text analyzing and meaning representational
formalisms employed, DDS tree generation will be
discussed, and some examples given to illustrate the
algorithms' semantic inferencing potential as computed
from and performed on a sample of German newspaper texts.
1 Introduction
In view of semiotic processes like understanding
natural language sign structures the modeling enterprise is
aggravated by the lore of thinking in traditional terms of
(modern) linguistics, cognitive psychology, and artificial
intelligence approaches. These tend to replicate computationally
what is believed to be known about (fragments of) human processing
instead of developing computational models which might (or might
not) correspond to some of that knowledge, but whose functional
results are equivalent (perhaps inferior, or even superior) to
very obvious human processing capabilities1.
Arguing for computational models in this sense is to ask for a
genuinely procedural extension to cognition and cognitive
modeling, trying to avoid rather than employ traditional
conceptualizations for a chance to find possible solutions to
problems differently posed. In other words, sentence
parsing and generation, knowledge based
interpretation, rule based inferencing (to name only
the most salient) can be viewed as very particular abstractions
(and models derived) of humans' general capabilities to employ
signs and to constitute meanings that can be understood. It may be
argued that these abstractions, however seminal in many
respects, have impeded rather than advanced adequate
computational modeling of e.g. discourse understanding, language
and knowledge acquisition, adaptive learning and knowledge
modification, dynamic reasoning with fuzzy, associative, and
uncertain concepts, etc. that human beings normally are able to
perform with ease.
Due to the centrality of semiosis, and its
pivotal role in natural language understanding, the concept of
Semiotic Cognitive Information Processing systems (SCIPS)
was developed to simulate the process of sign and/or
meaning constitution by machine without (necessarily) replicating
these processes as enacted by humans. The modeling of processes of
meaning constitution as typified in pragmatic
situations of performative language
games basically follows an ecological
systems theoretical approach, placing an
information processing system into an environment
whose structural coupling is
mutually achieved by processes of (material, energetic,
informational) mediation between them. The generality of this
concept lends itself easily to accommodate hierarchically
structured restrictions on recursive modes of processing. Thus,
data processing is defined as manipulation of data
according to predefined rules, and may be distinguished from
information processing which comprises the
interpretation of data according to given and pre-established
codes;
cognitive information processing will be called any
information processing whose interpretations are not codified but
have to be derived from sets of principled structures and
according to certain mechanisms either internal or external to the
processing system and described as its knowledge; and finally
semiotic cognitive information processing will be
restricted to such cognitive or knowledge based information
processing whose knowledge-internal or external to the
system-is not just made available but is instead acquired,
structured, represented, and/or modified by the system's own
processing according to its capabilities and intrinsic principles.
Semiotic cognitive information processing (SCIP)
capabilities appear to serve a double purpose, as a means of
structuring an environment as perceived by an (artificial
or natural) information processing system, and as a
means of representing this structure in order to communicate it to
other systems. In allowing not only for an (internally)
representational processing whose states may provide stimuli for
further action, but also for the (externalized) representations of
the course or states of such processing in forms of agglomerated
sign structures, this is roughly what understanding
natural language translates to in a semiotic
system-environment situation.
2 Modeling Language Understanding
The dramatic increase of computational power and symbol
manipulation means has changed the fundamentals of many scientific
disciplines, creating even new ones. Apparently, however, it has left
linguistically oriented disciplines adhering to
seemingly well grounded and traditionally dignified concepts (like
phrase and sentence, predicate and
proposition, grammatical correctness and formal truth,
etc.) in describing natural language structures. Considering our
as yet very limited understanding of natural language
understanding, it may well be suspected that some of the problems
encountered are due to inadequate conceptions and corresponding
representational formats employed in depicting and manipulating
linguistic entities (elements, structures, processes, and
procedures) considered to be of interest or even essential to the
understanding of the communicative use of natural languages by
humans.
2.1 Different Approaches
An earlier attempt to classify model
constructions as produced in cognitive science had distinguished
three types of modeling approaches: the cognitive, the
associative, and the enactive. Whereas the first
two approaches draw on the traditional rationalistic paradigm of
mind-matter duality (the former static and unable to adapt, the
latter dynamic and able to learn) by assuming the existence of
external world structures and internal representations of them,
the third type does not.
Instead of assuming an external world and the systems' internal
representations of it, some unity of mutual relatedness
(structural coupling) is considered to be fundamental to, and
the (only) condition for, any abstracted or acquired duality in
concepts of the external and internal, object and subject, reality
and any experience of it which might evolve. Considering
the importance that the notions of formatting and representation
(both internal and external to an information processing system)
have gained in tracing processes on the grounds of their
observable or resulting structures, it appears to be justified to
add a fourth type, the semiotic. It is
focused on the concept of semiosis and may be
characterized by the process of enactment too,
complemented, however, by the representational impact. This is
considered fundamental to the distinction of e.g. cognitive
processes from their structural results which, only
due to some traces these processes leave behind, may emerge in
forms of knowledge. Its different representational modes
comply with different forms of activation that allow for the
distinction of internal or tacit knowledge (i.e.
memory) on the one hand, and of external or
declarative knowledge (i.e. symbolic representations like
language structures) on the other.
According to the above types of cognitive modeling,
computational semiotics can be characterized as aiming at the
dynamics of meaning constitution by simulating processes of
multi-resolutional representation within the
frame of an ecological information processing paradigm.
When we take human beings to be systems
whose knowledge based processing of represented
information makes them cognitive, and whose sign and
symbol generation, manipulation, and understanding capabilities
render them semiotic, we may do so due to our own daily
experience of these systems' outstanding ability for representing
results of cognitive processes, organizing these representations,
and modifying them according to changing conditions and states of
system-environment adaptedness.
2.2 Computational Processing
Computational systems for natural language processing
are based upon relevant findings in computational linguistics (CL)
and artificial intelligence (AI) research. Operational systems for
natural language analysis and generation by machine require
correct structural descriptions of input strings and their
semantic interpretations. By and large, this is provided (for
different languages differently) by rule based representations of
(syntactic and lexical) linguistic knowledge and of
(referential and situative) segments of domain specific world
knowledge which grammar formalisms and deductive inferential
mechanisms can operate on. This kind of cognitive (or
knowledge-based) language processing (using monotone logics,
symbolic representations, rule-based operations, sequential
processing, etc.) and the statics of its representational
structures were challenged, although for differing reasons, by
connectionist and empirical approaches. These were particularly
successful in simulating dynamic properties of cognitive natural
language processing (based on the theory of dynamic systems,
sub-symbolic or distributed representation, numerically continuous
operations, parallel processing, etc.). New insights were gained
into the wealth of structural patterns and functional relations as
observed in very large language corpora2 of
communicative natural language performance as specified by models
of quantitative and statistical analyses (based on probability and
possibility theory, stochastic and fuzzy modeling, numerical
mathematics and non-monotone logics, strict hypothesizing and
rigorous testing, etc.). Language regularities and
structures which are empirically traceable but may not easily be
identified within the categorial framework of established
linguistic concepts3, were discovered by the empirical study of
performative language phenomena providing valuable new
insights and explanations because of a broader coverage of
language material, and due to the new methods complementary to
those of competence centered linguistics. Moreover,
empirical approaches allow for quantitative-statistical as well as
fuzzy-theoretical model constructions which promote a more
semiotic understanding of the functioning of language signs as
used by interlocutors in communicative interaction.
2.3 Information Systems View
Following a systems theoretical paradigm of information
processing and accepting the cognitive point-of-view (implying
that information processing is knowledge based), human beings
appear to be not just natural information processing systems with
wider cognitive abilities. Instead, they have to be considered
very particular cognitive systems whose outstanding plasticity and
capability to adapt to changing environmental conditions is
essentially tied to their use and understanding of natural
languages in communicative discourse. It seems that the language
faculty expands their learning potential well beyond experimental
real-world experience into realms of experiencing hypothetical
reality in virtual environments (Gedankenexperimente) for
better real-world adaptation. The basic idea of model
construction in terms of such an ecological theory of information
is that the processing structure of an information
system is conceived as a correlate of those structures which such
a system has to be able to process in order to survive. For
cognitive models of natural language processing the system
theoretical view suggests to accept natural language discourse as
analyzable and empirically accessible evidence for tracing such
processes, and to hypothesize about their procedural modeling.
Thus, natural language discourse might reveal essential parts of
the particularly structured, multi-layered information
representation and processing potential to a system
analyzer and model constructor in rather the same way as this
potential is accessed in order to be constrained by an information
processing system in the course of understanding.
3 SCIP Systems
Other than value attributing procedures that reorganize
input data computationally according to predefined symbolic
structures of intermediate representations (as hypothesized by
competence theoretical linguistics and realized in
cognitive CL models) semiotic cognitive information
processing (SCIP) systems will have to, and can
in fact, be distinguished sharply as sets of procedures whose
computations will transform structured input data according to its
immanent regularities to yield new structural representations
emerging from that computation (as hypothesized by
performative linguistics and realized in procedural models of
computational semiotics).
3.1 Constraint Exploration
Structural linguists have given substantial hints on how
language items come about to be employed in communicative
discourse the way they are. They have identified the fundamental
and apparently universal constraints4 that control the multi-level combinability
and formation of language entities by distinguishing the
restrictions on linear aggregation of elements (
syntagmatics) from restrictions on their selective replacement
(paradigmatics). This distinction allows one, within any
sufficiently large set of strings of natural language discourse, to
ascertain syntagmatic regularities of element aggregations on
level n whose characteristic distributional patterns or
paradigms gain functional status on level n+1 for higher
aggregation. The distinction of these representational levels and
their identification with functional results introduced elsewhere
is tantamount to the categorial
constraints applied when identifying regularities with rules.
Fully deterministic if-then rules will result in a rather coarse
three-level hierarchy of categorial description
(Fig. 1) whereas probabilistic or
possibilistic dependencies produce a continuous, multi-level
covering of distributional representations
(Fig. 2). These model hierarchies
distinguish cognitive linguistic from semiotic
procedures whose computations transform structured input data
according to its immanent regularities. Their output yields new
structural representations emerging from computational processes.
The elements they produce are value distributions or vectors of
input entities whose structural properties are depicted by
adjacencies of the new elements (and their structural relatedness)
constituting multi-dimensional (metric) space structures (
semiotic spaces). Their elements may also be interpreted as
fuzzy sets, allowing set theoretical operations to be
exercised on these representations, which exhibit granular
properties and no longer require categorial-type (crisp)
definitions of concept formation. Computation
of letter (morphic) vectors in word space, derived
from n-grams of letters (graphemes), as well
as of word (semic) vectors in semantic space,
derived from word-type correlations
of their tokens in discourse, has illustrated the operational
flexibility and fine granularity of vector notations
to identify regularities of semiotic meaning constitution in language
performance which traditional linguistic categories fail to
represent.
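As a minimal sketch of the first of these ideas, a word can be represented as a bag of its letter n-grams; the boundary marker "#" and n = 2 are illustrative choices, not parameters taken from the paper.

```python
from collections import Counter

def letter_ngrams(word, n=2):
    """Represent a word by the bag of its letter n-grams, padded with a
    word-boundary marker (a sketch of 'morphic' vectors in word space)."""
    padded = "#" + word + "#"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

v1 = letter_ngrams("bahn")
v2 = letter_ngrams("bahnen")

# The overlap of shared n-grams yields a graded (granular) similarity
# without any crisp categorial segmentation of the words.
shared = sum((v1 & v2).values())
```

Here `shared` counts the bigrams the two forms have in common; word (semic) vectors are obtained analogously from word-type correlations in discourse rather than from letter co-occurrences.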
3.2 Visualizing Vector Representations
Returning to the ecological systems theoretical view
applied to information processing, we will focus on the problem of
visualizing results of computational procedures developed to model
and simulate semiotic processes whose numerical
representations, by definition, do not have an immediate
interpretation. We may concentrate on the level of
semantic meaning constitution, as various techniques formerly
applied to analyze, scrutinize, and visualize the structuredness
of vectorial representations have been able to
demonstrate the definite non-contingency of meaning points
z in the semantic space ⟨S,z⟩. Therefore, a short
introduction to illustrate its conception, as based upon the
measurement of differences of usage regularities in very large
language corpora (VLLC) of situated or pragmatically homogeneous
texts, will suffice.
For a vocabulary V = {x_n}, n = 1, …, i, j, …, N of lexical items,
their meanings z_n ∈ ⟨S,z⟩ are re-constructed as a composite
function δ̃|y_n ∘ α̃|x_n of the difference distributions (Eqn. 1)
and the grounding usage regularity distributions (Eqn. 2).
The empirical measure employed to specify intensities of
co-occurring lexical items is centered around a modified
correlation coefficient
\[
\alpha(x_i,x_j) = \frac{\sum_{t=1}^{T}(h_{it}-e_{it})(h_{jt}-e_{jt})}
{\left(\sum_{t=1}^{T}(h_{it}-e_{it})^{2}\,\sum_{t=1}^{T}(h_{jt}-e_{jt})^{2}\right)^{1/2}}\,;
\quad -1 \le \alpha \le +1 \tag{2}
\]
where e_{it} = (H_i/L)·l_t and e_{jt} = (H_j/L)·l_t, computed over a
text corpus K = {k_t}; t = 1, …, T having an overall length
L = Σ_{t=1}^{T} l_t; 1 ≤ l_t ≤ L, measured by the number of
word-tokens per text, which form the vocabulary x_n ∈ V of
word-types whose frequencies are denoted by
H_i = Σ_{t=1}^{T} h_{it}; 0 ≤ h_{it} ≤ H_i;
and a measure of similarity (or rather, dissimilarity) to specify
the differences of the α-value distributions
\[
\delta(y_i,y_j) = \left(\sum_{n=1}^{N}\bigl(\alpha(x_i,x_n)-\alpha(x_j,x_n)\bigr)^{2}\right)^{1/2};
\quad 0 \le \delta \le 2\sqrt{N} \tag{1}
\]
The consecutive application of (Eqn. 2) to input texts and of
(Eqn. 1) to its output data allows one to model the meanings of
words as a two-level function of differences (δ̃|y_j:
paradigmatic selection) of usage regularities (α̃|x_i:
syntagmatic aggregation),
schematized as semiotic morphisms in Fig. 3.
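These two measures can be sketched computationally. The toy corpus below is purely illustrative (the actual data are very large newspaper corpora), and the variable names mirror the symbols of the text: h_it, e_it, H_i, l_t, L.

```python
import numpy as np

# Hypothetical toy corpus of T = 3 short texts (token lists).
corpus = [
    "der zug faehrt auf der bahn".split(),
    "die bahn baut eine neue strecke".split(),
    "der zug haelt auf der strecke".split(),
]

vocab = sorted({w for text in corpus for w in text})
h = np.array([[text.count(w) for text in corpus] for w in vocab], dtype=float)
l = np.array([len(text) for text in corpus], dtype=float)  # text lengths l_t
L = l.sum()                                                # corpus length L
H = h.sum(axis=1)                                          # type frequencies H_i

# Expected frequencies e_it = (H_i / L) * l_t, and deviations h_it - e_it.
e = np.outer(H / L, l)
d = h - e

# Modified correlation coefficient alpha(x_i, x_j) in [-1, +1]:
# covariance of deviations, normalized by both items' deviation norms.
alpha = (d @ d.T) / np.sqrt(np.outer((d ** 2).sum(axis=1),
                                     (d ** 2).sum(axis=1)))

# Distance delta(y_i, y_j) between the alpha-value distributions:
# Euclidean difference of the two items' alpha-vectors.
delta = np.sqrt(((alpha[:, None, :] - alpha[None, :, :]) ** 2).sum(axis=-1))
```

The rows of `delta` are then the coordinates from which meaning-point positions in the semantic space are obtained.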
4 Dispositional Dependency Structures
Following a semiotic understanding of meaning more as a
constitutional process rather than as a static entity of
invariable constancy and representation, the present semantic
space may be considered part of a word meaning/world knowledge
representation system which separates the format of basic
(stereotyped) meaning components (meaning points) from
their latent (dependency) relational organization as meaning
potential (semantic dispositions). Whereas here the former
is represented as a static, topologically organized
multi-dimensional memory structure, the latter can be
characterized as a dynamic and flexible structuring process which
reorganizes and thereby transforms the basic relatedness of the
elements it operates on.

Figures 1 and 2: Schemata of the model hierarchy of cognitive linguistic strata of mechanisms
(Bierwisch) as compared to the model tiling of computational semiotic coverage of procedures
(Rieger) for the analysis and representation of (abstracted and observable) language phenomena.
4.1 Tree Generation
This is achieved by a recursively defined procedure that
produces a hierarchical ordering of the semantic space's meaning
points which can be represented as a tree structure organized
under a given aspect (root node) according to and in dependence of
neighbors (descendant nodes) in cotextual relevancy to it. Taking
up ideas from cognitive theories of semantic memory,
priming, and spreading activation, the
DDS-algorithm was devised to operate on the semantic space
data and to generate dispositional dependency structures
(DDS) in the format of n-ary trees. Given one meaning point's
position, the algorithm will
1. take that meaning point's label as a start,
2. stack the labels of all its neighboring points by their
decreasing distances (least distant on top),
3. initialize the DDS-tree with the starting point's label as primed
head or root node. Then it will
4. take the label on top of the stack as new daughter node,
   4.1 list all labels of the new daughter's neighbors,
   4.2 intersect this list with the nodes already in the tree,
   4.3 determine from the intersection the least distant node as current mother node,
5. link the new daughter to the identified mother node,
6. and repeat step 4 either
   6.1 until the stack (step 2) is empty
   6.2 or until another stop condition (a given number of nodes, a maximum distance, etc.)
   is reached,
7. to end.
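The steps above can be sketched as follows; the distance matrix and the German labels are hypothetical stand-ins for actual semantic space data, not values computed in the paper.

```python
import numpy as np

def dds_tree(dist, labels, root, max_nodes=None):
    """Generate a dispositional dependency structure (DDS) tree.

    dist: symmetric matrix of z-distances between meaning points;
    labels: meaning point labels; root: index of the aspect (root node).
    Returns a list of (daughter, mother, z-distance-to-mother) edges."""
    n = len(labels)
    # Stack of all neighbours of the root, least distant on top.
    stack = sorted((i for i in range(n) if i != root),
                   key=lambda i: dist[root, i], reverse=True)
    in_tree = {root}
    edges = []
    while stack:
        if max_nodes is not None and len(in_tree) >= max_nodes:
            break  # alternative stop condition
        daughter = stack.pop()  # top of stack: new daughter node
        # Intersect the daughter's neighbours with the nodes already in
        # the tree and pick the least distant one as current mother.
        mother = min(in_tree, key=lambda i: dist[daughter, i])
        edges.append((labels[daughter], labels[mother], dist[daughter, mother]))
        in_tree.add(daughter)
    return edges

# Hypothetical toy distances between five meaning points.
labels = ["Alpen", "Bahn", "Zug", "Schnee", "Tal"]
dist = np.array([
    [0.0, 1.2, 2.5, 0.8, 1.0],
    [1.2, 0.0, 0.6, 2.0, 1.9],
    [2.5, 0.6, 0.0, 2.2, 2.4],
    [0.8, 2.0, 2.2, 0.0, 1.5],
    [1.0, 1.9, 2.4, 1.5, 0.0],
])
tree = dds_tree(dist, labels, root=0)
```

With this toy matrix the root Alpen first attaches its nearest neighbours directly, while Zug ends up under Bahn, its least distant node already in the tree.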
The tree structured graphs5 may serve as a
visualization of the dependencies that any labeled meaning point
z_i ∈ ⟨S,z⟩ chosen as root node will
produce according to the adjacencies of other points in the
semantic space (Fig. 4). Their semantic
relatedness as represented by their topology (being determined by,
and reconstructed operationally as, a function of the differences
(Eqn. 1) of usage regularities (Eqn.
2) of word distributions in the texts analyzed) will
thus allow a directed, non-symmetric relation (
dependency) to be established between them, induced by the
start area, i.e. the meaning point's position chosen as the tree's
root node. Thus, it is this node's neighborhood which will control
the topologically motivated dependencies between related meanings
in a way that is highly sensitive to the semantic context
of the meaning points' representations concerned. This type of
algorithmically generated tree structure has been named
dispositional because of the structured assembly of possible
meaning relations and dependencies it offers as something like a
potential for restricted choices to be made.
In
order to illustrate the contextual sensitivity which distinguishes
the DDS-algorithm from e.g. minimal spanning trees (MST),
the latter (Fig. 5) has been
generated from the same data with the same starting node. Note
that the subtrees of Bahn (track, course, trail),
found to be identical in both the MST- and the DDS-tree, are
positioned on extremely different levels (level 23 as compared to
level 3)6.
Although the DDS-algorithm, which consumes all meaning points
z_n ∈ ⟨S,z⟩, can roughly be characterized
as an encapsulated MST-procedure, this encapsulation apparently
serves to catch an essential property of semiotic meaning
constitution and representation tied to its contextuality. Where
the MST is searching for shortest possible distance relations
between points qualifying for tree node relatedness, the DDS is
looking for highest meaning similarities, i.e. for
shortest possible distance relations between points which are
interpretable as semiotically derived representations. It
is this holistic property of ⟨S,z⟩ that allows
the algorithm's search space to be semantically
constrained to the starting point's or root node's topological
environment (capsule), rendering it aspect-dependent and
structurally context sensitive.
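The contrast can be made concrete with a small, purely hypothetical distance matrix on which both procedures grow a tree from the same root but insert nodes in different orders: the DDS consumes nodes by increasing distance to the root (the "capsule"), while Prim's MST always takes the globally shortest frontier edge.

```python
import numpy as np

def grow_tree(dist, root, order):
    """Attach each node (in the given insertion order) to its least
    distant node already in the tree; return (daughter, mother) pairs."""
    in_tree = [root]
    edges = []
    for d in order:
        m = min(in_tree, key=lambda i: dist[d, i])
        edges.append((d, m))
        in_tree.append(d)
    return edges

# Hypothetical distances: node 3 is nearer to the root (0) than node 2,
# but nodes 2 and 3 are very close to each other.
dist = np.array([
    [0.0, 1.0, 2.0, 1.5],
    [1.0, 0.0, 0.1, 3.0],
    [2.0, 0.1, 0.0, 0.05],
    [1.5, 3.0, 0.05, 0.0],
])
root, others = 0, [1, 2, 3]

# DDS: consume nodes by increasing distance to the root.
dds_order = sorted(others, key=lambda i: dist[root, i])
dds = grow_tree(dist, root, dds_order)

# MST (Prim): always add the globally shortest edge between the tree
# and the remaining nodes, irrespective of the root's neighbourhood.
mst, in_tree = [], [root]
while len(in_tree) < 4:
    d, m = min(((i, j) for i in others if i not in in_tree for j in in_tree),
               key=lambda ij: dist[ij[0], ij[1]])
    mst.append((d, m))
    in_tree.append(d)
```

Here the DDS attaches node 3 directly to the root (it is consumed before node 2 has entered the tree), whereas the MST pulls in the chain 2-1 and 3-2 through its globally shortest edges, so the two trees differ.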
Figure 3:
Morphisms of fuzzy mapping relations α̃ and δ̃ from structured sets of vocabulary items x_n
∈ V, via corpus points y_n
∈ C, to labeled meaning points z_n
∈ S.
Figure 4:
Fragment of DDS-tree of
Alpen/Alps (root) as generated from semantic
space data (V = 345, H_i ≥ 10) of a German newspaper sample
(Die Welt, 1964 Berlin edition).
Figure 5: Fragment of MST-graph of
Alpen (root) as generated from the same semantic space
data.
Figure 6:
Dependency path of
lesen/to read ⇒
schreiben/to write as traced in DDS-tree of
les.
Figure 7:
Dependency path of schreiben/to write ⇒ lesen/to read as traced in DDS-tree
of schreib.
Figure 8:
Fragment of DDS-tree of Wort/word
∨ Satz/sentence
(root) as generated from OR-adjunction (max.) of these two meaning
points in semantic space.
Figure 9:
DDS-based
semantic inference from Ausschuss/committee,
Genf/Geneva, and Programm/program
(premises) to nahe/near (conclusion) as computed
from the semantic space data.
4.2 Some Properties
There are a number of consequences of which the
following seem interesting enough to be illustrated and briefly
commented on:
- The procedural (semiotic) approach replaces the storage of fixed
and ready-set relations of (semantic) networks in AI by a source-
or aspect-oriented induction of relations among meaning points by
means of the DDS procedure;
- DDS dependencies may be identified with an algorithmically
induced relevance relation which is reflexive,
non-symmetric, and (weakly) transitive, as illustrated by the
dependency paths' listings of node transitions les/to read ⇒
schreib/to write and its (partial) inverse
schreib/to write ⇒ les/to read
(Figs. 6 and 7);
- the relevance relation gives rise to the notion of
criteriality, which allows estimates of the degree to which a
meaning component (daughter node) contributes to the meaning
potential a root node's DDS produces. It renders the DDS a
weighted tree and may numerically be specified as a function of
any node's level and z-distance, with i, m, d for root, mother,
and daughter nodes respectively, and the counters k for
(left-to-right) nodes and l for (top-down) levels in the tree;
- as the criteriality values decrease monotonically from 1.0
(root), they may be interpreted as membership values which reflect
the relevance related soft structure of components (nodes)
in the DDS as a fuzzy meaning potential. Fuzzy set
theoretical extensions of logical operators (and, or, non,
etc.) open up new possibilities to generate composite meaning
points (Wort/word ∨ Satz/sentence in Fig. 8) without assuming a
propositional structure, and to get these new composites'
meanings represented as determined by their DDSs computable
from the semantic space data;
- our experiments employing DDSs for semantic inferencing
(SI) have turned out to be very promising. SI appears to be
feasible without the need to state the premises in a
predicative or propositional form prior to the concluding process.
The DDS algorithm lends itself easily to the modeling of
analogical reasoning processes by parallel processing of DDS
trees.
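Two of these points can be illustrated in a few lines: OR-adjunction of meaning points as a fuzzy union (pointwise maximum), and a criteriality-like weight decreasing monotonically from 1.0 at the root. The vectors and the discounting formula are assumptions made for illustration only; the paper's actual criteriality function is not reproduced in this excerpt.

```python
import numpy as np

# Hypothetical membership vectors of two meaning points over four
# (unnamed) components; the values are purely illustrative.
wort = np.array([0.9, 0.2, 0.4, 0.1])
satz = np.array([0.3, 0.8, 0.1, 0.5])

# OR-adjunction (fuzzy union): the pointwise maximum yields the
# composite meaning point Wort v Satz; AND would be the pointwise minimum.
wort_or_satz = np.maximum(wort, satz)

def criteriality(mother_crit, z_distance):
    """Assumed, illustrative discounting: each daughter's criteriality
    is its mother's value reduced by the z-distance between them, so
    values decrease monotonically from 1.0 at the root. This is NOT the
    paper's actual formula."""
    return mother_crit / (1.0 + z_distance)

root_crit = 1.0
daughter_crit = criteriality(root_crit, 0.6)
grand_crit = criteriality(daughter_crit, 0.3)
```

Any strictly decreasing, level- and distance-sensitive weight would serve the same expository purpose: the tree becomes a fuzzy meaning potential with graded node memberships.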
As illustrated in Fig. 9, the
semantic inference process will start from two (or more) root
nodes as semantic premises (here the three:
Ausschuss/committee, Genf/Geneva, and
Programm/program); then it will run the two (or more) DDS
processes concerned, each of which, in selecting its daughter
nodes, will tag the respective meaning points in the semantic
space. The stop condition for this mutual processing, which
proceeds (by least distance or highest criteriality) breadth-first
through the respective DDSs, is defined by the first
meaning point found to be tagged previously by one (or more) of
the other active processes. This point (nahe/near)
will be considered the (first) candidate inferred or concluded
from the premises (with the option to extend the number of
candidates under different stop conditions). The
dependencies activated (bottom line of Fig.
9) are three paths: 1st committee
→ July → Hamburg
→ April → Tuesday
→ near, 2nd Geneva →
calm → power → soviet
→ ride → center
→ near, and 3rd program →
Tuesday → near, which translate to the
premises' inference paths resulting in the concluded
meaning (near), whose connotative embedding is provided by
the subtrees shown according to its semantic relatedness as
mediated by the newspaper texts analyzed.
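The tagging-based inference procedure can be sketched as follows. The two toy DDS adjacency lists are invented stand-ins for the trees of Fig. 9, chosen so that both premise processes first meet at a shared meaning point.

```python
from collections import deque

# Hypothetical DDS trees (node -> daughters), one per premise root.
dds = {
    "committee": {"committee": ["July", "session"], "July": ["near"],
                  "session": [], "near": []},
    "program": {"program": ["Tuesday"], "Tuesday": ["near"], "near": []},
}

def semantic_inference(trees):
    """Breadth-first expansion of all premise DDSs in lockstep; the
    first meaning point reached that another process has already tagged
    is the candidate conclusion."""
    queues = {root: deque([root]) for root in trees}
    tagged = {root: {root} for root in trees}
    while any(queues.values()):
        for root, q in queues.items():
            if not q:
                continue
            node = q.popleft()
            # Tagged previously by one of the other active processes?
            if any(node in tagged[r] for r in trees if r != root):
                return node
            tagged[root].add(node)
            q.extend(trees[root].get(node, []))
    return None

conclusion = semantic_inference(dds)
```

With these toy trees both processes converge on "near", mirroring the concluded meaning of Fig. 9; different stop conditions could let the expansion continue to collect further candidates.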
5 Conclusion
Devising representational structures which result from
semiotic processing of natural language discourse as modeled by
SCIP systems is to explore syntagmatic and
paradigmatic constraints on different levels of item
combinability in pragmatically homogeneous texts. Although
still tentative, it is hoped that this will one day lead to a new
understanding of how entities and structures are constituted that
may indeed be called semiotic, i.e. that do not only have an
objective (material) extension in space-time but can, above that,
be understood as having interpretable meaning, too. In order to be
able to interpret, (natural as well as artificial) semiotic
cognitive information processing systems need structuredness. We
are about to experience that the linguistically identified
structures available so far do not serve the purposes too well
when we have to deal with problems of a kind which we are unable
to describe or represent, let alone analyze or even solve under
these circumstances. Procedural models and their computational
realizations might appear to be good candidates for some progress.
References
- [1]
-
J. Barwise/J. Perry:
Situations and Attitudes.
Cambridge, MA, 1983.
- [2]
- M. Bierwisch:
Logische und psychologische Determinanten der Struktur
natürlicher Sprachen.
In: Scharf (ed):Naturwissenschaftliche
Linguistik, Halle 1981, pp. 176-187; 841-851.
- [3]
- R. Lorch:
Priming and search processes in semantic memory: a test of three
models of spreading activation.
Journal of Verbal Learning and Verbal Behavior, 21:468-492,
1982.
- [4]
- H. Maturana/F. Varela:
Autopoiesis and Cognition.
Dordrecht/Boston 1980.
- [5]
- H. R. Maturana:
Biology of Language. The epistemology of reality.
In: Miller/Lenneberg (eds): Psychology and
Biology of Language and Thought, New York 1978, pp. 27-64.
- [6]
- A. Meystel:
Semiotic Modeling and Situation Analysis: an
Introduction.
Bala Cynwyd, PA, 1995.
- [7]
- R. C. Prim:
Shortest connection networks and some generalizations.
Bell System Technical Journal, 36:1389-1401, 1957.
- [8]
- B. B. Rieger:
Feasible Fuzzy Semantics. On some problems of how to handle
word meaning empirically.
In: Eikmeyer/Rieser (eds): Words,
Worlds, and Contexts. Berlin/New York 1981, pp. 193-209.
- [9]
- B. B. Rieger:
Fuzzy Representation Systems in Linguistic Semantics.
In: Trappl et al. (eds): Progress
in Cybernetics and Systems Research, Vol. XI,
Washington/New York, 1982, pp. 249-256.
- [10]
- B. B. Rieger:
Clusters in Semantic Space.
In: Delatte (ed): Actes du Congrès
International
Informatique et Science Humaines, Liège, 1983, pp. 805-814.
- [11]
- B. B. Rieger:
Generating Dependency Structures of Fuzzy Word Meanings
in Semantic Space.
In: Hattori/Inoue (eds): Proceedings of the
XIIIth
Int.Congr.of Linguists, Tokyo 1983, pp. 543-548.
- [12]
- B. B. Rieger:
Semantic Relevance and Aspect Dependency in a Given
Subject Domain.
In: Walker (ed): COLING-84 Proceedings.
Stanford 1984, pp. 298-301.
- [13]
- B. B. Rieger:
Inducing a Relevance Relation in a Distance-like Data
Structure of Fuzzy Word Meaning Representations.
In: Allen (ed): Data Bases in the
Humanities and
Social Sciences. Osprey, FL. 1985, pp. 374-386.
- [14]
- B. B. Rieger:
Lexical Relevance and Semantic Disposition.
In: Hoppenbrouwers et al. (eds):
Meaning and the Lexicon, Dordrecht,
1985, pp. 387-400.
- [15]
- B. B. Rieger:
Distributed Semantic Representation of Word Meanings.
In: Becker et al. (eds):
Parallelism, Learning, Evolution. Berlin/New York,
1991, pp. 243-273.
- [16]
- B. B. Rieger:
Meaning Acquisition by SCIPS.
In: Ayyub (ed): IEEE-Transactions of
ISUMA-NAFIPS-95,
Alamitos, CA, 1995, pp. 390-395.
- [17]
- B. B. Rieger:
Situations, Language Games, and SCIPS. Modeling semiotic
cognitive information processing systems.
In: Meystel/Nerode (eds): Semiotic Modeling and
Situation Analysis in Large Complex
Systems, Bala Cynwyd, PA, 1995, pp. 130-138.
- [18]
- B. B. Rieger:
Situation Semantics and computational linguistics: towards
Informational Ecology.
In: Kornwachs/Jacoby (eds): Information.
New Questions to a Multidisciplinary Concept, Berlin,
1996, pp. 285-315.
- [19]
- B. B. Rieger:
Computational Semiotics and Fuzzy Linguistics. On meaning
constitution and soft categories.
In: Meystel (ed): A Learning Perspective:
(ISAS-97), Washington, DC, 1997, pp. 541-551.
- [20]
- B. B. Rieger:
Semiotics and Computational Linguistics. On Semiotic
Cognitive Information Processing (SCIP).
In: Sebeok (ed): Advances in Semiotics,
Urbana, IN. 1998 [in print]
- [21]
- F. Varela et al.:
The Embodied Mind. Cognitive Science and Human
Experience. Cambridge, MA, 1991.
- [22]
- L. Wittgenstein:
Über Gewißheit. On Certainty.
New York/London, 1969.
- [23]
- L.A. Zadeh:
Toward a Theory of Fuzzy Information Granulation and
its Centrality in Human Reasoning and Fuzzy Logic.
Fuzzy Sets and Systems, 90:111-127, 1997.
Footnotes:
1To illustrate
the point by an example taken from engineering: the phenomenon of
flying - observed in nature as airborne locomotion which (most)
birds are capable of - has not been modeled by replicating
nature's solution (flapping wings), but simulated by
technologically quite different means (propeller, jet engine) in
aircraft which surpass birds' capacities in many respects.
2The Trier
dpa-VLLC comprises the complete textual material, i.e.
720,000 documents of approx. 180 million (18·10^7)
running words (tokens) from the basic news
service of 1990-1993, which the Deutsche Presseagentur (
dpa), Hamburg, has kindly made available to the author for
research purposes. It is this corpus which provides the
performative data of written language use for the current (and
planned) fuzzy-linguistic projects at our department.
3Phenomena like linear
short-distance/long-distance orderings (Nah- and
Fern-Ordnung) of performative language entities (e.g.
co-occurrences), easily represented and processed as numerical
expressions of correlation values with any precision, are cases in
point here. Although observable results of structuring principles,
they have continuously been overlooked by rule based approaches
whose representational means comply more adequately with
agglomerative orderings (constituent- and
phrase-structure) as represented and processed by familiar
grammar formalisms.
4The distinction of
langue-parole (de Saussure) and
competence-performance / I-language-E-language (Chomsky)
in modern linguistics is grounded in the possibility to
abstract (formally representable) linguistic entities from
(empirically observable) language phenomena. The discovery
of principles of combinatorial constraints responsible for regular
string formation in natural languages gave rise not only to
segment strings of language discourse and to categorize
classes of types of linguistic entities, but also to distinguish
and construct different levels of language description and
linguistic analysis.
5The figures present
subtrees of a semantic space as computed from a sample of texts
from the German daily newspaper (Die Welt, 1964, Berlin
edition). Nodes marked + hide subtrees whose expansions have
been conflated for lack of space; the numerical values stated are
direct z-distances to the root node.
6The numerical MST values given are direct
z-distances between nodes (mother-daughter pairs).