PROBLEM REPRESENTATION
- A major focus in AI: what is the best way to represent the problem?
What representational scheme is being used?
- Why do we care about formal semantics?
- Gives an understanding of representational choices:
- A. What are the objects? What are the predicates? What are the functions?
- B. Grain size of objects?
- C. What do the predicates mean?
- D. What meaning distinctions are being made?
- Without care in defining the world and interpretation:
- Inconsistent representation schemes likely result
- Proofs you WANT will NOT go through
- Proofs you do NOT want WILL go through
- Representation will not be as extensible
REPRESENTATIONAL CHOICES
A. What are the objects? What are the predicates? What are the functions?
Example:
Representational Scheme 1:
D={a,b,c,d,e}; predicate red = {a,b,c}; predicate pink = {d,e}
Under the intended interpretation:
red(a). pink(d).
Representational Scheme 2 -- reify the predicates, for a more
expressive representation
D = {a,b,c,d,e,red,pink}; predicate colorof={<a,red>,<b,red>,<c,red>,<d,pink>,<e,pink>}
predicate color={red,pink}; predicate primary={red}; predicate pretty={pink}.
Under the intended interpretation:
colorof(a,red). color(pink). primary(red).
More expressive, but inference may take longer (axioms may be larger)
(an alternative: axiom schemata; see later lecture)
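The contrast between the two schemes can be sketched in Python, with sets standing in for predicate extensions (all names are illustrative, not from any library):

```python
# Scheme 1: one unary predicate per color.
red = {"a", "b", "c"}
pink = {"d", "e"}

# Scheme 2: reify the colors as domain objects; a single binary predicate
# relates objects to their colors.
colorof = {("a", "red"), ("b", "red"), ("c", "red"),
           ("d", "pink"), ("e", "pink")}
color = {"red", "pink"}
primary = {"red"}
pretty = {"pink"}

# Scheme 2 can quantify over colors, which Scheme 1 cannot express without
# adding a new predicate for every question -- e.g. "which objects have a
# primary color?"
primary_colored = {x for (x, c) in colorof if c in primary}
print(sorted(primary_colored))  # ['a', 'b', 'c']
```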
A scheme that mixes and matches won't work!
Example using the scheme given in B below:
Given: plural(word3). ending(plural). The axiom below will not ``fire'',
since the facts use plural both as a predicate and as an object.
Classic example: what does ISA mean?
Interpretation 1: {<john homa, CS475>,<Las Cruces, NM Cities>,...}
- subset relation.
Interpretation 2: {<john homa, Person>,<Las Cruces, NM Cities>,...}
- type-token relation
These are sometimes the same, sometimes not. This leads to confusion in semantic
networks. If such problems are present, they will keep cropping up (requiring
patches such as exceptions to axioms, special-purpose axioms, overuse of
``procedural attachment'', and other extra-logical features)
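The two readings of ISA can be made concrete with Python sets (examples invented for illustration); the point is that they chain differently, which is exactly where semantic networks get into trouble:

```python
# Reading 1: ISA as the subset relation (class to superclass).
nm_cities = {"Las Cruces", "Albuquerque"}
cities = nm_cities | {"El Paso"}
print(nm_cities <= cities)        # True

# Reading 2: ISA as the type-token (membership) relation.
print("Las Cruces" in nm_cities)  # True

# Subset is transitive, and membership chains through subset
# (x in A and A <= B gives x in B), but membership does not chain
# through membership:
a = {1}
b = {frozenset(a)}                # a is an ELEMENT of b, not a subset of b
print(frozenset(a) in b)          # True
print(1 in b)                     # False -- membership is not transitive
```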
B. Grain size of objects?
NLP problem 1: word-sense disambiguation
Smallest object a word?
What about morphological clues?
Choices:
- represent the individual letters of the word
- represent morphological clues as properties of words
- plural(word3). root-form(word2). ... OR
- morph-feature(word3,plural). <---- probably the best
- continuing with the last scheme: ending(plural), ending(past-marker).
- forall X ((exists Y (morph-feature(X,Y) and ending(Y))) -->
-            NOT morph-feature(X,root-form))
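A small Python sketch of that axiom under the morph-feature scheme (names illustrative): a word bearing some ending feature cannot also bear the root-form feature.

```python
ending = {"plural", "past-marker"}
morph_feature = {("word3", "plural")}

def violates_axiom(facts, endings):
    """True iff some word carries both an ending feature and root-form."""
    has_ending = {w for (w, f) in facts if f in endings}
    has_root = {w for (w, f) in facts if f == "root-form"}
    return bool(has_ending & has_root)

print(violates_axiom(morph_feature, ending))  # False: consistent so far
print(violates_axiom(morph_feature | {("word3", "root-form")}, ending))  # True

# Note the mix-and-match failure from A: a fact written as plural(word3) --
# not as a morph-feature tuple -- would never match the axiom at all.
```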
NLP problem 2: which language is this segment in?
Smallest object a word, with properties, as for word-sense disambiguation?
Probably not. Useful properties: letter frequency, letters specific to a
particular alphabet, accents, letter combinations
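A toy sketch of this grain size: identify the language of a segment from its letter frequencies, never representing individual words. The profile texts and the distance measure are invented for illustration.

```python
from collections import Counter

def letter_freqs(text):
    """Relative frequency of each alphabetic character in text."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {c: n / total for c, n in counts.items()}

def closest_language(segment, profiles):
    """Pick the profile whose letter distribution is nearest the segment's."""
    freqs = letter_freqs(segment)
    def distance(profile):
        keys = set(freqs) | set(profile)
        return sum(abs(freqs.get(k, 0.0) - profile.get(k, 0.0)) for k in keys)
    return min(profiles, key=lambda lang: distance(profiles[lang]))

profiles = {
    "english": letter_freqs("the quick brown fox jumps over the lazy dog"),
    "german": letter_freqs("der schnelle braune fuchs springt über den faulen hund"),
}
print(closest_language("the quick brown fox jumps over the lazy dog", profiles))
# english -- distance to its own profile is zero
```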
C. Meaning of predicates? In practice, you don't enumerate the tuples
defining the predicate. Instead:
Define the usage. If you specify the types of the arguments, this intensionally
defines the predicate. Example: morph-feature(Word, Morph-Feature) : word
Word has morphological feature Morph-Feature
Write axioms involving the predicate
D. What meaning distinctions are being made? (Not just an NLP issue!)
action types versus actions themselves
``surface'' versus ``deep'' meaning
individual versus group versus substance(such as ``sand'')
specific versus non-specific
- ``The tiger is an animal'', ``The tiger bit him''
- ``I'll have what he's having''
- ``The murderer is a woman'' (whoever she is?)
intension versus extension:
- John believes Mary's phone number is 646-6228
- Mary's phone number is 646-6445
- Infer???
- John believes 646-6228 is 646-6445???
- BUT:
- John dialed Mary's phone number
- Mary's phone number is 646-6445
- Infer? Yes:
- John dialed 646-6445.
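The asymmetry above can be sketched in Python (data invented): ``dialed'' is extensional, acting on the number the description resolves to, so substituting equals for equals is safe; ``believes'' is intensional, holding of the description itself, so substitution is unsound.

```python
phone_book = {"Mary": "646-6445"}  # the extension of "Mary's phone number"

def dialed(person):
    """Dialing resolves the description to its extension first."""
    return phone_book[person]

# Beliefs store the unresolved description-value pair.
beliefs = {("John", "Mary's phone number", "646-6228")}

# Valid: from "John dialed Mary's phone number", infer he dialed 646-6445.
print(dialed("Mary"))  # 646-6445

# Invalid: we may NOT rewrite John's belief using the phone book, even
# though "Mary's phone number" and "646-6445" co-refer.
print(("John", "Mary's phone number", "646-6445") in beliefs)  # False
```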
Process, versus achievement, versus punctual event, etc.
What can you infer once you know something is finished? ``He made a cake''
vs. ``He slept''.
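A toy sketch of that aspectual distinction (event classes and result states invented for illustration): completing an accomplishment licenses an inference about a result state; completing an activity does not.

```python
ASPECT = {"make-cake": "accomplishment", "sleep": "activity"}
RESULT = {"make-cake": "exists(cake)"}

def infer_on_completion(event):
    """What follows once we know the event is finished?"""
    if ASPECT.get(event) == "accomplishment":
        return RESULT[event]
    return None  # activities leave no result state to infer

print(infer_on_completion("make-cake"))  # exists(cake)
print(infer_on_completion("sleep"))      # None
```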