Chapter 7: Problems and goals

7.1 Intelligence
7.2 Uncommon sense
7.3 The puzzle principle
7.4 Problem solving
7.5 Learning and memory
7.6 Reinforcement and reward
7.7 Local responsibility
7.8 Difference engines
7.9 Intentions
7.10 Genius

--------------------------------
7.1 Intelligence, p 71
Plants and streams don't seem very good at *solving the kind of
problems* we regard as needing intelligence.
[...] the ability to solve hard problems [he is unwilling to
give a definition, relying on the common meaning].
Instead of trying to say what such a word "means", it is better
simply to try and explain how we use it.

7.3 The puzzle principle, p 73
We can program a computer to solve any problem by trial and
error, without knowing how to solve it in advance, provided only
that we have a way to recognize when the problem is solved.
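The puzzle principle amounts to generate-and-test: any generator of candidates, plus a recognizer that tells when the problem is solved, is enough in principle. A minimal sketch (the candidate set and the recognizer are invented here for illustration):

```python
import itertools

def solve_by_trial_and_error(candidates, is_solved):
    """Exhaustively try candidates until the recognizer accepts one."""
    for candidate in candidates:
        if is_solved(candidate):
            return candidate
    return None  # generator exhausted without success

# Toy puzzle: find three digits, strictly increasing, that sum to 24.
# We never say HOW to solve it; we only say how to recognize a solution.
candidates = itertools.product(range(10), repeat=3)
solution = solve_by_trial_and_error(
    candidates,
    lambda t: sum(t) == 24 and t[0] < t[1] < t[2])
# solution == (7, 8, 9)
```

The catch, which the next sections address, is that blind search explodes combinatorially as problems grow.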

7.4 Problem solving, p 74
The Progress Principle: Any process of exhaustive search can be
greatly reduced if we possess some way to detect when "progress"
has been made.

[...] for hard problems, it may be almost as difficult to
recognize "progress" as to solve the problem itself.

- Goals and Subgoals.
- Using knowledge.
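The Progress Principle is essentially hill climbing: instead of exhaustive search, take only steps that a progress measure approves. A sketch, with an invented toy neighborhood and progress function; it also shows the hazard implied above, that the search stops as soon as no step registers progress, which may be far from the goal:

```python
def hill_climb(start, neighbors, progress, max_steps=1000):
    """Greedy search: move only to neighbors that improve 'progress'."""
    state = start
    for _ in range(max_steps):
        better = [n for n in neighbors(state) if progress(n) > progress(state)]
        if not better:
            return state  # no step looks like progress; stop here
        state = max(better, key=progress)  # take the most promising step
    return state

# Toy example: walk toward 100 by +/-1 steps, using closeness to 100
# as the progress measure.
result = hill_climb(0, lambda s: [s - 1, s + 1], lambda s: -abs(100 - s))
# result == 100
```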

7.5 Learning and memory, p 75
Some psychologists have claimed that human learning is based
entirely on [...] reward. [...] You must first be able to do
something before you can be rewarded for doing it. This
circularity was no great problem [for] Pavlov because [...]
animals never needed to produce new kinds of behavior. [...]
Skinner [...] recognized that higher animals did indeed
sometimes exhibit new forms of behavior, which he called
"operants".

Skinner [never explained] how brains produce new operants. [...]
The answer must lie in learning better ways to learn. [...]
we'll have to start by using many ordinary words like *goal*,
*reward*, *learning*, *thinking*, *recognizing*, *liking*,
*wanting*, *imagining*, and *remembering*, all based on old and
vague ideas.

7.6 Reinforcement and reward, p 76
"The unit of success is the goal" (Allen Newell).

I designed a machine called the *Snarc*. [...] it was composed
of forty agents, each connected to several others, more or less
at random, through a "reward" system [...].
[...] learn by "reinforcing" the connections between agents.
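The Snarc itself was hardware, and its details are not given here; as a loose, hypothetical software analogy, "reinforcing the connections between agents" might look like this (the agent count matches the notes, but the wiring and update rule are invented):

```python
import random

random.seed(0)
N_AGENTS = 40

# Each agent is wired, more or less at random, to a few others.
links = {a: random.sample([b for b in range(N_AGENTS) if b != a], 3)
         for a in range(N_AGENTS)}
weight = {(a, b): 0.5 for a in links for b in links[a]}

def reinforce(active_path, reward, rate=0.1):
    """Strengthen (or, for negative reward, weaken) the connections
    that were used on the run being rewarded."""
    for a, b in zip(active_path, active_path[1:]):
        if (a, b) in weight:
            weight[(a, b)] += rate * reward
```

After a successful run, calling `reinforce` on the sequence of agents that fired makes that pathway more likely to be favored later.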

7.7 Local responsibility, p 77
The *Local* scheme rewards each agent that helps accomplish its
supervisor's goal.
The *Global* scheme rewards only agents that help accomplish
top-level goals.
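The two schemes differ in who gets credit. A toy sketch of the contrast (the agent names and record fields are invented):

```python
def local_rewards(agents):
    """Local scheme: reward every agent that helped its own supervisor."""
    return {a["name"] for a in agents if a["helped_supervisor"]}

def global_rewards(agents, top_goal_met):
    """Global scheme: reward helpers only when the top-level goal is met."""
    if not top_goal_met:
        return set()
    return {a["name"] for a in agents if a["helped_top_goal"]}

agents = [
    {"name": "grasp",  "helped_supervisor": True,  "helped_top_goal": True},
    {"name": "look",   "helped_supervisor": True,  "helped_top_goal": False},
    {"name": "wander", "helped_supervisor": False, "helped_top_goal": False},
]
# local_rewards(agents) == {"grasp", "look"}
# global_rewards(agents, top_goal_met=True) == {"grasp"}
```

The local scheme rewards useful work even when the whole enterprise fails; the global scheme withholds all credit unless the top goal succeeds.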

7.8 Difference engines, p 78
A "goal-driven" system does not seem to react directly to the
stimuli or situations it encounters.

A difference-engine must contain a description of a "desired"
situation.
It must have subagents that are aroused by various differences
between the desired situation and the actual situation.
Each subagent must act in a way that tends to diminish the
difference that aroused it.

Actual inputs -> Situation        \
                                   -> Differences -> Agents
Ideal inputs  -> Goal description /
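The three requirements above can be sketched as a loop: detect differences between the actual and desired situations, then let the subagent aroused by each difference act to diminish it (the block-world states and reducer agents are invented for illustration):

```python
def difference_engine(state, goal, reducers, max_steps=50):
    """Repeatedly detect differences from the goal description and
    arouse the subagent that diminishes the first one found."""
    for _ in range(max_steps):
        diffs = [k for k in goal if state.get(k) != goal[k]]
        if not diffs:
            return state  # desired situation reached
        state = reducers[diffs[0]](state)  # aroused subagent acts
    return state

# Toy example: make a block-world state match a goal description.
reducers = {
    "held": lambda s: {**s, "held": "block"},
    "door": lambda s: {**s, "door": "open"},
}
final = difference_engine({"held": None, "door": "shut"},
                          {"held": "block", "door": "open"},
                          reducers)
# final == {"held": "block", "door": "open"}
```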

7.9 Intentions, p 79
D'Alembert showed that one can [...] predict the behavior of a
rolling ball by describing it as a difference-engine whose goal
is to reduce its own energy.
Words should be our servants, not our masters.


The Society of Mind
Marc Girod