living computation / Dave Ackley

What’s good?

How do we know what’s good?

Well for one thing, we’ve got special-purpose hardware built in, telling us: It’s good if it feels good; bad feels bad.

The hardwired pleasure and pain machinery does best, it seems, when it’s providing feedback on our body: Health and satisfaction feel good, damage and need feel bad.

Even there, though, even at its best, there’s rarely any inherent connection between event and sensation. It’s all just a slap-up stitch-together construct, an error-prone mapping between what’s really going on and some kind of awareness or perception. A punch in the arm can make the shoulder hurt, but so can a heart attack; an amputated limb aches years later.

And in any case, our hardware reinforcement mechanisms are also pressed into service to interpret that especially unruly space located outside the body: the ‘environment’.

A rock heading straight at us doesn’t actually feel bad, until it arrives. Detecting a looming potential collision while there’s still time to avoid it is a nifty and non-trivial bit of calculation, but even so it can be computed quite accurately using only low-level ‘close to the senses’ data. By contrast, an environmental task such as optimal mating, done accurately and only in appropriate environments, could require arbitrarily complex, articulated and information-rich computations.

So by and large the hardware doesn’t try. In natural environments, our hardware designs opt to replace infeasible high-level computations like ‘Is that good food for me now?’, ‘Is that a good mate?’ with more easily implemented low-level heuristics that tend to predict specific answers to such high-level questions (with possible compensations elsewhere, so that designs can tolerate a certain failure rate of the heuristics); heuristics like detecting particular chemicals, or seeing particular colors and shapes in particular configurations or motions.

Evolutionary psychologists show that many of the cues we attend to are anything but random, and instead are precisely low-level data values that do tend to correlate—in sometimes rather subtle and interesting ways—with important high-level goals, such as finding a good mate or caring for offspring.

But for all of that, hardwired pleasure and pain is just the beginning. We also have extremely flexible software reinforcement systems. We can learn to distinguish between situations far too complex for any out-of-the-box hardware to comprehend, and then associate pleasure or pain with them separately (like winning vs losing chess positions, for example), regardless of how superficially similar they are.

With a hardwired bootstrap into a software reinforcement system, we come to ascribe goodness and badness to things far removed from direct physical pleasure and pain. The software can soak up just about any evaluation system we can conceive (this is a tautology, I suspect). We can decide that what is good is what our parents tell us is good (or what they tell us is bad; parents are always telling us that unpleasant things are good for us, aren’t they?), or somebody else, or a book, or an inner voice speaking sense only we can hear.

The software reinforcement system can be so powerful that it can overwhelm the hardware mechanisms, perhaps rendering naïve pain into pleasure, or the reverse; perhaps producing perverts or heroes (or perhaps ‘heroverts’, both in one), depending on specifics.

How do we know what’s good?

So we’ve got hardwared good and bad, useful but limited and heuristic (like Tom Waits’ comment about being so “horny the crack of dawn better watch itself around me”), with softwared good and bad laid on top, amplifying, modulating, or completely undercutting it.

And all that is just the lesser half of it, because everything gets knocked on its head once G’ minimization starts kicking in. (I should stress here that G’ is just a term I made up to use amongst myself back in Boltzmann Machine days; it’s not an official technical term.) When we can flexibly change the world to satisfy ourselves, rather than only the reverse, what will happen? What will we do?
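For the curious: the G of Boltzmann Machine learning is a relative-entropy measure of mismatch between the distribution the environment presents and the distribution the model expects. Here is a small illustrative sketch (the distributions and the halfway-mixing step are invented for illustration, not anything from the original learning procedure) of the asymmetry the paragraph gestures at: the mismatch can be reduced either by adapting the model to the world, or by reshaping the world to fit the model.

```python
import math

def kl(p, q):
    """Relative entropy sum_i p_i * ln(p_i / q_i) between two
    discrete distributions given as equal-length lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

world = [0.7, 0.2, 0.1]   # what the environment actually delivers
model = [0.4, 0.4, 0.2]   # what our internal machinery expects/prefers

g_before = kl(world, model)

# Learning: nudge the model halfway toward the world.
adapted_model = [0.5 * m + 0.5 * w for m, w in zip(model, world)]
g_learning = kl(world, adapted_model)

# Artifice: nudge the world halfway toward the model instead
# (sweeteners, generalized pornography) -- the measured mismatch
# shrinks just the same.
reshaped_world = [0.5 * w + 0.5 * m for w, m in zip(world, model)]
g_artifice = kl(reshaped_world, model)

assert g_learning < g_before and g_artifice < g_before
```

Because relative entropy is jointly convex in its two arguments, moving either distribution toward the other is guaranteed to shrink it; the measure itself is indifferent to which side gave ground.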

We’ll tend to change the world specifically to destroy the correlations we evolved under: Sure as gravity, we’ll use our artifice to create inputs that trigger our pleasure heuristics even though the inputs don’t represent the actual situation that the heuristic was designed to detect. We’ll create generalized pornography.

Sweet flavors may have predicted food energy and thus good eats; diet sweeteners trigger that heuristic but provide no energy.

Images of healthy, resourceful, symmetric, well-proportioned people can trigger mating heuristics, but provide no offspring.

Likewise, birth control.

Good? Bad? Depends? Whatever on? The ‘state of nature’ has long vanished from the ‘civilized’ world, if it ever existed in any effective sense to begin with; and in any case the old hardware drives were just elegant hacks anyway, then as now neither meaningless nor infallible.

So, we have to play it as it lays; it’s a judgment call every time, it depends on everything. It wasn’t ever all purity and harmony, not for man in nature (perhaps for cockroaches, or other non-trivially successful species). Learn from your hardware, choose your software, pick your dreams with care.

— § —

20 Jul 2004


This work is licensed under a Creative Commons License.