


Incubation has long been proposed as a mechanism in creative problem solving (Wallas, 1926). A new trial-by-trial method for observing incubation effects was used to compare the forgetting fixation hypothesis with the conscious work hypothesis. Two experiments examined the effects of incubation on initially unsolved Remote Associates Test (RAT) problems. Following exposure to misleading clues designed to induce initial fixation on RAT problems, versus no clues, participants were retested on problems either immediately after their first attempt (no incubation) or after a 40-second incubation period. Resolution of initially unsolved RAT problems (fixated versus non-fixated) was examined as a function of complete interruption (Experiment 1) or partial distraction (Experiment 2). An incubation effect, that is, better resolution of initially unsolved problems retested after a delay rather than immediately, was seen only in Experiment 1, in which unsolved problems were completely removed from sight.

Insight occurs when a person suddenly reinterprets a stimulus, situation, or event to produce a nonobvious, nondominant interpretation. This can take the form of a solution to a problem (an "aha moment"), comprehension of a joke or metaphor, or recognition of an ambiguous percept. Insight research began a century ago, but neuroimaging and electrophysiological techniques have been applied to its study only during the past decade. Recent work has revealed insight-related coarse semantic coding in the right hemisphere and internally focused attention preceding and during problem solving. Individual differences in the tendency to solve problems insightfully rather than in a deliberate, analytic fashion are associated with different patterns of resting-state brain activity. Recent studies have begun to apply direct brain stimulation to facilitate insight. In sum, the cognitive neuroscience of insight is an exciting new area of research with connections to fundamental neurocognitive processes.

This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems, it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, in part at least, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines (machine understanding) may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.
