Neuropsychological Assessment
NEUROPSYCHOLOGICAL ASSESSMENT
Fifth Edition

Muriel Deutsch Lezak
Diane B. Howieson
Erin D. Bigler
Daniel Tranel
Oxford University Press, Inc., publishes works that further Oxford University’s objective of excellence in research, scholarship, and education.

Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi Kuala Lumpur Madrid Melbourne Mexico City Nairobi New Delhi Shanghai Taipei Toronto

With offices in
Argentina Austria Brazil Chile Czech Republic France Greece Guatemala Hungary Italy Japan Poland Portugal Singapore South Korea Switzerland Thailand Turkey Ukraine Vietnam

Copyright © 1976, 1983, 1995, 2004, 2012 by Oxford University Press, Inc.

Published by Oxford University Press, Inc.
198 Madison Avenue, New York, New York 10016
www.oup.com

Oxford is a registered trademark of Oxford University Press

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Oxford University Press.

Library of Congress Cataloging-in-Publication Data
Neuropsychological assessment / Muriel D. Lezak … [et al.]. — 5th ed.
p. cm.
Includes bibliographical references and index.
ISBN 978–0–19–539552–5
1. Neuropsychological tests. I. Lezak, Muriel Deutsch.
RC386.6.N48L49 2012
616.8’0475—dc23
2011022190
Dedicated in gratitude for the loving support from our spouses, John Howieson, Jan Bigler, and Natalie Denburg; and in memory of Sidney Lezak whose love and encouragement made this work possible.
Preface

Direct observation of the fully integrated functioning of living human brains will probably always be impossible.
M.D. Lezak, 1983, p. 15
What did we know of possibilities, just a little more than a quarter of a century ago? The “black box” of classical psychology is no longer impenetrable as creative neuroscientists with ever more revealing neuroimaging techniques are devising new and powerful ways of finding windows into the black box. In neuroimaging we can now trace neural pathways, relate cortical areas to aspects of thinking and feeling—even “see” free association in the “default” state—and are discovering how all this is activated and integrated in complex, reactive, and interactive neural systems. We may yet uncover the nature of (self- and other-) consciousness and how synaptic interconnections, the juices that flow from them, and the myriad other ongoing interactive neural processes get translated into the experience of experiencing. We can never again say “never” in neuroscience. Yet, as entrancing and even astonishing as are the findings the new technologies bring to neuroscience, it is important to be mindful of their roots in human observations. As these technologically enhanced observations of the brain at work open the way for new insights about brain function and its behavioral correlates, they also confirm, over and over again, the foundational hypotheses of neuropsychology—hypotheses generated from direct observations by neuropsychologists and neurologists who studied and compared the behavior of both normal and brain-impaired persons. These foundational hypotheses guide practitioners in the clinical neurosciences today, whether observations come from a clinician’s eyes and ears or a machine. In the clinic, observations of brain function by technological devices enhance understanding of behavioral data and sometimes aid in prediction, but cannot substitute for clinical observations. When the earliest neuroimaging techniques became available, some thought
that neuropsychologists would no longer be needed as it had become unnecessary to improve the odds of guessing a lesion site, a once important task for neuropsychologists. Today’s advanced neuroimaging techniques make it possible to predict with a reasonable degree of accuracy remarkably subtle manifestations, such as the differences between socially isolated brain-injured patients who will have difficulty in social interactions although actively seeking them, versus those who may be socially skilled but lack incentive to socialize. Yet this new level of prediction, rather than substituting for human observation and human intervention, only raises more questions for experienced clinical neuroscientists: e.g., What circumstances exacerbate or alleviate the problem? What compensatory abilities are available to the patient? Is the patient aware of the problem and, if so, can this awareness be used constructively? Is this a problem that affects employability and, if so, how? And so on. Data generated by new neurotechnologies may help identify potential problem areas: neuropsychologists can find out how these problems may play out in real life, in human terms, and what can be done about them. Thus, in the fifth incarnation of Neuropsychological Assessment, we have tried to provide a wide-ranging report on neuropsychology as science and as a clinical specialty that is as relevant today as it was when it first appeared 35 years ago. Certainly what is relevant in 2012 is somewhat different from 1976 as the scope of activities and responsibilities of neuropsychologists has enlarged and the knowledge base necessary for clinical practice as well as for research has expanded exponentially. Three major additions distinguish this fifth edition of Neuropsychological Assessment from the first. Most obvious to the experienced neuropsychologist is the proliferation of tests and the wealth of readily available substantiating data.
Second, a book such as this must provide practically useful information for neuropsychologists about the generations—yes, generations—of neuroimaging techniques that have evolved in the past 30 years. Further, especially exciting and satisfying is confirmation of what once was suspected about the neural organization underlying brain functions, thanks to the marriage of sensitive, focused clinical observations with sensitive, focused neuroimaging data. In this edition we convey what is known about the enormity of interwoven, interactive, and interdependent complexities of neuronal processing as the brain goes about its business, and how this relates to our human strengths and frailties. What remains the same in 2012 as it was in 1976 is the responsibility of clinicians to treat their patients as individuals, to value their individuality, and to respect them. Ultimately, our understandings about human behavior and its
neural underpinnings come from thoughtful and respectful observations of our patients, knowledge of their histories, and information about how they are living their lives.

Muriel Deutsch Lezak
Diane B. Howieson
Erin D. Bigler
Daniel Tranel
Acknowledgments

Once again we want to honor our neuropsychologist friends, colleagues, and mentors who have died in the past few years. Most of what is written in this text, and much of contemporary neuropsychology as a science and clinical profession, relies on their contributions to neuropsychology, whether directly or indirectly through their students and colleagues. We are deeply grateful for the insightful, innovative, integrative, and helpfully practical work of William W. Beatty, Edith F. Kaplan, John C. Marshall, Paul Satz, Esther Strauss, and Tom Tombaugh.

The authors gratefully acknowledge Tracy Abildskov for creating the various neuroimaging illustrations, Jo Ann Petrie for her editing, and Aubrey Scott for his artwork. Many of David W. Loring’s important contributions to the fourth edition of Neuropsychological Assessment enrich this edition as well. We miss his hand in this edition but are grateful to have what he gave us. And thanks, too, to Julia Hannay for some invaluable chapter sections retained from the fourth edition. Special thanks go to Kenneth Manzell for his aid in preparing the manuscript and illustrations.

We are fortunate to have many colleagues and friends in neuropsychology who—at work or in meetings—have stimulated our thinking and made available their work, their knowledge, and their expertise. The ongoing second-Wednesday Neuropsychology Case Conference in Portland continues to be an open-door free-for-all, and you are invited.

It has been a pleasure to work with our new editor, Joan Bossert, who has not only been encouraging and supportive but has helped us through some technical hoops and taught us about e-publishing. Tracy O’Hara, Development Editor, has done the heroic task of organizing the production idiosyncrasies of four writers into a cohesive manuscript while helping with some much needed data acquisition. Book production has been carefully timed and managed by Sr. Production Editor Susan Lee, who makes house calls.
Thanks, OUP team, for making this book possible. Last to get involved but far from least, our gratitude goes out to Eugenia Cooper Potter, best known as Genia, whose thorough scouring and polishing of text and references greatly helped bring
this book to life.
Contents

List of Figures
List of Tables
I THEORY AND PRACTICE OF NEUROPSYCHOLOGICAL ASSESSMENT

1. The Practice of Neuropsychological Assessment
    Examination purposes
    The multipurpose examination
    The Validity of Neuropsychological Assessment
    What Can We Expect of Neuropsychological Assessment in the 21st Century?

2. Basic Concepts
    Examining the Brain
    Laboratory Techniques for Assessing Brain Function
    Neuropsychology’s Conceptual Evolution
    Concerning Terminology
    Dimensions of Behavior
    Cognitive Functions
    Neuropsychology and the Concept of Intelligence: Brain Function Is Too Complex To Be Communicated in a Single Score
    Classes of Cognitive Functions
    Receptive Functions
    Memory
    Expressive Functions
    Thinking
    Mental Activity Variables
    Executive Functions
    Personality/Emotionality Variables

3. The Behavioral Geography of the Brain
    Brain Pathology and Psychological Function
    The Cellular Substrate
    The Structure of the Brain
    The Hindbrain
    The Midbrain
    The Forebrain: Diencephalic Structures
    The Forebrain: The Cerebrum
    The Limbic System
    The Cerebral Cortex and Behavior
    Lateral Organization
    Longitudinal Organization
    Functional Organization of the Posterior Cortex
    The Occipital Lobes and Their Disorders
    The Posterior Association Cortices and Their Disorders
    The Temporal Lobes and Their Disorders
    Functional Organization of the Anterior Cortex
    Precentral Division
    Premotor Division
    Prefrontal Division
    Clinical Limitations of Functional Localization

4. The Rationale of Deficit Measurement
    Comparison Standards for Deficit Measurement
    Normative Comparison Standards
    Individual Comparison Standards
    The Measurement of Deficit
    Direct Measurement of Deficit
    Indirect Measurement of Deficit
    The Best Performance Method
    The Deficit Measurement Paradigm

5. The Neuropsychological Examination: Procedures
    Conceptual Framework of the Examination
    Purposes of the Examination
    Examination Questions
    Conduct of the Examination
    Examination Foundations
    Examination Procedures
    Procedural Considerations in Neuropsychological Assessment
    Testing Issues
    Examining Special Populations
    Common Assessment Problems with Brain Disorders
    Maximizing the Patient’s Performance Level
    Optimal versus Standard Conditions
    When Optimal Conditions Are Not Best
    Talking to Patients
    Constructive Assessment

6. The Neuropsychological Examination: Interpretation
    The Nature of Neuropsychological Examination Data
    Different Kinds of Examination Data
    Quantitative and Qualitative Data
    Common Interpretation Errors
    Evaluation of Neuropsychological Examination Data
    Qualitative Aspects of Examination Behavior
    Test Scores
    Evaluation Issues
    Screening Techniques
    Pattern Analysis
    Integrated Interpretation

7. Neuropathology for Neuropsychologists
    Traumatic Brain Injury
    Severity Classifications and Outcome Prediction
    Neuropathology of TBI
    Penetrating Head Injuries
    Closed Head Injuries
    Closed Head Injury: Nature, Course, and Outcome
    Neuropsychological Assessment of Traumatically Brain Injured Patients
    Moderator Variables Affecting Severity of Traumatic Brain Injury
    Less Common Sources of Traumatic Brain Injury
    Cerebrovascular Disorders
    Stroke and Related Disorders
    Vascular Disorders
    Hypertension
    Vascular Dementia (VaD)
    Migraine
    Epilepsy
    Dementing Disorders
    Mild Cognitive Impairment
    Degenerative Disorders
    Cortical Dementias
    Alzheimer’s Disease (AD)
    Frontotemporal Lobar Degeneration (FTLD)
    Dementia with Lewy Bodies (DLB)
    Subcortical Dementias
    Movement Disorders
    Parkinson’s Disease/Parkinsonism (PD)
    Huntington’s Disease (HD)
    Progressive Supranuclear Palsy (PSP)
    Comparisons of the Progressive Dementias
    Other Progressive Disorders of the Central Nervous System Which May Have Important Neuropsychological Effects
    Multiple Sclerosis (MS)
    Normal Pressure Hydrocephalus (NPH)
    Toxic Conditions
    Alcohol-Related Disorders
    Street Drugs
    Social Drugs
    Environmental and Industrial Neurotoxins
    Infectious Processes
    HIV Infection and AIDS
    Herpes Simplex Encephalitis (HSE)
    Lyme Disease
    Chronic Fatigue Syndrome (CFS)
    Brain Tumors
    Primary Brain Tumors
    Secondary (Metastatic) Brain Tumors
    CNS Symptoms Arising from Brain Tumors
    CNS Symptoms Arising from Cancer Treatment
    Oxygen Deprivation
    Acute Oxygen Deprivation
    Chronic Oxygen Deprivation
    Carbon Monoxide Poisoning
    Metabolic and Endocrine Disorders
    Diabetes Mellitus (DM)
    Hypothyroidism (Myxedema)
    Liver Disease
    Uremia
    Nutritional Deficiencies

8. Neurobehavioral Variables and Diagnostic Issues
    Lesion Characteristics
    Diffuse and Focal Effects
    Site and Size of Focal Lesions
    Depth of Lesion
    Distance Effects
    Nature of the Lesion
    Time
    Nonprogressive Brain Disorders
    Progressive Brain Diseases
    Subject Variables
    Age
    Sex Differences
    Lateral Asymmetry
    Patient Characteristics: Race, Culture, and Ethnicity
    The Uses of Race/Ethnicity/Culture Designations
    The Language of Assessment
    Patient Characteristics: Psychosocial Variables
    Premorbid Mental Ability
    Education
    Premorbid Personality and Social Adjustment
    Problems of Differential Diagnosis
    Emotional Disturbances and Personality Disorders
    Psychotic Disturbances
    Depression
    Malingering
II A COMPENDIUM OF TESTS AND ASSESSMENT TECHNIQUES

9. Orientation and Attention
    Orientation
    Awareness
    Time
    Place
    Body Orientation
    Finger Agnosia
    Directional (Right–Left) Orientation
    Space
    Attention, Processing Speed, and Working Memory
    Attentional Capacity
    Working Memory/Mental Tracking
    Concentration/Focused Attention
    Processing Speed
    Complex Attention Tests
    Divided Attention
    Everyday Attention

10. Perception
    Visual Perception
    Visual Inattention
    Visual Scanning
    Color Perception
    Visual Recognition
    Visual Organization
    Visual Interference
    Auditory Perception
    Auditory Acuity
    Auditory Discrimination
    Auditory Inattention
    Auditory–Verbal Perception
    Nonverbal Auditory Reception
    Tactile Perception
    Tactile Sensation
    Tactile Inattention
    Tactile Recognition and Discrimination Tests
    Olfaction

11. Memory I: Tests
    Examining Memory
    Verbal Memory
    Verbal Automatisms
    Supraspan
    Words
    Story Recall
    Visual Memory
    Visual Recognition Memory
    Visual Recall: Verbal Response
    Visual Recall: Design Reproduction
    Visual Learning
    Hidden Objects
    Tactile Memory
    Incidental Learning
    Prospective Memory
    Remote Memory
    Recall of Public Events and Famous Persons
    Autobiographic Memory
    Forgetting

12. Memory II: Batteries, Paired Memory Tests, and Questionnaires
    Memory Batteries
    Paired Memory Tests
    Memory Questionnaires

13. Verbal Functions and Language Skills
    Aphasia
    Aphasia Tests and Batteries
    Aphasia Screening
    Testing for Auditory Comprehension
    Verbal Expression
    Naming
    Vocabulary
    Discourse
    Verbal Comprehension
    Verbal Academic Skills
    Reading
    Writing
    Spelling
    Knowledge Acquisition and Retention

14. Construction and Motor Performance
    Drawing
    Copying
    Miscellaneous Copying Tasks
    Free Drawing
    Assembling and Building
    Two-Dimensional Construction
    Three-Dimensional Construction
    Motor Skills
    Examining for Apraxia
    Neuropsychological Assessment of Motor Skills and Functions

15. Concept Formation and Reasoning
    Concept Formation
    Concept Formation Tests in Verbal Formats
    Concept Formation Tests in Visual Formats
    Symbol Patterns
    Sorting
    Sort and Shift
    Reasoning
    Verbal Reasoning
    Reasoning about Visually Presented Material
    Mathematical Procedures
    Arithmetic Reasoning Problems
    Calculations

16. Executive Functions
    The Executive Functions
    Volition
    Planning and Decision Making
    Purposive Action
    Self-Regulation
    Effective Performance
    Executive Functions: Wide Range Assessment
17. Neuropsychological Assessment Batteries
    Ability and Achievement
    Individual Administration
    Paper-and-Pencil Administration
    Batteries Developed for Neuropsychological Assessment
    Batteries for General Use
    Batteries Composed of Preexisting Tests
    Batteries for Assessing Specific Conditions
    HIV+
    Schizophrenia
    Neurotoxicity
    Dementia: Batteries Incorporating Preexisting Tests
    Traumatic Brain Injury
    Screening Batteries for General Use
    Computerized Neuropsychological Assessment Batteries

18. Observational Methods, Rating Scales, and Inventories
    The Mental Status Examination
    Rating Scales and Inventories
    Dementia Evaluation
    Mental Status Scales for Dementia Screening and Rating
    Mental Status and Observer Rating Scale Combinations
    Scales for Rating Observations
    Traumatic Brain Injury
    Evaluating Severity
    Choosing Outcome Measures
    Outcome Evaluation
    Evaluation of the Psychosocial Consequences of Head Injury
    Epilepsy Patient Evaluations
    Quality of Life
    Psychiatric Symptoms

19. Tests of Personal Adjustment and Emotional Functioning
    Objective Tests of Personality and Emotional Status
    Depression Scales and Inventories
    Anxiety Scales and Inventories
    Inventories and Scales Developed for Psychiatric Conditions
    Projective Personality Tests
    Rorschach Technique
    Storytelling Techniques
    Drawing Tasks

20. Testing for Effort, Response Bias, and Malingering
    Research Concerns
    Examining Response Validity with Established Tests
    Multiple Assessments
    Test Batteries and Other Multiple Test Sets
    Wechsler Scales
    Batteries and Test Sets Developed for Neuropsychological Assessment
    Memory Tests
    Single Tests
    Tests with a Significant Motor Component
    Special Techniques to Assess Response Validity
    Symptom Validity Testing (SVT)
    Forced-Choice Tests
    Variations on the Forced-Choice Theme
    Other Special Examination Techniques
    Self-Report Inventories and Questionnaires
    Personality and Emotional Status Inventories

Appendix A: Neuroimaging Primer
Appendix B: Test Publishers and Distributors
References
Test Index
Subject Index
List of Figures

The Behavioral Geography of the Brain
FIGURE 3.1 Schematic of a neuron. Photomicrograph. (See color Figure 3.1)
FIGURE 3.2 (a) Axial MRI, coronal MRI, sagittal MRI of anatomical divisions of the brain. (See color Figure 3.2a, b, and c)
FIGURE 3.3 Lateral surface anatomy postmortem (left) with MRI of living brain (right)
FIGURE 3.4 Ventricle anatomy. (See color Figure 3.4)
FIGURE 3.5 Scanning electron micrograph showing an overview of corrosion casts from the occipital cortex
FIGURE 3.6 Major blood vessels schematic
FIGURE 3.7 Thalamo-cortical topography demonstrated by DTI tractography. (See color Figure 3.7)
FIGURE 3.8 Memory and the limbic system
FIGURE 3.9 Cut-away showing brain anatomy viewed from a left frontal perspective with the left frontal and parietal lobes removed. (See color Figure 3.9)
FIGURE 3.10 DTI (diffusion tensor imaging) of major tracts. (See color Figure 3.10)
FIGURE 3.11 DTI of major tracts through the corpus callosum. (See color Figure 3.11)
FIGURE 3.12 Representative commissural DTI ‘streamlines’ showing cortical projections and cortical terminations of corpus callosum projections. (See color Figure 3.12)
FIGURE 3.13 Schematic diagram of visual fields, optic tracts, and the associated brain areas, showing left and right lateralization in humans
FIGURE 3.14 Diagram of a “motor homunculus” showing approximate relative sizes of specific regions of the motor cortex
FIGURE 3.15 Example of global/local stimuli
FIGURE 3.16 Example of spatial dyscalculia by a traumatically injured pediatrician
FIGURE 3.17a Attempts of a 51-year-old right hemisphere stroke patient to copy pictured designs with colored blocks
FIGURE 3.17b Attempts of a 31-year-old patient with a surgical lesion of the left visual association area to copy the 3 × 3 pinwheel design
FIGURE 3.18 Overwriting (hypergraphia) by a 48-year-old college-educated retired police investigator suffering right temporal lobe atrophy
FIGURE 3.19 Simplification and distortions of four Bender-Gestalt designs by a 45-year-old assembly line worker
FIGURE 3.20 The lobe-based divisions of the human brain and their functional anatomy
FIGURE 3.21 Brodmann’s cytoarchitectural map of the human brain
FIGURE 3.22 Lateral view of the left hemisphere, showing the ventral “what” and dorsal “where” visual pathways in the occipital-temporal and occipital-parietal regions
FIGURE 3.23 (a) This bicycle was drawn by the 51-year-old retired salesman who constructed the block designs of Figure 3.17a
FIGURE 3.24a Flower drawing, illustrating left-sided inattention
FIGURE 3.24b Copy of the Taylor Complex Figure (see p. 575), illustrating inattention to the left side of the stimulus
FIGURE 3.24c Writing to copy, illustrating inattention to the left side of the to-be-copied sentences; written by a 69-year-old man
FIGURE 3.24d Example of inattention to the left visual field
FIGURE 3.25 Ventral view of H.M.’s brain ex situ using 3-D MRI reconstruction
FIGURE 3.26 The major subdivisions of the human frontal lobes identified on surface 3-D MRI reconstructions of the brain

The Rationale of Deficit Measurement
FIGURE 4.1 Calculations test errors (circled) made by a 55-year-old dermatologist with a contrecoup

The Neuropsychological Examination: Procedures
FIGURE 5.1 An improvised test for lexical agraphia
FIGURE 5.2 Copies of the Bender-Gestalt designs drawn on one page by a 56-year-old sawmill worker with phenytoin toxicity

The Neuropsychological Examination: Interpretation
FIGURE 6.1 House-Tree-Person drawings of a 48-year-old advertising manager
FIGURE 6.2 This bicycle was drawn by a 61-year-old who suffered a stroke involving the right parietal lobe
FIGURE 6.3 The relationship of some commonly used test scores to the normal curve and to one another

Neuropathology for Neuropsychologists
FIGURE 7.1 This schematic is of a neuron and depicts various neuronal membrane and physiological effects incurred during the initial stage of TBI (See color Figure 7.1)
FIGURE 7.2 Proteins are the building blocks of all tissues including all types of neural cells and in this diagram the Y-axis depicts the degree of pathological changes in protein integrity with TBI
FIGURE 7.3 There are two pathways that lead to a breakdown in the axon from TBI, referred to as axotomy
FIGURE 7.4 CT scans prior to neurosurgery depicting the trajectory and path of a bullet injury to frontotemporal areas of the brain
FIGURE 7.5 MRI demonstration of the effects of penetrating brain injury
FIGURE 7.6 Postmortem section showing the central penetration wound from a bullet which produces a permanent cavity in the brain
FIGURE 7.7 Diagram showing impulsive loading from the rear (left) and front (right) with TBI
FIGURE 7.8 Mid-sagittal schematic showing the impact dynamics of angular decelerations of the brain as the head hits a fixed object
FIGURE 7.9 Wave propagation and contact phenomena following impact to the head
FIGURE 7.10 The colorized images represent a 3-D CT recreation of the day-of-injury hemorrhages resulting from a severe TBI (See color Figure 7.10)
FIGURE 7.11 Mid-sagittal MRI with an atrophied corpus callosum and old shear lesion in the isthmus (See color Figure 7.11)
FIGURE 7.12 MRI comparisons at different levels of TBI severity in children with a mean age of 13.6
FIGURE 7.13 3-D MRI reconstruction of the brain highlighting the frontal focus of traumatic hemorrhages associated with a severe TBI. (See color Figure 7.13)
FIGURE 7.14 This is a case of mild TBI where conventional imaging (upper left) shows no abnormality but the fractional anisotropy DTI map (top, middle image) does (See color Figure 7.14)
FIGURE 7.15 The brain regions involved in TBI that overlap with PTSD are highlighted in this schematic (See color Figure 7.15)
FIGURE 7.16 “The three neurodegenerative diseases classically evoked as subcortical dementia are Huntington’s chorea, Parkinson’s disease, and progressive supranuclear palsy”
FIGURE 7.17 Tracings of law professor’s Complex Figure copies (see text for description of his performance)
FIGURE 7.18 Immediate (upper) and delayed (lower) recall of the Complex Figure by the law professor with Huntington’s disease
FIGURE 7.19 Pyramid diagram of HIV-Associated Neurocognitive Disorders (HAND)
FIGURE 7.20 Schematic flow diagram showing a diagnostic decision tree for various neurocognitive disorders associated with HIV
FIGURE 7.21 Autopsy-proved HIV encephalitis in an AIDS patient with dementia
FIGURE 7.22 The devastating effects of structural damage from herpes simplex encephalitis
FIGURE 7.23 Postmortem appearance of a glioblastoma multiforme
FIGURE 7.24 Postmortem appearance of a mid-sagittal frontal meningioma (left) and a large inferior frontal meningioma (right)
FIGURE 7.25 Postmortem appearance of malignant melanoma
FIGURE 7.26 Postmortem appearance of pulmonary metastasis to the brain
FIGURE 7.27 The MRIs show bilateral ischemic hypoxic injury characteristic of anoxic brain injury

Neurobehavioral Variables and Diagnostic Issues
FIGURE 8.1 The handedness inventory
FIGURE 8.2 The target matrix for measuring manual speed and accuracy
FIGURE 8.3 Tapley and Bryden’s (1985) dotting task for measuring manual speed

Orientation and Attention
FIGURE 9.1 One of the five diagrams of the Personal Orientation Test
FIGURE 9.2 Curtained box used by Benton to shield stimuli from the subject’s sight when testing finger localization
FIGURE 9.3 Outline drawings of the right and left hands with fingers numbered for identification
FIGURE 9.4a Floor plan of his home drawn by a 55-year-old mechanic injured in a traffic accident
FIGURE 9.4b Floor plan of their home drawn by the mechanic’s spouse
FIGURE 9.5 Topographical Localization responses by a 50-year-old engineer who had a ruptured right anterior communicating artery
FIGURE 9.6 Corsi’s Block-tapping board
FIGURE 9.7 The symbol-substitution format of the WIS Digit Symbol Test
FIGURE 9.8 The Symbol Digit Modalities Test (SDMT)
FIGURE 9.9 Practice samples of the Trail Making Test

Perception
FIGURE 10.1 This sample from the Pair Cancellation test (Woodcock-Johnson III Tests of Cognitive Abilities)
FIGURE 10.2 The Line Bisection test
FIGURE 10.3 Performance of patient with left visuospatial inattention on the Test of Visual Neglect
FIGURE 10.4 The Bells Test (reduced size)
FIGURE 10.5 Letter Cancellation task: “Cancel C’s and E’s” (reduced size)
FIGURE 10.6 Star Cancellation test (reduced size)
FIGURE 10.7 Indented Paragraph Reading Test original format for copying
FIGURE 10.8 Indented Paragraph Reading Test with errors made by the 45-year-old traumatically injured pediatrician
FIGURE 10.9 This attempt to copy an address was made by a 66-year-old retired paper mill worker two years after he had suffered a right frontal CVA
FIGURE 10.10 Flower drawn by patient with left visuospatial neglect
FIGURE 10.11 Judgment of Line Orientation
FIGURE 10.12 Focal lesions associated with JLO failures. (See color Figure 10.12)
FIGURE 10.13 Test of Facial Recognition
FIGURE 10.14 An item of the Visual Form Discrimination test
FIGURE 10.15 Example of the subjective contour effect
FIGURE 10.16 Closure Speed (Gestalt Completion)
FIGURE 10.17 Two items from the Silhouettes subtest of the Visual Object and Space Perception Test
FIGURE 10.18 Multiple-choice item from the Object Decision subtest of the Visual Object and Space Perception Test
FIGURE 10.19 Easy items of the Hooper Visual Organization Test
FIGURE 10.20 Closure Flexibility (Concealed Figures)
FIGURE 10.21 Example of a Poppelreuter-type overlapping figure
FIGURE 10.22 Rey’s skin-writing procedures

Memory I: Tests
FIGURE 11.1 Memory for Designs models
FIGURE 11.2 Complex Figure Test performance of a 50-year-old hemiparetic engineer with severe right frontal damage of 14 years’ duration
FIGURE 11.3 Two representative items of the Benton Visual Retention Test
FIGURE 11.4 Ruff-Light Trail Learning Test (RuLiT) (reduced size)
FIGURE 11.5 One of the several available versions of the Seguin-Goddard Formboard used in the Tactual Performance Test

Verbal Functions and Language Skills
FIGURE 13.1 Alzheimer patient’s attempt to write (a) “boat” and (b) “America.”

Construction and Motor Performance
FIGURE 14.1 The Hutt adaptation of the Bender-Gestalt figures
FIGURE 14.2 Rey Complex Figure (actual size)
FIGURE 14.3 Taylor Complex Figure (actual size)
FIGURE 14.4 Modified Taylor Figure
FIGURE 14.5 The four Medical College of Georgia (MCG) Complex Figures (actual size)
FIGURE 14.6 An example of a Complex Figure Test Rey-Osterrieth copy
FIGURE 14.7 Structural elements of the Rey Complex Figure
FIGURE 14.8 Sample freehand drawings for copying
FIGURE 14.9 Freehand drawing of a clock by a 54-year-old man with a history of anoxia resulting in bilateral hippocampus damage
FIGURE 14.10 Block Design test
FIGURE 14.11 Voxel lesion-symptom mapping on 239 patients from the Iowa Patient Registry projected on the Iowa template brain
FIGURE 14.12 Example of a WIS-type Object Assembly puzzle item
FIGURE 14.13 Test of Three-Dimensional Constructional Praxis, Form A (A.L. Benton)
FIGURE 14.14 Illustrations of defective performances
FIGURE 14.15 The Purdue Pegboard Test

Concept Formation and Reasoning
FIGURE 15.1 Identification of Common Objects stimulus card (reduced size)
FIGURE 15.2 Examples of two levels of difficulty of Progressive Matrices-type items
FIGURE 15.3 The Kasanin-Hanfmann Concept Formation Test
FIGURE 15.4 The Wisconsin Card Sorting Test
FIGURE 15.5 A simple method for recording the Wisconsin Card Sorting Test performance
FIGURE 15.6 WIS-type Picture Completion test item
FIGURE 15.7 WIS-type Picture Arrangement test item
FIGURE 15.8 Sample items from the Block Counting task
FIGURE 15.9 Example of a page of arithmetic problems laid out to provide space for written calculations

Executive Functions
FIGURE 16.1 Bender-Gestalt copy trial rendered by a 42-year-old interior designer a year after she had sustained a mild anterior subarachnoid hemorrhage
FIGURE 16.2 House and Person drawings by the interior designer whose Bender-Gestalt copy trial is given in Figure 16.1
FIGURE 16.3 Two of the Porteus mazes
FIGURE 16.4 Tower of London examples
FIGURE 16.5 A subject performing the Iowa Gambling Task on a computer
FIGURE 16.6 Card selections on the Iowa Gambling Task as a function of group (Normal Control, Brain-damaged Control, Ventromedial Prefrontal), deck type (disadvantageous v. advantageous), and trial block
FIGURE 16.7 A 23-year-old craftsman with a high school education made this Tinkertoy “space platform”
FIGURE 16.8 “Space vehicle” was constructed by a neuropsychologist unfamiliar with Tinkertoys
FIGURE 16.9 The creator of this “cannon” was a 60-year-old left-handed contractor who had had a small left parietal stroke
FIGURE 16.10 This 40-year-old salesman was trying to make a “car” following a right-sided stroke
FIGURE 16.11 Figural Fluency Test responses by 62-year-old man described on p. 698
FIGURE 16.12 Ruff Figural Fluency Test (Parts I-V)
FIGURE 16.13 Repetitive patterns which subject is asked to maintain
FIGURE 16.14 Drawing of a clock, illustrating perseveration
FIGURE 16.15 Signature of middle-aged man who had sustained a gunshot wound to the right frontal lobe

Neuropsychological Assessment Batteries
FIGURE 17.1 This figure summarizes the lesion mapping of cognitive abilities showing where abnormally low WAIS-III Index Scores are most often associated with focal lesions
FIGURE 17.2 The Peabody Individual Achievement Test
FIGURE 17.3 Histograms illustrating the distribution of scores for each measure in the ADC UDS Neuropsychological Test Battery

Observational Methods, Rating Scales, and Inventories
FIGURE 18.1 Partial items from the Montreal Cognitive Assessment
FIGURE 18.2 Galveston Orientation and Amnesia Test (GOAT) record form

Tests of Personal Adjustment and Emotional Functioning
FIGURE 19.1 Mean MMPI profile for patients with diagnosed brain disease
FIGURE 19.2 MMPI-2 profile in a patient with medically unexplained “spells” and significant psychosocial stressors
FIGURE 19.3 Illustration of the ventromedial prefrontal region

APPENDIX A: Neuroimaging Primer
FIGURE A1 With computerized tomography (CT) and magnetic resonance imaging (MRI), gross brain anatomy can be readily visualized. (See color Figure A1)
FIGURE A2 This scan, taken several months after a severe traumatic brain injury, shows how an old right frontal contusion appears on the different imaging sequences
FIGURE A3 These horizontal scan images are from a patient with a severe TBI
FIGURE A4 The postmortem coronal section in the center of this figure shows the normal symmetry of the brain and the typically white appearance of normal white matter, and gray matter (See color Figure A4)
FIGURE A5 Diffusion tensor imaging (DTI) tractography is depicted in these images of the brain (See color Figure A5)
FIGURE A6 DTI tractography of a patient who sustained a severe TBI showing loss of certain tracts in the frontal and isthmus region (See color Figure A6)
FIGURE A7 This figure shows how structural 3-D MRI may be integrated with 3-D DTI tractography. (See color Figure A7)
FIGURE A8 The MRI image on the left is at approximately the same level as the positron emission computed tomogram or PET scan on the right of a 58-year-old patient (See color Figure A8)
FIGURE A9 In plotting functional MRI (fMRI) activation, the regions of statistically significant activation are mapped onto a universal brain model. (See color Figure A9)
List of Tables
Basic Concepts
TABLE 2.1 Most Commonly Defined Aphasic Syndromes
The Behavioral Geography of the Brain
TABLE 3.1 Functional dichotomies of left and right hemispheric dominance
The Rationale of Deficit Measurement
TABLE 4.1 North American Adult Reading Test (NAART): Word List
The Neuropsychological Examination: Procedures
TABLE 5.1 Classification of Ability Levels
The Neuropsychological Examination: Interpretation
TABLE 6.1 Standard Score Equivalents for 21 Percentile Scores Ranging from 1 to 99
TABLE 6.2 Behavior Changes that are Possible Indicators of a Pathological Brain Process
Neuropathology for Neuropsychologists
TABLE 7.1 Diagnostic Criteria for Mild TBI by the American Congress of Rehabilitation Medicine
TABLE 7.2 Selected Signs and Symptoms of a Concussion
TABLE 7.3 Estimates of Injury Severity Based on Posttraumatic Amnesia (PTA) Duration
TABLE 7.4 Test Completion Codes
TABLE 7.5 Exclusion Criteria for Diagnosis of Alzheimer's Disease
TABLE 7.6 Uniform Data Set of the National Alzheimer's Coordination Center Neuropsychological Test Battery
TABLE 7.7 Memory in Alzheimer's Disease
TABLE 7.8 A Comparison of Neuropsychological Features of AD, FTLD, LBD, PDD, HD, PSP, and VaD
Neurobehavioral Variables and Diagnostic Issues
TABLE 8.1 Some Lateral Preference Inventories and Their Item Characteristics
Orientation and Attention
TABLE 9.1 Temporal Orientation Test Scores for Control and Brain Damaged Patients
TABLE 9.2 Sentence Repetition: Form 1
TABLE 9.3 Sentence Repetition (MAE): Demographic Adjustments for Raw Scores
TABLE 9.4 Example of Consonant Trigrams Format
TABLE 9.5 Symbol Digit Modalities Test Norms for Ages 18 to 74
Perception
TABLE 10.1 The Bells Test: Omissions by Age and Education
TABLE 10.2 Judgment of Line Orientation: Score Corrections
TABLE 10.3 Facial Recognition Score Corrections
TABLE 10.4 The Face-Hand Test
TABLE 10.5 Skin-Writing Test Errors Made by Four Adult Groups
Memory I: Tests
TABLE 11.1 Telephone Test Scores for Two Age Groups
TABLE 11.2 Benson Bedside Memory Test
TABLE 11.3 Rey Auditory-Verbal Learning Test Word Lists
TABLE 11.4 Word Lists for Testing AVLT Recognition, Lists A-B
TABLE 11.5 Multiple-Choice and Cued-Recall Items for Forms 1–4 of SRT
TABLE 11.6 Norms for the Most Used SR Scores for Age Groups with 30 or More Subjects
TABLE 11.7 WMS-III Logical Memory Recognition Scores as a Function of Age or LM II Scores
TABLE 11.8 Expected Scores for Immediate and Delayed Recall Trials of the Babcock Story Recall Test
TABLE 11.9 Percentiles for Adult Accuracy Scores on Memory Trials of the Complex Figure Test (Rey-O)
TABLE 11.10 Medical College of Georgia Complex Figure (MCGCF) Data for Two Older Age Groups
TABLE 11.11 BVRT Norms for Administration A: Adults Expected Number Correct Scores
Verbal Functions and Language Skills
TABLE 13.1 The Most Frequent Alternative Responses to Boston Naming Test Items
TABLE 13.2 Normal Boston Naming Test Score Gain with Phonemic Cueing
TABLE 13.3 The Token Test
TABLE 13.4 A Summary of Scores Obtained by the Four Experimental Groups on The Token Test
TABLE 13.5 Adjusted Scores and Grading Scheme for the "Short Version" of the Token Test
TABLE 13.6 The National Adult Reading Test
Construction and Motor Performance
TABLE 14.1 Scoring System for the Rey Complex Figure
TABLE 14.2 Scoring System for the Taylor Complex Figure
TABLE 14.3 Modified Taylor Figure
TABLE 14.4 Scoring Systems for the MCG Complex Figures
TABLE 14.5 Scoring System of Qualitative Errors
TABLE 14.6 Complex Figure Organizational Quality Scoring
TABLE 14.7 Scoring System for Bicycle Drawings
TABLE 14.8 Bicycle Drawing Means and Standard Deviations for 141 Blue Collar Workers
TABLE 14.9 Scoring System for House Drawing
TABLE 14.10 WAIS-IV Block Design Score Changes with Age
TABLE 14.11 Activities for Examining Practic Functions
Concept Formation and Reasoning
TABLE 15.1 Matrix Reasoning and Vocabulary are Age-corrected Scaled Scores
TABLE 15.2 First Series of Uncued Arithmetic Word Problems
TABLE 15.3 Benton's Battery of Arithmetic Tests
Executive Functions
TABLE 16.1 Items Used in the Tinkertoy Test
TABLE 16.2 Tinkertoy Test: Scoring for Complexity
TABLE 16.3 Comparisons Between Groups on np and Complexity Scores
TABLE 16.4 Verbal Associative Frequencies for the 14 Easiest Letters
TABLE 16.5 Controlled Oral Word Association Test: Adjustment Formula for Males (M) and Females (F)
TABLE 16.6 Controlled Oral Word Association Test: Summary Table
Neuropsychological Assessment Batteries
TABLE 17.1 Rapid Semantic Retrieval Mean Scores for 1-min Trial
TABLE 17.2 CDEs: Traumatic Brain Injury Outcome Measures
TABLE 17.3 Repeatable Battery for the Assessment of Neuropsychological Status Test Means
Observational Methods, Rating Scales, and Inventories
TABLE 18.1 Dementia Score
TABLE 18.2 Glasgow Coma Scale
TABLE 18.3 Severity Classification Criteria for the Glasgow Coma Scale (GCS)
TABLE 18.4 Frequency of "Bad" and "Good" Outcomes Associated with the Glasgow Coma Scale
TABLE 18.5 The Eight Levels of Cognitive Functioning of the "Rancho Scale"
TABLE 18.6 Disability Rating Scale
TABLE 18.7 Item Clusters and Factors from Part 1 of the Katz Adjustment Scale
TABLE 18.8 Mayo-Portland Adaptability Inventory (MPAI) Items by Subscales
TABLE 18.9 Satisfaction With Life Scale (SWLS)
Tests of Personal Adjustment and Emotional Functioning
TABLE 19.1 MMPI-2 RC Scales and corresponding Clinical Scales from MMPI-2
TABLE 19.2 Sickness Impact Profile (SIP) Categories and Composite Scales
TABLE 19.3 Major Response Variables Appearing in Every Rorschach Scoring System
Testing for Effort, Response Bias, and Malingering
TABLE 20.1 Malingering Criteria Checklist
TABLE 20.2 Confidence Intervals (CIs) for Random Responses for Several Halstead-Reitan Battery Tests
TABLE 20.3 D.E. Hartman (2002) Criteria for Evaluating Stand-alone Malingering and Symptom Validity Tests …
TABLE 20.4 Percentile Norms for Time (in Seconds) Taken to Count Ungrouped Dots
TABLE 20.5 Percentile Norms for Time (in Seconds) Taken to Count Grouped Dots
TABLE 20.6 Autobiographical Memory Interview
I Theory and Practice of Neuropsychological Assessment
1 The Practice of Neuropsychological Assessment

Imaging is not enough.
Mortimer Mishkin, 1988
Clinical neuropsychology is an applied science concerned with the behavioral expression of brain dysfunction. It owes its primordial—and often fanciful— concepts to those who, since earliest historic times, puzzled about what made people do what they did and how. These were the philosophers, physicians, scientists, artists, tinkerers, and dreamers who first called attention to what seemed to be linkages between body—not necessarily brain—structures and people’s common responses to common situations as well as their behavioral anomalies (Castro-Caldas and Grafman, 2000; Finger, 1994, 2000; C.G. Gross, 1998; L.H. Marshall and Magoun, 1998). In the 19th century the idea of controlled observations became generally accepted, thus providing the conceptual tool with which the first generation of neuroscientists laid out the basic schema of brain-behavior relationships that hold today (Benton, 2000; Boring, 1950; M. Critchley and Critchley, 1998; Hécaen et Lanteri-Laura, 1977; N.J. Wade and Brozek, 2001). In the first half of the 20th century, war-damaged brains gave the chief impetus to the development of clinical neuropsychology. The need for screening and diagnosis of brain injured and behaviorally disturbed servicemen during the first World War and for their rehabilitation afterwards created large-scale demands for neuropsychology programs (e.g., K. Goldstein, 1995 [1939]; Homskaya, 2001; see references in Luria, 1973b; Poppelreuter, 1990 [1917]; W.R. Russell [see references in Newcombe, 1969]). The second World War and then the wars in east Asia and the Mideast promoted the development of many talented neuropsychologists and of increasingly sophisticated examination and treatment techniques. While clinical neuropsychology can trace its lineage directly to the clinical neurosciences, psychology contributed the two other domains of knowledge and skill that are integral to the scientific discipline and clinical practices of neuropsychology today. 
Educational psychologists, beginning with Binet (with Simon, 1908) and Spearman (1904), initially developed tests to capture that
elusive concept "intelligence.” Following these pioneers, mental measurement specialists produced a multitude of examination techniques to screen recruits for the military and to assist in educational evaluations. Some of these techniques—such as Raven’s Progressive Matrices, the Wechsler Intelligence Scales, and the Wide Range Achievement Tests—have been incorporated into the neuropsychological test canon (W. Barr, 2008; Boake, 2002). Society’s acceptance of educational testing led to a proliferation of large-scale, statistics-dependent testing programs that provided neuropsychology with an understanding of the nature and varieties of mental abilities from a normative perspective. Educational testing has also been the source of ever more reliable measurement techniques and statistical tools for test standardization and the development of normative data, analysis of research findings, and validation studies (Mayrhauser, 1992; McFall and Townsend, 1998; Urbina, 2004). Clinical psychologists and psychologists specializing in personality and social behavior research borrowed from and further elaborated the principles and techniques of educational testing, giving neuropsychology this important assessment dimension (Cripe, 1997; G.J. Meyer et al., 2001). Psychology’s other critical contribution to neuropsychological assessment comes primarily from experimental studies of cognitive functions in both humans and other animals. In its early development, human studies of cognition mainly dealt with normal subjects—predominantly college students who sometimes earned course credits for their cooperation. Animal studies and clinical reports of brain injured persons, especially soldiers with localized wounds and stroke patients, generated much of what was known about the alterations and limitations of specific cognitive functions when one part of the brain is missing or compromised.
In the latter half of the 20th century, many experimental psychologists became aware of the wealth of information about cognitive functions to be gained from studying brain injured persons, especially those with localized lesions (e.g., G. Cohen et al., 2000; Gazzaniga, 2009, passim; Tulving and Craik, 2000, passim). Similarly, neuroscientists discovered the usefulness of cognitive constructs and psychological techniques when studying brain-behavior relationships (Bilder, 2011; Fuster, 1995; Luria, 1966, 1973b). Now in the 21st century, dynamic imaging techniques permit viewing functioning brain structures, further refining understanding of the neural foundations of behavior (Friston, 2009). Functional neuroimaging gives psychological constructs the neurological bases supporting analysis and comprehension of the always unique and often anomalous multifaceted behavioral presentations of brain
injured patients. When doing assessments, clinical neuropsychologists typically address a variety of questions of both neurological and psychological import. The diversity of problems and persons presents an unending challenge to examiners who want to satisfy the purposes for which the examination was undertaken and still evaluate patients at levels suited to their capacities and limitations. In this complex and expanding field, few facts or principles can be taken for granted, few techniques would not benefit from modifications, and few procedures will not be bent or broken as knowledge and experience accumulate. The practice of neuropsychology calls for flexibility, curiosity, inventiveness, and empathy even in the seemingly most routine situations (B. Caplan and Shechter, 1995; Lezak, 2002). Each neuropsychological evaluation holds the promise of new insights into the workings of the brain and the excitement of discovery. The rapid evolution of neuropsychological assessment in recent years reflects a growing sensitivity among clinicians generally to the practical problems of identification, assessment, care, and treatment of brain impaired patients. Psychologists, psychiatrists, and counselors ask for neuropsychological assistance in identifying those candidates for their services who may have underlying neurological disorders. Neurologists and neurosurgeons request behavioral evaluations to aid in diagnosis and to document the course of brain disorders or the effects of treatment. Rehabilitation specialists request neuropsychological assessments to assist in rehabilitation planning and management of a neurological condition (Malec, 2009). With the worldwide increase in longevity and the neurological problems associated with aging, a fruitful interaction is taking place between neuropsychology and gerontology that enhances the knowledge and clinical applications of each discipline (see Chapter 8, pp. 354–361).
Child neuropsychology has developed hand in hand with advances in the study of mental retardation, neurodevelopmental disorders including learning disabilities, and children’s behavior problems. As this text concerns neuropsychological issues relevant for adults, we refer the interested reader to the current child neuropsychology literature (e.g., Baron, 2004; Hunter and Donders, 2007; Semrud-Clikeman and Teeter Ellison, 2009; Yeates, Ris, et al., 2010). Adults whose cognitive and behavioral problems stem from developmental disorders or childhood onset conditions may also need neuropsychological attention. These persons are more likely to be seen in clinics or by neuropsychologists specializing in the care of adults. However, the
preponderance of the literature on their problems is in books and articles dealing with developmental conditions such as attention deficit hyperactivity disorder, spina bifida, or hydrocephalus arising from a perinatal incident, or with the residuals of premature birth or childhood meningitis, or the effects of cancer treatment in childhood. When this book first appeared, much of the emphasis in clinical neuropsychology was on assessing behavioral change. In part this occurred because much of the need had been for assistance with diagnostic problems. Moreover, since many patients seen by neuropsychologists were considered too limited in their capacity to benefit from behavioral training programs and counseling, these kinds of treatment did not seem to offer practical options for their care. Yet, as one of the clinical sciences, neuropsychology has been evolving naturally: assessment tends to play a predominant role while these sciences are relatively young; treatment techniques develop as diagnostic categories and etiological relationships are defined and clarified, and the nature of the patients’ disorders becomes better understood. Today, treatment planning and evaluation have become not merely commonplace but often necessary considerations for neuropsychologists performing assessments.

EXAMINATION PURPOSES

Any of six different purposes may prompt a neuropsychological examination: diagnosis; patient care—including questions about management and planning; treatment-1: identifying treatment needs, individualizing treatment programs, and keeping abreast of patients’ changing treatment requirements; treatment-2: evaluating treatment efficacy; research, both theoretical and applied; and now in the United States and to a lesser extent elsewhere, forensic questions are frequently referred to neuropsychologists. Each purpose calls for some differences in assessment strategies.
Yet many assessments serve two or more purposes, requiring the examiner to integrate the strategies in order to gain the needed information about the patient in the most focused and succinct manner possible. 1. Diagnosis. Neuropsychological assessment can be useful for discriminating between psychiatric and neurological symptoms, identifying a possible neurological disorder in a nonpsychiatric patient, helping to distinguish between different neurological conditions, and providing behavioral data for localizing the site—or at least the hemisphere side—of a lesion. However, the use of neuropsychological assessment as a diagnostic tool has diminished
while its contributions to patient care and treatment and to understanding behavioral phenomena and brain function have grown. This shift is due at least in part to the development of highly sensitive and reliable noninvasive neurodiagnostic techniques (pp. 864–870, Appendix A). Today, accurate diagnosis and lesion localization are often achieved by means of the neurological examination and laboratory data. Still, conditions remain in which even the most sensitive laboratory analyses may not be diagnostically enlightening, such as toxic encephalopathies (e.g., L.A. Morrow, 1998; Rohlman et al., 2008; B. Weiss, 2010), Alzheimer’s disease and related dementing processes (e.g., Y.L. Chang et al., 2010; Derrer et al., 2001; Welsh-Bohmer et al., 2003), or some autoimmune disorders which present with psychiatric symptoms (E.K. Geary et al., 2010; Nowicka-Sauer et al., 2011; Ponsford, Cameron, et al., 2011). In these conditions the neuropsychological findings can be diagnostically crucial. Even when the site and extent of a brain lesion have been shown on imaging, the image will not identify the nature of residual behavioral strengths and the accompanying deficits: for this, neuropsychological assessment is needed. It has been known for decades that despite general similarities in the pattern of brain function sites, these patterns will differ more or less between people. These kinds of differences were demonstrated in three cases with localized frontal lesions that appeared quite similar on neuroimaging yet each had a distinctively different psychosocial outcome (Bigler, 2001a). Moreover, cognitive assessment can document mental abilities that are inconsistent with anatomic findings, such as the 101-year-old nun whose test scores were high but whose autopsy showed “abundant neurofibrillary tangles and senile plaques, the classic lesions of Alzheimer’s disease” (Snowdon, 1997).
Markowitsch and Calabrese (1996), too, discussed instances in which patients’ level of functioning exceeded expectations based on neuroimaging. In another example, adults who had shunts to treat childhood hydrocephalus may exhibit very abnormal neuroradiological findings yet perform adequately and sometimes at superior levels on cognitive tasks (Feuillet et al., 2007; Lindquist et al., 2011). Thus, neuropsychological techniques will continue to be an essential part of the neurodiagnostic apparatus. Although limited in its applications as a primary diagnostic tool, neuropsychological assessment can aid in prodromal or early detection and prediction of dementing disorders or outcome (Seidman et al., 2010). The earliest detection of cognitive impairments during the prodrome as well as conversion to Alzheimer’s disease often comes in neuropsychological
assessments (R.M. Chapman et al., 2011; Duara et al., 2011; Ewers et al., 2010). For identified carriers of the Huntington’s disease gene, the earliest impairments can show up as cognitive deficits identified in neuropsychological assessments, even before the onset of motor abnormalities (Peavy et al., 2010; Stout et al., 2011). Pharmacologic research may engage neuropsychological assessment to assist in predicting responders and best psychopharmacological treatments in mood disorders (Gudayol-Ferre et al., 2010). In patients with intractable epilepsy, neuropsychological evaluations are critical for identifying candidates for surgery as well as for implementing postsurgical programs (Baxendale and Thompson, 2010; Jones-Gotman, Smith, et al., 2010). Screening is another aspect of diagnosis. Until quite recently, screening was a rather crudely conceived affair, typically dedicated to separating out “brain damaged” patients from among a diagnostically mixed population such as might be found in long-term psychiatric care facilities. Little attention was paid to either base rate issues or the prevalence of conditions in which psychiatric and neurologic contributions were mixed and interactive (e.g., Mapou, 1988; A. Smith, 1983; C.G. Watson and Plemel, 1978, who discussed this issue). Yet screening has a place in neuropsychological assessment when used in a more refined manner to identify persons most likely at risk for some specified condition or in need of further diagnostic study, and where brevity is required—whether because of the press of patients who may benefit from neuropsychological assessment (D.N. Allen et al., 1998) or because the patient’s condition may preclude a lengthy assessment (S. Walker, 1992; also see Chapter 6, p. 175). In the last decade screening tests have been developed for identifying neurocognitive and neurobehavioral changes in TBI (traumatic brain injury) patients (Donnelly et al., 2011). 2. Patient care and planning.
Whether or not diagnosis is an issue, many patients are referred for detailed information about their cognitive status, behavioral alterations, and personality characteristics—often with questions about their adjustment to their disabilities—so that they and the people responsible for their well-being may know how the neurological condition has affected their behavior. At the very least the neuropsychologist has a responsibility to describe the patient as fully as necessary for intelligent understanding and care. Descriptive evaluations may be employed in many ways in the care and treatment of brain injured patients. Precise descriptive information about
cognitive and emotional status is essential for careful management of many neurological disorders. Rational planning usually depends on an understanding of patients’ capabilities and limitations, the kinds of psychological change they are undergoing, and the impact of these changes on their experiences of themselves and on their behavior. A 55-year-old right-handed management expert with a bachelor’s degree in economics was hospitalized with a stroke involving the left frontoparietal cortex three months after taking over as chief executive of a foundering firm. He had been an effective troubleshooter who devoted most of his waking hours to work. In this new post, his first as chief, his responsibilities called for abilities to analyze and integrate large amounts of information, including complex financial records and sales and manufacturing reports; creative thinking; good judgment; and rebuilding the employees’ faltering morale. Although acutely he had displayed right-sided weakness and diminished sensation involving both his arm and leg, motor and sensory functions rapidly returned to near normal levels and he was discharged from the hospital after ten days. Within five months he was walking 3 1/2 miles daily, he was using his right hand for an estimated 75% of activities, and he felt fit and ready to return to work. In questioning the wisdom of this decision, his neurologist referred him for a neuropsychological examination. This bright man achieved test scores in the high average to superior ability ranges yet his performance was punctuated by lapses of judgment (e.g., when asked what he would do if he was the first to see smoke and fire in a movie theater he said, “If you’re the first—if it’s not a dangerous fire try to put it out by yourself. However, if it’s a large fire beyond your control you should immediately alert the audience by yelling and screaming and capturing their attention.”). 
When directed to write what was wrong with a picture portraying two persons sitting comfortably out in the rain, he listed seven different answers such as, “Right-hand side of rain drops moves [sic] to right on right side of pict. [sic],” but completely overlooked the central problem. Impaired self-monitoring appeared in his rapid performance of a task requiring the subject to work quickly while keeping track of what has already been done (Figural Fluency Test)—he worked faster than most but left a trail of errors; in assigning numbers to symbols from memory (Symbol Digit Modalities Test) without noting that he gave the same number to two different symbols only inches apart; and in allowing two small errors to remain on a page of arithmetic calculations done without a time limit. Not surprisingly, he had word finding difficulties which showed up in his need for phonetic cueing to retrieve six words on the Boston Naming Test while not recalling two even with cueing. This problem also appeared in discourse; for example, he stated that a dog and a lion were alike in being “both members of the animal factory, I mean animal life.” On self-report of his emotional status (Beck Depression Inventory, Symptom Check List-90-R) he portrayed himself as having no qualms, suffering no emotional or psychiatric symptoms. In interview the patient assured me [mdl] that he was ready to return to a job that he relished. As his work has been his life, he had no “extracurricular” interests or activities. He denied fatigue or that his temperament had changed, insisting he was fully capable of resuming all of his managerial duties. It was concluded that the performance defects, though subtle, could be serious impediments at this occupational level.
Moreover, lack of appreciation of these deficits plus the great extent to which this man’s life—and sense of dignity and self-worth—were bound up in his work suggested that he would have difficulty in understanding and accepting his condition and adapting to it in a constructive manner. His potential for serious depression seemed high. The patient was seen with his wife for a report of the examination findings with recommendations, and to evaluate his emotional situation in the light of both his wife’s reports and her capacity to understand and support him. With her present, he could no longer deny fatigue since it undermined both his efficiency and his good nature, as evident in her examples of how his efficiency and disposition were better in the morning than later in the day. She welcomed learning
about fatigue as his late-day untypical irritability and cognitive lapses had puzzled her. With his neurologist’s permission, he made practical plans to return to work—for half-days only, and with an “assistant” who would review his actions and decisions. His need for this help became apparent to him after he was shown some of his failures in self-monitoring. At the same time he was given encouraging information regarding his many well-preserved abilities. Judgmental errors were not pointed out: While he could comprehend the concrete evidence of self-monitoring errors, it would require more extensive counseling for a man with an impaired capacity for complex abstractions to grasp the complex and abstract issues involved in evaluating judgments. Moreover, learning that his stroke had rendered him careless and susceptible to fatigue was enough bad news for the patient to hear in one hour; to have given more discouraging information than was practically needed at this time would have been cruel and probably counterproductive. An interesting solution was worked out for the problem of how to get this self-acknowledged workaholic to accept a four-hour work day: If he went to work in the morning, his wife was sure he would soon begin stretching his time limit to five and six or more hours. He therefore agreed to go to work after his morning walk or a golf game and a midday rest period so that, arriving at the office after 1 PM, he was much less likely to exceed his half-day work limit. Ten months after the stroke the patient reported that he was on the job about 60 hours per week and had been told he “was doing excellent work.” He described a mild naming problem and other minor confusions. He also acknowledged some feelings of depression in the evening and a sleep disturbance for which his neurologist began medication.
In many cases the neuropsychological examination can answer questions concerning patients’ capacity for self-care, reliability in following a therapeutic regimen (Galski et al., 2000), not merely the ability to drive a car but to handle traffic emergencies (J.D. Dawson et al., 2010; Marcotte, Rosenthal, et al., 2008; Michels et al., 2010), or appreciation of money and of their financial situation (Cahn, Sullivan, et al., 1998; Marson et al., 2000). With all the data of a comprehensive neuropsychological examination taken together—the patient’s history, background, and present situation; the qualitative observations; and the quantitative scores—the examiner should have a realistic appreciation of how the patient reacts to deficits and can best compensate for them, and whether and how retraining could be profitably undertaken (A.-L. Christensen and Caetano, 1996; Diller, 2000; Sohlberg and Mateer, 2001). The relative sensitivity and precision of neuropsychological measurements make them well-suited for following the course of many neurological diseases and neuropsychiatric conditions (M.F. Green et al., 2004; Heaton, Grant, Butters, et al., 1995; Wild and Kaye, 1998). Neuropsychological assessment plays a key role in monitoring cognitive and neurobehavioral status following a TBI (I.H. Robertson, 2008; E.A. Wilde, Whiteneck, et al., 2010). Data from successive neuropsychological examinations repeated at regular intervals can provide reliable indications of whether the underlying neurological condition is changing, and if so, how rapidly and in what ways (e.g., Salmon, Heindel, and Lange, 1999) as, for instance, monitoring cognitive decline in dementia
patients (Josephs et al., 2011; Tierney et al., 2010), since deterioration on repeated testing can identify a dementing process early in its course (J.C. Morris, McKeel, Storandt, et al., 1991; Paque and Warrington, 1995). Parenté and Anderson (1984) used repeated testing to ascertain whether brain injured candidates for rehabilitation could learn well enough to warrant cognitive retraining. Freides (1985) recommended repeated testing to evaluate performance inconsistencies in patients complaining of attentional deficits. Repeated testing may also be used to measure the effects of surgical procedures, medical treatment, or retraining. A single, 27-year-old, highly skilled logger with no history of psychiatric disturbance underwent surgical removal of a right frontotemporal subdural hematoma resulting from a car accident. Twenty months later his mother brought him, protesting but docile, to the hospital. This alert, oriented, but poorly groomed man complained of voices that came from his teeth, explaining that he received radio waves and could “communicate to their source.” He was emotionally flat with sparse speech and frequent 20- to 30-sec response latencies that occasionally disrupted his train of thought. He denied depression and sleeping or eating disturbances. He also denied delusions or hallucinations, but during an interview pointed out Ichabod Crane’s headless horseman while looking across the hospital lawn. As he became comfortable, he talked more freely and revealed that he was continually troubled by delusional ideation. His mother complained that he was almost completely reclusive, without initiative, and indifferent to his surroundings. He had some concern about being watched, and once she had heard him muttering, “I would like my mind back.” Most of his neuropsychological test scores were below those he had obtained when examined six and a half months after the injury. 
His only scores above average were on two tests of well-learned verbal material: background information and reading vocabulary. He received scores in the low average to borderline defective ranges on oral arithmetic, visuomotor tracking, and all visual reasoning and visuoconstructive—including drawing—tests. Although his verbal learning curve was considerably below average, immediate verbal span and verbal retention were within the average range. Immediate recall of designs was defective. Shortly after he was hospitalized and had completed a scheduled 20-month examination, he was put on trifluoperazine (Stelazine), 15 mg h.s., continuing this treatment for a month while remaining under observation. He was then reexamined. The patient was still poorly groomed, alert, and oriented. His reaction times were well within normal limits. Speech and thinking were unremarkable. While not expressing strong emotions, he smiled, complained, and displayed irritation appropriately. He reported what hallucinating had been like and related the content of some of his hallucinations. He talked about doing physical activities when he returned home but felt he was not yet ready to work. His test scores 21 months after the injury were mostly in the high average to superior ranges. Much of his gain came from faster response times which enabled him to get full credit rather than partial or no credit on timed items he had completed perfectly but slowly the previous month. Although puzzle constructions (both geometric designs and objects) were performed at a high average level, his drawing continued to be of low average quality (but better than at 20 months). All verbal memory tests were performed at average to high average levels; his visual memory test response was without error, gaining him a superior rating. He did simple visuomotor tracking tasks without error and at an average rate of speed; his score on a complex visuomotor tracking task was at the 90th percentile.
In this case, repeated testing provided documentation of both the cognitive repercussions of his psychiatric disturbance and the effects of psychotropic
medication on his cognitive functioning. This case demonstrates the value of repeated testing, particularly when one or another aspect of the patient’s behavior appears to be in flux. Had testing been done only at the time of the second examination, a very distorted impression of the patient’s cognitive status would have been gained. Fortunately, since the patient was in a research project, the first examination data were available to cast doubt on the validity of the second set of tests, performed when he was acutely psychotic, and therefore the third examination was given as well. Brain impaired patients must have factual information about their functioning to understand themselves and to set realistic goals, yet their need for this information is often overlooked. Most people who sustain brain injury or disease experience changes in their self-awareness and emotional functioning; but because they are on the inside, so to speak, they may have difficulty appreciating how their behavior has changed and what about them is still the same (Prigatano and Schacter, 1991, passim). Neurological impairment may diminish a patient’s capacity for empathy (De Sousa et al., 2010), especially when damage occurs in prefrontal regions (Bramham et al., 2009). These misperceptions tend to heighten what mental confusion may already be present as a result of altered patterns of neural activity. Distrust of their experiences, particularly their memory and perceptions, is a problem shared by many brain damaged persons, probably as a result of even very slight disruptions and alterations of the exceedingly complex neural pathways that mediate cognitive and other behavioral functions. This self-distrust seems to reflect feelings of strangeness and confusion accompanying previously familiar habits, thoughts, and sensations that are now experienced differently, and from newly acquired tendencies to make errors (T.L.
Bennett and Raymond, 1997; Lezak, 1978b; see also Skloot, 2003, for a poet’s account of this experience). The self-doubt of the brain injured person, often referred to as perplexity, is usually distinguishable from neurotic self-doubts about life goals, values, principles, and so on, but it can be just as painful and emotionally crippling. Three years after undergoing a left frontal craniotomy for a parasagittal meningioma, a 45-year-old primary school teacher described this problem most tellingly: Perplexity, the not knowing for sure if you’re right, is difficult to cope with. Before my surgery I could repeat conversations verbatim. I knew what was said and who said it. … Since my surgery I don’t have that capability anymore. Not being able to remember for sure what was said makes me feel very insecure.
Careful reporting and explanation of psychological findings can do much
to allay the patient’s anxieties and dispel confusion. The following case exemplifies both patients’ needs for information about their psychological status and how disruptive even mild experiences of perplexity can be. An attractive, unmarried 24-year-old bank teller sustained a concussion in a car accident while on a skiing trip in Europe. She appeared to have improved almost completely, with only a little residual facial numbness. When she came home, she returned to her old job but was unable to perform acceptably although she seemed capable of doing each part of it well. She lost interest in outdoor sports although her coordination and strength were essentially unimpaired. She became socially withdrawn, moody, morose, and dependent. A psychiatrist diagnosed depression, and when her unhappiness was not diminished by counseling or antidepressant drugs, he administered electroshock treatment, which gave only temporary relief. While waiting to begin a second course of shock treatment, she was given a neuropsychological examination at the request of the insurer responsible for awarding monetary compensation for her injuries. This examination demonstrated a small but definite impairment of auditory span, concentration, and mental tracking. The patient reported a pervasive sense of unsureness which she expressed in hesitancy and doubt about almost everything she did. These feelings of doubt had undermined her trust in many previously automatic responses, destroying a lively spontaneity that was once a very appealing feature of her personality. Further, like many postconcussion patients, she had compounded the problem by interpreting her inner uneasiness as symptomatic of “mental illness,” and psychiatric opinion confirmed her fears. Thus, while her cognitive impairment was not an obstacle to rehabilitation, her bewildered experience of it led to disastrous changes in her personal life. 
A clear explanation of her actual limitations and their implications brought immediate relief of anxiety and set the stage for sound counseling.
The concerned family, too, needs to know about their patient’s condition in order to respond appropriately (D.N. Brooks, 1991; Camplair, Butler, and Lezak, 2003; Lezak, 1988a, 1996; Proulx, 1999). Family members need to understand the patient’s new, often puzzling, mental changes and what may be their psychosocial repercussions. Even quite subtle defects in motivation, in abilities to plan, organize, and carry out activities, and in self-monitoring can compromise patients’ capacities to earn a living and thus render them socially dependent. Moreover, many brain impaired patients no longer fit easily into family life as irritability, self-centeredness, impulsivity, or apathy create awesome emotional burdens on family members, generate conflicts between family members and with the patient, and strain family ties, often beyond endurance (Lezak, 1978a, 1986a; L.M. Smith and Godfrey, 1995).

3. Treatment-1: Treatment planning and remediation. Today, much more of the work of neuropsychologists is involved in treatment or research on treatment (Vanderploeg, Collins, et al., 2006). Rehabilitation programs for cognitive impairments and behavioral disorders arising from neuropathological conditions now have access to effective behavioral treatments based on neuropsychological knowledge and tested by neuropsychological techniques (for examples from different countries see: A.-L. Christensen and Uzzell,
2000; Cohadon et al., 2002; Mattioli et al., 2010; and B.[A]. Wilson, Rous, and Sopena, 2008). Of particular neuropsychological importance is the ongoing development of treatment programs for soldiers sustaining brain injuries in the Gulf, Iraq, and Afghanistan wars as well as for those injured from terrorist acts (Helmick, 2010). In the rehabilitation setting, the application of neuropsychological knowledge and neuropsychologically based treatment techniques to individual patients creates additional assessment demands: Sensitive, broad-gauged, and accurate neuropsychological assessment is necessary for determining the most appropriate treatment for each rehabilitation candidate with brain dysfunction (B. Levine, Schweizer, et al., 2011; Raskin and Mateer, 2000; Sloan and Ponsford, 1995; B.[A]. Wilson, 2008). In addressing the behavioral and cognitive aspects of patient behavior, these assessments will include both delineation of problem areas and evaluation of the patient’s strengths and potential for rehabilitation. In programs of any but the shortest duration, repeated assessments will be required to adapt programs and goals to the patient’s changing needs and competencies. Since rehabilitation treatment and care is often shared by professionals from many disciplines and their subspecialties, such as psychiatrists, speech pathologists, rehabilitation counselors, and occupational and physical therapists, a current and centralized appraisal of patients’ neuropsychological status enables these treatment specialists to maintain common goals and understanding of the patient. In addition, it may clarify the problems underlying patients’ failures so that therapists know how patients might improve their performances (e.g., Greenwald and Rothi, 1998; B.[A]. Wilson, 1986). A 30-year-old lawyer, recently graduated in the top 10% of his law school class, sustained a ruptured right anterior communicating artery aneurysm.
Surgical intervention stopped the bleeding but left him with memory impairments that included difficulty in retrieving stored information when searching for it and very poor prospective memory (i.e., remembering to remember some activity originally planned or agreed upon for the future, or remembering to keep track of and use needed tools such as memory aids). Other deficits associable to frontal lobe damage included diminished emotional capacity, empathic ability, self-awareness, spontaneity, drive, and initiative-taking; impaired social judgment and planning ability; and poor self-monitoring. Yet he retained verbal and academic skills and knowledge, good visuospatial and abstract reasoning abilities, appropriate social behaviors, and motor function. Following repeated failed efforts to enter the practice of law, his wife placed him in a recently organized rehabilitation program directed by a therapist whose experience had been almost exclusively with aphasic patients. The program emphasized training to enhance attentional functions and to compensate for memory deficits. This trainee learned how to keep a memory diary and notebook, which could support him through most of his usual activities and responsibilities; and he was appropriately drilled in the necessary memory and note-taking habits. What was overlooked was the overriding problem that it did not occur to him to remember what he needed to remember
when he needed to remember it. (When his car keys were put aside where he could see them with instructions to get them when the examination was completed, at the end of the session he simply left the examining room and did not think of his keys until he was outside the building and I [mdl] asked if he had forgotten something. He then demonstrated a good recall of what he had left behind and where.) One week after the conclusion of this costly eight-week program, while learning the route on a new job delivering to various mail agency offices, he laid his memory book down somewhere and never found it again—nor did he ever prepare another one for himself despite an evident need for it. An inquiry into the rehabilitation program disclosed a lack of appreciation of the nature of frontal lobe damage and the needs and limitations of persons with brain injuries of this kind. The same rehabilitation service provided a virtually identical training program to a 42-year-old civil engineer who had incurred severe attentional and memory deficits as a result of a rear-end collision in which the impact to his car threw his head forcibly back onto the head rest. This man was keenly and painfully aware of his deficits, and he retained strong emotional and motivational capacities, good social and practical judgment, and abilities for planning, initiation, and self-monitoring. He too had excellent verbal and visuospatial knowledge and skills, good reasoning ability, and no motor deficits. For him this program was very beneficial as it gave him the attentional training he needed and enhanced his spontaneously initiated efforts to compensate for his memory deficits. With this training he was able to continue doing work that was similar to what he had done before the accident, only on a relatively simplified level and a slower performance schedule.
4. Treatment-2: Treatment evaluation. With the ever-increasing use of rehabilitation and retraining services must come questions regarding their worth (Kashner et al., 2003; Prigatano and Pliskin, 2003; B.[A]. Wilson, Gracey, et al., 2009). These services tend to be costly, both monetarily and in expenditure of professional time. Consumers and referring clinicians need to ask whether a given service promises more than can be delivered, or whether what is produced in terms of the patient’s behavioral changes has psychological or social value and is maintained long enough to warrant the costs. Here again, neuropsychological assessment can help answer these questions (Sohlberg and Mateer, 2001; Trexler, 2000; Vanderploeg, 1998; see also Ricker, 1998; and B.[A]. Wilson, Evans, and Keohane, 2002, for a discussion of the cost-effectiveness of neuropsychological evaluations of rehabilitation patients). Neuropsychological evaluation can often best demonstrate the neurobehavioral response—both positive and negative—to surgical interventions (e.g., B.D. Bell and Davies, 1998, temporal lobectomy for seizure control; Yoshii et al., 2008, pre- and postsurgical and radiation treatment for brain cancer; Selnes and Gottesman, 2010, coronary artery bypass surgery; McCusker et al., 2007; Vingerhoets, Van Nooten, and Jannes, 1996, open-heart surgery) or to brain stimulation (e.g., Rinehardt et al., 2010; A.E. Williams et al., 2011, to treat Parkinson’s disease; Vallar, Rusconi, and Bernardini, 1996, to improve left visuospatial awareness).
Testing for drug efficacy and side effects also requires neuropsychological data (Meador, Loring, Hulihan, et al., 2003; Wilken et al., 2007). Examples of these kinds of testing programs can be found for medications for many different conditions such as cancer (C.A. Meyers, Scheibel, and Forman, 1991), HIV (human immunodeficiency virus) (Llorente, van Gorp, et al., 2001; Schifitto et al., 2007), seizure control (Wu et al., 2009), attentional deficit disorders (Kurscheidt et al., 2008; Riordan et al., 1999), multiple sclerosis (Fischer, Priore, et al., 2000; S.A. Morrow et al., 2009; Oken, Flegel, et al., 2006), hypertension (Jonas et al., 2001; Saxby et al., 2008), and psychiatric disorders (Kantrowitz et al., 2010), to list a few.

5. Research. Neuropsychological assessment has been used to study the organization of brain activity and its translation into behavior, and to investigate specific brain disorders and behavioral disabilities (this book, passim; see especially Chapters 2, 3, 7, and 8). Research with neuropsychological assessment techniques also involves their development, standardization, and evaluation. Their precision, sensitivity, and reliability make them valuable tools for studying both the large and small—and sometimes quite subtle—behavioral alterations that are the observable manifestations of underlying brain pathology. The practical foundations of clinical neuropsychology are also based to a large measure on neuropsychological research (see Hannay, Bieliauskas, et al., 1998: Houston Conference on Specialty Education and Training in Clinical Neuropsychology, 1998). Many of the tests used in neuropsychological evaluations—such as those for arithmetic or for visual memory and learning—were originally developed for the examination of normal cognitive functioning and recalibrated for neuropsychological use in the course of research on brain dysfunction.
Other assessment techniques—such as certain tests of tactile identification or concept formation—were designed specifically for research on normal brain function. Their subsequent incorporation into clinical use attests to the very lively exchange between research and practice. This exchange works especially well in neuropsychology because clinician and researcher are so often one and the same. Neuropsychological research has also been crucial for understanding normal behavior and brain functions and the association of cognition with the underlying functional architecture of the brain (Mahon and Caramazza, 2009). The following areas of inquiry afford only a partial glimpse into these rapidly expanding knowledge domains. Neuropsychological assessment techniques
provide the data for interpreting brain mapping studies (e.g., Friston, 2009). Cognitive status in normal aging and disease states has been tracked by neuropsychological assessments repeated over the course of years and even decades (e.g., Borghesani et al., 2010; M.E. Murray et al., 2010; Tranel, Benton, and Olson, 1997) as well as staging of dementia progression (O’Bryant et al., 2008). The contributions of demographic characteristics to the expression of mental abilities are often best delineated by neuropsychological findings (e.g., Ardila, Ostrosky-Solis, et al., 2000; Kempler et al., 1998; Vanderploeg, Axelrod, et al., 1997). Increasingly precise analyses of specific cognitive functions have been made possible by neuropsychological assessment techniques (e.g., Dollinger, 1995; Schretlen, Pearlson, et al., 2000; Troyer, Moscovitch, and Winocur, 1997).

6. Forensic neuropsychology. Neuropsychological assessment undertaken for legal proceedings has become quite commonplace in personal injury actions in which monetary compensation is sought for claims of bodily injury and loss of function (Heilbronner and Pliskin, 2003; Sweet, Meyer, et al., 2011). Although the forensic arena may be regarded as requiring some differences in assessment approaches, most questions referred to a neuropsychologist will either ask for a diagnostic opinion (e.g., “Has this person sustained brain damage as a result of … ?”) or a description of the subject’s neuropsychological status (e.g., “Will the behavioral impairment due to the subject’s neuropathological condition keep him from gainful employment? Will treatment help to return her to the workplace?”). Usually the referral for a neuropsychological evaluation will include (or at least imply) both questions (e.g., “Are the subject’s memory complaints due to … , and if so, how debilitating are they?”).
In such cases, the neuropsychologist attempts to determine whether the claimant has sustained brain impairment which is associable to the injury in question. When the claimant is brain impaired, an evaluation of the type and amount of behavioral impairment sustained is intrinsically bound up with the diagnostic process. In such cases the examiner typically estimates the claimant’s rehabilitation potential along with the extent of any need for future care. Not infrequently the request for compensation may hinge on the neuropsychologist’s report. In criminal cases, a neuropsychologist may assess a defendant when there is reason to suspect that brain dysfunction contributed to the misbehavior or when there is a question about mental capacity to stand trial. The case of the murderer of President Kennedy’s alleged assailant remains as probably the
most famous instance in which a psychologist determined that the defendant’s capacity for judgment and self-control was impaired by brain dysfunction (J. Kaplan and Waltz, 1965). Interestingly, the possibility that the defendant, Jack Ruby, had psychomotor epilepsy was first raised by Dr. Roy Schafer’s interpretation of the psychological test findings and subsequently confirmed by electroencephalographic (EEG) studies. At the sentencing stage of a criminal proceeding, the neuropsychologist may also be asked to give an opinion about treatment or potential for rehabilitation of a convicted defendant. Use of neuropsychologists’ examination findings, opinions, and testimony in the legal arena has engendered what, from some perspectives, seems to be a whole new industry dedicated to unearthing malingerers and exaggerators whose poor performances on neuropsychological tests make them appear to be cognitively impaired—or more impaired, in cases in which impairment may be mild. To this end, a multitude of examination techniques and new tests have been devised (Chapter 20). Whether the problem of malingering and symptom exaggeration in neuropsychological examinations is as great as the proliferation of techniques for identifying faked responding would suggest remains unanswered. Certainly, when dealing with forensic issues the examining neuropsychologist must be alert to the possibility that claimants in tort actions or defendants in criminal cases may—deliberately or unwittingly—perform below their optimal level; but the examiner must also remain mindful that for most examinees their dignity is a most prized attribute that is not readily sold.
Moreover, base rates of malingering or symptom exaggeration probably vary with the population under study: TBI patients in a general clinical population would probably have a lower rate than those referred by defense lawyers who have an opportunity to screen claimants—and settle with those who are unequivocally injured—before referring the questionable cases for further study (e.g., Fox et al., 1995; see Stanczak et al., 2000, for a discussion of subject-selection biases in neuropsychological research; Ruffalo, 2003, for a discussion of examiner bias).
The Multipurpose Examination

Usually a neuropsychological examination serves more than one purpose. Even though the examination may be initially undertaken to answer a single question such as a diagnostic issue, the neuropsychologist may uncover vocational or family problems, or patient care needs that have been overlooked, or the patient may prove to be a suitable candidate for research.
Integral to all neuropsychological assessment procedures is an evaluation of the patient’s needs and circumstances from a psychological perspective that considers quality of life, emotional status, and potential for social integration. When new information that has emerged in the course of an examination raises additional questions, the neuropsychologist will enlarge the scope of inquiry to include newly identified issues, as well as those stated in the referral. Should a single examination be required to serve several purposes—diagnosis, patient care, and research—a great deal of data may be collected about the patient and then applied selectively. For example, the examination of patients complaining of short-term memory problems can be conducted to answer various questions. A diagnostic determination of whether short-term memory is impaired may only require finding out if they recall significantly fewer words of a list and numbers of a series than the slowest intact adult performance. To understand how they are affected by such memory dysfunction, it is important to know the number of words they can recall freely and under what conditions, the nature of their errors, their awareness of and reactions to their deficit, and its effect on their day-to-day activities. Research might involve studying immediate memory in conjunction with a host of metabolic, neuroimaging, and electrophysiological measures that can now be performed in conjunction with neuropsychological assessment.

THE VALIDITY OF NEUROPSYCHOLOGICAL ASSESSMENT

A question that has been repeatedly raised about the usefulness of neuropsychological assessment concerns its “ecological” validity. Ecological validity typically refers to how well the neuropsychological assessment data reflect everyday functioning, or predict future behavior or behavioral outcomes.
These questions have been partially answered—almost always affirmatively—in research that has examined relationships between neuropsychological findings and ultimate diagnoses, e.g., the detection of dementia (Salmon and Bondi, 2009), between neuropsychological findings and imaging data (Bigler, 2001b), and between neuropsychological findings and employability (Sbordone and Long, 1996; B.[A]. Wilson, 1993). Most recently very specific studies on the predictive accuracy of neuropsychological data have appeared for a variety of behavioral conditions, many focused on everyday functioning (see Marcotte and I. Grant, 2009). For example, prediction of treatment outcome for substance abuse patients rested significantly on Digit Span Backward and Beck Depression Inventory scores (Teichner et al., 2001). Hanks and colleagues (1999) found that measures of
aspects of executive function (Letter-Number Sequencing, Controlled Oral Word Association Test, Trail Making Test-B, Wisconsin Card Sorting Test) along with story recall (Logical Memory) “were strongly related to measures of functional outcome six months after rehabilitation” (p. 1030) of patients with spinal cord injury, orthopedic disorders, or TBI. HIV+ patients’ employability varied with their performances on tests of memory, cognitive flexibility, and psychomotor speed (van Gorp, Baerwald, et al., 1999) as well as neuropsychological measures of multitasking (J.C. Scott et al., 2011). Test scores that correlated significantly with the functional deficits of multiple sclerosis came from the California Verbal Learning Test-long delay free recall, the Paced Auditory Serial Addition Test, the Symbol Digit Modalities Test, and two recall items from the Rivermead Behavioural Memory Test (Higginson et al., 2000). Several components of the very practical prediction of ability to perform activities of daily living (ADL) have been explored with neuropsychological assessments (A. Baird, Podell, et al., 2001; Cahn-Weiner, Boyle, and Malloy, 2002; van der Zwaluw et al., 2010) as has their accuracy for predicting real-world functional disability in neuropsychiatric disorders and predicting who is ready to drive after neurological injury or illness or at advanced ages (K.A. Ryan et al., 2009; Sommer et al., 2010; Whelihan, DiCarlo, and Paul, 2005). On reviewing several hundred examination protocols of persons referred for neuropsychological assessment, J.E. Meyers, Volbrecht, and Kaster-Bundgaard (1999) reported that discriminant function analysis of these data was 94.4% accurate in identifying competence and noncompetence in driving. Scores on an arithmetic test battery were strongly related to those on an ADL questionnaire (Deloche, Dellatolas, et al., 1996).
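The discriminant function analysis mentioned above can be illustrated in miniature. The sketch below (Python) classifies hypothetical examinees as driving-competent or not by projecting their test scores onto a Fisher-style discriminant axis. All scores, feature names, and groupings are invented for demonstration, and a simplifying diagonal-covariance assumption is made; this is a pedagogical sketch of the general technique, not the published Meyers et al. model or its coefficients.

```python
# Minimal two-group linear discriminant analysis on fabricated test scores.
# Each row is one examinee: (processing-speed score, visual-attention score).
# These numbers and feature labels are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def pooled_var(a, b):
    """Pooled sample variance of two groups (equal-variance assumption)."""
    ma, mb = mean(a), mean(b)
    ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    return ss / (len(a) + len(b) - 2)

competent = [(55, 30), (60, 28), (58, 33), (62, 31)]  # fabricated scores
impaired = [(40, 18), (38, 20), (45, 22), (42, 17)]

# Fisher-style weight per feature: (mean difference) / (pooled variance),
# treating features as uncorrelated (diagonal pooled covariance) for brevity.
n_features = 2
weights = []
for j in range(n_features):
    a = [row[j] for row in competent]
    b = [row[j] for row in impaired]
    weights.append((mean(a) - mean(b)) / pooled_var(a, b))

def project(row):
    """Score an examinee on the discriminant axis."""
    return sum(w * x for w, x in zip(weights, row))

# Decision threshold: midpoint between the two groups' projected means.
threshold = (mean([project(r) for r in competent]) +
             mean([project(r) for r in impaired])) / 2

predictions = [project(r) > threshold for r in competent + impaired]
actual = [True] * len(competent) + [False] * len(impaired)
accuracy = sum(p == a for p, a in zip(predictions, actual)) / len(actual)
```

In practice such weights are estimated from, and cross-validated on, large clinical samples; the 94.4% figure reported by Meyers and colleagues comes from their own protocols, not from any toy computation like this one.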
For geriatric patients, scores from the Hooper Visual Organization Test above all, but also the Boston Naming Test and immediate recall of Logical Memory and Visual Reproduction were predictive of their safety and independence in several activity domains (E.D. Richardson, Nadler, and Malloy, 1995). A comparison of rehabilitation inpatients who fall and those who do not showed that the former made more perseverative errors on the Wisconsin Card Sorting Test and performed more poorly on the Stroop and Visual Form Discrimination tests (Rapport, Hanks, et al., 1998). A variety of neuropsychological assessment techniques have been used for TBI outcome predictions (Sherer et al., 2002). S.R. Ross and his colleagues (1997) report that two tests, the Rey Auditory Verbal Learning Test and the Trail Making Test together and “in conjunction with age significantly predicted psychosocial outcome after TBI as measured by patient report” (p. 168). A
review of studies examining work status after TBI found that a number of tests used for neuropsychological assessment were predictive, especially “measures of executive functions and flexibility” (p. 23); specifically named tests were the Wisconsin Card Sorting Test, a dual—attention and memory—task, the Trail Making Test-B, and the Tinker Toy Test; findings on the predictive success (for work status) of memory tests varied considerably (Crepeau and Scherzer, 1993). Another study of TBI patients’ return to work found that “Neuropsychological test performance is related to important behavior in outpatient brain-injury survivors” (p. 382), and it further noted that “no measures of trauma severity contributed in a useful way to this prediction (of employment/unemployment)” (p. 391) (M.L. Bowman, 1996). T.W. Teasdale and colleagues (1997) also documented the validity of tests—of visuomotor speed and accuracy and complex visual learning given before entry into rehabilitation—as predictors of return to work after rehabilitation. Intact performance on verbal reasoning, speed of processing, and visuoperceptual measures predicted functional outcome one year after the TBI event (Sigurdardottir et al., 2009).

WHAT CAN WE EXPECT OF NEUROPSYCHOLOGICAL ASSESSMENT IN THE 21ST CENTURY?

Neuropsychological Assessment (1976) was the first textbook to include “Neuropsychological” and “Assessment” in its title. The first citable publication with “clinical neuropsychology” in its title was Halgrim Kløve’s 1963 article, followed by the first citable journal article with “neuropsychological assessment” in its title in 1970 by M.L. Schwartz and Dennerll. By early 2011, the National Library of Medicine had listed almost 56,000 articles related to neuropsychological assessment! This number alone represents a powerful acknowledgment of neuropsychological assessment’s importance for understanding brain function, cognition, and behavior.
In the first chapter of the last two editions of Neuropsychological Assessment predictions were made about the future of neuropsychology. Historically, neuropsychologists focused on adapting existing psychological assessment tests and techniques for use with neurological and neuropsychiatric patients while developing new measures to assess the specific cognitive functions and behavioral dysfunctions identified in neuropsychological research. In 2004 it was predicted that with their increased efficiency and capacity, assessments by computers—already a busy enterprise—would continue to proliferate. Computerized assessments have not become the major
avenue for neuropsychological evaluations, but we believe we can safely predict that the proportion of assessments using computerized programs—for administration, scoring, and data storage, compilation, and analysis—will continue its rapid growth. However, whether computerization will take over most of the work done by clinical neuropsychologists today is both doubtful and—for a humanistic profession such as ours—undesirable. What is new is the variety of computer-based assessment programs now available (e.g., Wild, Howieson, et al., 2008). One type of especial interest is computerized virtual reality assessment programs with “real-world” characteristics; e.g., learning a path through a realistic-looking park (Weniger et al., 2011). Furthermore, some animal-based cognitive tasks like the water maze can be adapted with computer and virtual reality technology such that the wealth of data and hypotheses from animal research can be extrapolated to human studies (Goodrich-Hunsaker et al., 2010). Paper-and-pencil measures cannot make this anthropomorphic jump but the computer can. Computer-based assessment methods also permit neuropsychology to extend into rural settings via telemedicine in which a neuropsychologist can evaluate the patient from a distance (Cullum, Weiner, et al., 2006). All of these developments portend that future editions of Neuropsychological Assessment will include more information about computer-based assessment methods. All that said, the big revolution to come in neuropsychological assessment will likely be multifaceted, dependent in part on the emergence of what has been termed neuroinformatics (Jagaroo, 2010) and also on the confluence of three factors: (1) cognitive ontologies, (2) collaborative neuropsychological knowledge bases, and (3) universally available and standardized assessment methods, largely based on computerized assessments (Bilder, 2011).
Bilder emphasizes the importance of traditional broad-based clinical and neuroscience training in neuropsychology. Additionally, he believes that the advantage of using computer-based assessment methods linked with informatics technology will be such that technology-based assessment techniques will not only be able to establish their own psychometric soundness but also make “… more subtle task manipulations and trial-by-trial analyses, which can be more sensitive and specific to individual differences in neural system function” (p. 12). He envisions computer technology assisting in establishing Web-based data repositories with much larger sample sizes than what exist for conventional neuropsychological methods. With larger and more diverse sample sizes, more customized approaches to neuropsychological assessment may be possible. Neuropsychological assessment techniques need to be adaptive and
integrated with other neurodiagnostic and assessment methods, so that neuropsychology maintains its unique role while continuing to contribute to the larger clinical neuroscience, psychological, and medical knowledge base. Neuroimaging methods of analysis have become automated. What used to take days to weeks of painstaking tracing of images can now, with the proper computer technology, be done in a matter of minutes to hours (Bigler, Abildskov, et al., 2010). Algorithms are now being developed integrating neuropsychological data with structural and functional neuroimaging so that the relevance of a particular lesion or abnormality to a neuropsychological finding may be more readily elucidated (Voineskos et al., 2011; Wilde, Newsome, et al., 2011). Moreover, tests used for neuropsychological assessments are being adapted for administration during functional neuroimaging (M.D. Allen and Fong, 2008a,b) such that, on completion of a combined neuroimaging and neuropsychological assessment session, not only will neuropsychologists have psychometric data on cognitive performance but they will also be able to visualize brain activation patterns related to specific tests and have a detailed comparison of the brain morphometry of this patient with a large normative sample. One measure of the degree to which neuropsychology has become an accepted and valued partner in both clinical and research enterprises is its dispersion to cultures other than Western European, and its applications to language groups other than those for which tests were originally developed. With all the very new digital and social network communication possibilities of the 21st century, neuropsychology is facing important challenges for both greater cross-cultural sensitivity (Gasquoine, 2009; Pedraza and Mungas, 2008; Shepard and Leathem, 1999) and more language-appropriate tests (see Chapter 6, pp. 144–145).
Increased demands for neuropsychological assessment of persons with limited or no English language background have been the impetus for developing tests in other languages that have been standardized on persons in those culture and language groups; use of interpreters is only a second-best partial solution (Artioli i Fortuny and Mullaney, 1998; LaCalle, 1987; see pp. 143–144). In the United States and Mexico, test developers and translators have begun to respond to the need for Spanish language tests with appropriate standardization (e.g., Ardila, 2000b; Cherner et al., 2008; Ponton and Leon-Carrion, 2001). Studies providing norms and analyses of tests in Chinese reflect the increasing application of neuropsychological assessment in the Far East (A.S. Chan and Poon, 1999; Hua, Chang, and Chen, 1997; L. Lu and Bigler, 2000). HIV, a problem for all countries and language groups, offers an example of
the worldwide need for neuropsychological assessment and generally accepted and adequately normed tests (Maruta et al., 2011). A common, universally agreed upon cognitive assessment strategy is important for understanding HIV-related cognitive and neurobehavioral impairments, outlining treatments and assessing their effectiveness, as well as for tracking disease progression (K. Robertson, Liner, and Heaton, 2009). The development of internationally accepted neuropsychological measures for HIV patients is underway (Joska et al., 2011). Ideally such research-based tests will be developed with interdisciplinary input to tailor the assessment task to the needs of particular groups of individuals and/or conditions (H.A. Bender et al., 2010). While real progress has been made over the last few decades in understanding cognitive and other neuropsychological processes and how to assess them, further knowledge is needed for tests and testing procedures to be sufficiently organized and standardized that assessments may be reliably reproducible, practically valid, and readily comprehensible. Yet the range of disorders and disease processes, the variations and overlaps in their presentations across individuals, and their pharmacologic and other treatment effects make it unlikely that any “one size fits all” battery can be developed or should even be contemplated. Today’s depth and breadth of neuropathological and psychological knowledge, coupled with increasingly sensitive statistical techniques for test evaluation and the advent of computer-based assessments, should—together—lead to improvements in tasks, procedures, possibilities, and effectiveness of neuropsychological assessment.
One means of achieving such a goal while retaining the flexibility appropriate for the great variety of persons and problems dealt with in neuropsychological assessment could be a series of relatively short fixed batteries designed for use with particular disorders and diseases and specific deficit clusters (e.g., visuomotor dysfunction, short-term memory disorders). Neuropsychologists in the future would then have at their disposal a set of test modules and perhaps structured interviews (each containing several tests) that can be upgraded as knowledge increases and that can be applied in various combinations to answer particular questions and meet specific patients’ needs.
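The modular scheme described above lends itself to a simple illustration. The following sketch is purely hypothetical—the module names, test names, and referral categories are inventions for demonstration, not an actual published battery—but it shows the basic logic of combining upgradable test modules to fit a particular referral question:

```python
# Illustrative sketch of the modular-battery idea: named test "modules"
# (each a small set of tests) combined to address specific referral
# questions. All module and test names here are hypothetical examples.

MODULES = {
    "visuomotor": ["figure_copy", "symbol_coding"],
    "short_term_memory": ["digit_span", "word_list_recall"],
    "language": ["naming", "verbal_fluency"],
}

def assemble_battery(referral_questions):
    """Combine the modules relevant to a referral into one ordered test list."""
    battery = []
    for question in referral_questions:
        for test in MODULES.get(question, []):
            if test not in battery:  # avoid administering any test twice
                battery.append(test)
    return battery

print(assemble_battery(["short_term_memory", "visuomotor"]))
# -> ['digit_span', 'word_list_recall', 'figure_copy', 'symbol_coding']
```

Because each module is an independent entry, a module could be upgraded or replaced as knowledge increases without disturbing the rest of the set—the property the authors envision for future assessment practice.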
2 Basic Concepts

If our brains were so simple that we could understand them, we would be so simple that we could not.
Anonymous
EXAMINING THE BRAIN

Historically, the clinical approach to the study of brain functions involved the neurological examination, which includes study of the brain’s chief product—behavior. The neurologist examines the strength, efficiency, reactivity, and appropriateness of the patient’s responses to commands, questions, discrete stimulation of particular neural subsystems, and challenges to specific muscle groups and motor patterns. The neurologist also examines body structures, looking for evidence of brain dysfunction such as swelling of the retina or atrophied muscles. In the neurological examination of behavior, the clinician reviews behavior patterns generated by neuroanatomical subsystems, measuring patients’ responses in relatively coarse gradations, and taking note of important responses that might be missing. The mental status portion of the neurological exam is specifically focused on “higher” behavioral functions such as language, memory, attention, and praxis. Neuropsychological assessment is another method of examining the brain by studying its behavioral product, but in far more detail than what is covered in the mental status portion of a neurological exam. Being focused on behavior, neuropsychological assessment shares a kinship with psychological assessment: it relies on many of the same techniques, assumptions, and theories, along with many of the same tests. Similar to psychological assessment, neuropsychological assessment involves the intensive study of behavior by means of interviews and standardized tests and questionnaires that provide precise and sensitive indices of neuropsychological functioning. Neuropsychological assessment is, in short, a means of measuring in a quantitative, standardized fashion the most complex aspects of human behavior—attention, perception, memory, speech and language, building and drawing, reasoning, problem solving, judgment, planning, and emotional processing.
The distinctive character of neuropsychological assessment lies in a conceptual frame of reference that takes brain function as its point of
departure. In a broad sense, a behavioral study can be considered “neuropsychological” so long as the questions that prompted it, the central issues, the findings, or the inferences drawn from the findings, ultimately relate to brain function. And as in neurology, neuropsychological findings are interpreted within the clinical context of the patient’s presentation and in the context of pertinent historical, psychosocial, and diagnostic information (see Chapter 5).
Laboratory Techniques for Assessing Brain Function

Some of the earliest instruments for studying brain function that remain in use are electrophysiological (e.g., see Daube, 2002, passim). These include electroencephalography (EEG), evoked and event-related potentials (EP, ERP), and electrodermal activity. EEG frequency and patterns not only are affected by many brain diseases but also have been used to study aspects of normal cognition; e.g., frequency rates have been associated with attentional activity for decades (Boutros et al., 2008; Oken and Chiappa, 1985). EEG is especially useful in diagnosing seizure disorders and sleep disturbances, and for monitoring depth of anesthesia. Both EP and ERPs can identify hemispheric specialization (R.J. Davidson, 1998, 2004; Papanicolaou, Moore, Deutsch, et al., 1988) and assess processing speed and efficiency (J.J. Allen, 2002; Picton et al., 2000; Zappoli, 1988). Magnetoencephalography (MEG), the cousin of EEG that records magnetic rather than electrical fields, has also been used to examine brain functions in patients and healthy volunteers alike (Reite, Teale, and Rojas, 1999). As MEG can have a higher resolution than EEG, it can more precisely identify the source of epileptic discharges in patients with a seizure disorder. Because MEG is expensive, its cost is often prohibitive, especially for clinical applications; to date, the technique has not entered into regular clinical usage. EEG and MEG are both distinguished by their capacity to provide very high-fidelity measurements of the temporal aspects of neural activity, but neither technique has very good spatial resolution. MEG and EEG produce prodigious data sets from which investigators, using sophisticated quantitative methods, have developed applications such as “brain mapping” (F.H. Duffy, Iyer, and Surwillo, 1989; Nuwer, 1989).
Whether this is a valid clinical approach to be used in the routine assessment of neurological patients, however, has remained controversial, especially given that both techniques are fraught with thorny problems regarding source localization—i.e., it is very
difficult to know the exact neural source of the signals produced by these techniques, especially if the signals originate in deeper brain structures. Electrodermal activity (measured as skin conductance response [SCR]) reflects autonomic nervous system functioning and provides a sensitive and very robust measure of emotional responses and feelings (Bauer, 1998; H.D. Critchley, 2002; Zahn and Mirsky, 1999). Electrodermal activity and other autonomic measures such as heart rate, respiration, and pupil dilation have also been used to demonstrate various nonconscious forms of brain processing (J.S. Feinstein and Tranel, 2009; Tranel, 2000). For example, when patients with prosopagnosia (who cannot recognize familiar faces at a conscious level, see p. 444) were shown pictures of family members and other familiar individuals, they said they did not recognize the faces; however, these patients showed a robust SCR—a nonconscious recognition response (Tranel and Damasio, 1988). In another example, a patient with severe inability to acquire new information (anterograde amnesia, see p. 29) had large SCRs to a neutral stimulus that had previously been paired with a loud aversive tone during a fear conditioning paradigm, despite having no recollection of the learning situation (Bechara, Tranel, et al., 1995). In yet another experiment, a patient with one of the most severe amnesias ever recorded produced large, discriminatory SCRs to persons who had been systematically paired with either positive or negative affective valence, despite having no conscious, declarative knowledge of the persons (Tranel and Damasio, 1993). Other methods that enable visualization of ongoing brain activity are collectively known as “functional brain imaging” (for a detailed review of contemporary neuroimaging technology see Neuroimaging Primer, Appendix A, pp. 863–871). 
These techniques have proven useful for exploring both normal brain functioning and the nature of specific brain disorders (Huettel et al., 2004; Pincus and Tucker, 2003, passim; P. Zimmerman and Leclercq, 2002). One of the older functional brain imaging techniques, regional cerebral blood flow (rCBF), reflects the brain’s metabolic activity indirectly, via the changes that activity produces in the magnitude of blood flow in different brain regions. rCBF provides a relatively inexpensive means for visualizing and recording brain function (D.J. Brooks, 2001; Deutsch, Bourbon, et al., 1988). Since the mid-1970s, neuroimaging has become a critical part of the diagnostic workup for most patients with known or suspected neurological disease. Computerized tomography (CT) and magnetic resonance imaging (MRI) techniques reconstruct different densities and constituents of internal brain structures into clinically useful three-dimensional pictures of the intracranial anatomy (Beauchamp and Bryan, 1997; R.O. Hopkins, Abildskov,
et al., 1997; Hurley, Fisher, and Taber, 2008). Higher magnet strengths for MRI, e.g., 3 Tesla (the current standard; Scheid et al., 2007) or 7 Tesla (not yet approved for routine clinical use with human participants; Biessels et al., 2010), have allowed even more fine-grained visualization of neural structure. A number of advanced techniques have evolved from MRI (e.g., diffusion weighted imaging; perfusion imaging), giving the clinician an unprecedented degree of detailed information regarding neural constituents. The timing of these procedures is a major factor in their usefulness, not only as to what kinds of information will be visualized but also in the choice of specific diagnostic tools. A CT might be best suited for acute head injury when skull fracture and/or bleeding are suspected, whereas MRI (with diffusion tensor imaging [DTI]) might be the study of choice in the chronic stages of head injury, when the clinician is especially concerned about white matter integrity. Positron emission tomography (PET) visualizes brain metabolism directly as glucose radioisotopes emit decay signals, their quantity indicating the level of brain activity in a given area (Hurley, Fisher, and Taber, 2008). PET not only contributes valuable information about the functioning of diseased brains but has also become an important tool for understanding normal brain activity (Aguirre, 2003; M.S. George et al., 2000; Rugg, 2002). Single photon emission computed tomography (SPECT) is similar to PET but less expensive and involves a contrast agent that is readily available. Comparison of interictal and ictal SPECT scans (i.e., between and during seizures) in epilepsy surgery candidates has been valuable for identifying the site of seizure onset (So, 2000). 
In experimental applications, procedures such as PET and SPECT typically compare data obtained during an activation task of interest (e.g., stimulus identification) to data from a resting or other “baseline” state, to isolate the blood flow correlates of the behavioral function of interest. These procedures have limitations. For example, PET applications are limited by their dependence on radioisotopes that have a short half-life and must be generated in a nearby cyclotron (Hurley, Fisher, and Taber, 2008). Cost and accessibility are other factors—these procedures have been expensive and available mainly at large medical centers. This has changed in recent years, however, and PET and especially SPECT are now fairly widely available, not prohibitively expensive, and increasingly covered by insurance plans. One important clinical application for PET is in the diagnosis of neurodegenerative diseases. For example, many neurodegenerative diseases, including Alzheimer’s disease and frontotemporal dementia, produce brain alterations that are detectable with PET even when structural neuroimaging (CT or MRI) fails to show specific abnormalities (D.H.S. Silverman, 2004). The diagnostic
accuracy of PET in dementia has been convincingly demonstrated: PET, and in particular the 18F-FDG PET procedure (which involves a resting study), can reveal clear patterns of abnormality that aid in the diagnosis of dementia and in the differential diagnosis of various neurodegenerative diseases (D.H.S. Silverman, 2004). 18F-FDG PET may be especially informative in the early, milder phases of the disease when diagnostic certainty based on the usual procedures (including neuropsychological assessment) tends to be more equivocal. Functional magnetic resonance imaging (fMRI) is a technique that capitalizes on the fact that increasing neuronal activity requires more oxygen; the amount of oxygen delivered by blood flow (or the blood volume; see Sirotin et al., 2009) actually tends to exceed demand, creating a change in the ratio of oxygenated to deoxygenated blood that is known as the BOLD (blood oxygen level dependent) signal, which can be precisely and accurately measured and quantified. This signal is highly localizable (normally by mapping the BOLD response onto a structural MRI) at an individual subject level, giving fMRI a remarkably high degree of spatial resolution which permits visualization of brain areas that are “activated” during various cognitive tasks. The popularity of fMRI as a means of studying brain-behavior relationships exploded during the late 1990s and throughout the 2000s, not only because of its superior spatial resolution but also in large measure because fMRI is widely available, noninvasive, and does not require a “medical” context for its application. Thus fMRI is a popular method for investigating all manner of psychological processes such as time perception (S.M. Rao, Mayer, and Harrington, 2001), semantic processing (Bookheimer, 2002), emotional processing (M.S. George et al., 2000; R.C.
Gur, Schroder, et al., 2002), response inhibition (Durston et al., 2002), face recognition (Joseph and Gathers, 2002), somatosensory processing (Meador, Allison, Loring, et al., 2002), sexual arousal (Arnow et al., 2002), and many, many others. Perhaps more so than the other techniques discussed, fMRI has been, and will continue to be, closely involved with neuropsychology as well as with cognitive neuroscience in general, in part due to its widespread use. fMRI is not without controversy, though: the technique has suffered from being used and abused by investigators whose knowledge of the brain and of historical brain-behavior relationship studies is woefully inadequate (for critical discussions and examples, see Coltheart, 2006; Fellows et al., 2005; Logothetis, 2008). Even the nature of the basic signal that is measured with fMRI continues to be debated (Logothetis and Wandell, 2004; Sirotin et al.,
2009). As neuropsychology evolves through the 2010s, it will be interesting to see whether and how fMRI settles into a reliable constituent slot in the armamentarium of techniques for studying and measuring brain functions and brain–behavior relationships. The need to identify cerebral language and memory dominance in neurosurgery candidates led to the development of techniques such as the Wada test (intracarotid injection of amobarbital for temporary pharmacological inactivation of one side of the brain) and electrical cortical stimulation mapping (Loring, Meador, Lee, and King, 1992; Ojemann, Cawthon, and Lettich, 1990; Penfield and Rasmussen, 1950). Not only have these procedures significantly reduced cognitive morbidity following epilepsy surgery, but they have also greatly enhanced our knowledge of brain-behavior relationships. Atypical language representation, for example, alters the expected pattern of neuropsychological findings, even in the absence of major cerebral pathology (S.L. Griffin and Tranel, 2007; Loring, Strauss, et al., 1999). These procedures have limitations in that they are invasive and afford only a limited range of assessable behavior due to the restrictions on patient response in an operating theater and the short duration of medication effects (Thierry, 2008). Generalizability of data obtained by these techniques is further restricted by the fact that patients undergoing such techniques typically have diseased or damaged brains (e.g., a seizure disorder) which could have prompted reorganization of function (S.L. Griffin and Tranel, 2007). Many of the same questions addressed by the Wada test and cortical stimulation mapping in patients may be answered in studies of healthy volunteers using such techniques as transcranial magnetic stimulation (L.C.
Robertson and Rafal, 2000), functional transcranial Doppler (Knecht et al., 2000), magnetoencephalography/magnetic source imaging (Papanicolaou et al., 2001; Simos, Castillo, et al., 2001), and fMRI (J.R. Binder, Swanson, et al., 1996; J.E. Desmond, Sum, et al., 1995; Jokeit et al., 2001). These techniques, which are less invasive than the Wada test and cortical stimulation mapping, have had increasing use in recent years, although they have yet to supplant the time-tested Wada as a reliable means of localizing language function presurgically.

NEUROPSYCHOLOGY’S CONCEPTUAL EVOLUTION

Neuropsychology’s historical roots go deep into the past; Darby and Walsh (2005) begin their condensed history of neuropsychology with a 1700 BCE papyrus describing eight cases of traumatic head injury. Other writers have
traced this history in greater detail (e.g., Finger, 1994; N.J. Wade and Brozek, 2001). Some dwelt on more recent (mostly 19th and early 20th century) and specific foundation-laying events (e.g., Benton, 2000; Benton [collected papers in L. Costa and Spreen, 1985]; Finger, 2000). As befits a text on neuropsychological assessment, this brief historical review begins in the 20th century, when neuropsychology began providing tools and expertise for clinical assessments in psychology, psychiatry, and the neurosciences. Throughout the 1930s and 40s and well into the 50s, the determination of whether a patient had “brain damage” was often the reason for consultation with a psychologist (at that time the term “neuropsychologist” did not exist). During these years, most clinicians treated “brain damage” or brain dysfunction as if it were a unitary phenomenon—often summed up under the term “organicity.” It was well recognized that behavioral disorders resulted from many different brain conditions, and that damage to different brain sites caused different effects (Babcock, 1930; Klebanoff, 1945). It was also well established that certain specific brain-behavior correlates, such as the role of the left hemisphere in language functions, appeared with predictable regularity. Yet much of the work with “brain damaged” patients continued to be based on the assumption that “organicity” was characterized by one central and therefore universal behavioral defect (K. Goldstein, 1939; Yates, 1954). Even so thoughtful an observer as Teuber could say in 1948 that “Multiple-factor hypotheses are not necessarily preferable to an equally tentative, heuristic formulation of a general factor—the assumption of a fundamental disturbance … which appears with different specifications in each cerebral region” (pp. 45–46).
The early formulations of brain damage as a unitary condition that is either present or absent were reflected in the proliferation of single function tests of “organicity” that were evaluated, in turn, solely in terms of how well they distinguished “organic” from psychiatric patients or normal, healthy persons (e.g., Klebanoff, 1945; Spreen and Benton, 1965; Yates, 1954). The “fundamental disturbance” of brain damage, however, turned out to be exasperatingly elusive. Despite many ingenious efforts to devise a test or examination technique that would be sensitive to organicity per se—a neuropsychological litmus paper, so to speak—no one behavioral phenomenon could be found that was shared by all brain injured persons but by no one else. In neuropsychology’s next evolutionary stage, “brain damage” was no longer treated as a unitary phenomenon, but identification of its presence (or not) continued to be a primary goal of assessment. With increasing
appreciation of the behavioral correlates of discrete lesions, the search for brain damage began to focus on finding sets of tests of different functions that, when their scores were combined, would make the desired discriminations between psychiatric, “organic,” and normal subjects. The Hunt-Minnesota Test for Organic Brain Damage (H.F. Hunt, 1943), for example, included the 1937 Stanford-Binet Vocabulary Test and six tests of learning and retention in auditory and visual modalities, considered to be “sensitive to brain deterioration.” It had the advantage that identification of brain damaged persons could be accomplished in 15 minutes! Halstead’s (1947) “Impairment Index,” based on a combined score derived from a battery generating ten scores from seven tests of more or less discrete functions and requiring a much lengthier examination, also reflects the search for “brain damage” (see also p. 118). Another landmark pioneer who led neuropsychology’s evolution in the mid-part of the 20th century was Alexander Luria (e.g., 1964; A.-L. Christensen, Goldberg, and Bougakov, 2009; Tranel, 2007). For Luria, use of symptoms made evident by neuropsychological assessment to infer “local” brain dysfunction was the essence of neuropsychology. Luria’s focus was on qualitative analysis: he stressed the value of careful qualitative neuropsychological analysis of cognitive and behavioral symptoms, but he also included some psychometric instruments in his examinations. Luria emphasized the importance of breaking down complex mental and behavioral functions into component parts. Historical impetus for this came from an attempt to reconcile the long-running feud between “localizationists”—aware of specialized brain areas—and the one-diagnosis-fits-all “antilocalizationists.” Luria noted that apparent contradictions between these two camps grew out of the oversimplified nature of the analyses.
He pointed out that higher mental functions represent complex functional systems based on jointly working zones of the brain cortex, and he emphasized the importance of dissecting the structure of functions and the physiological mechanisms behind those functions. Luria’s point seems patently obvious to us now—but that it took so long to enter the mainstream of neuropsychology is a lesson that cannot be ignored in neuropsychology and cognitive neuroscience. Like the concept “sick,” the concept “brain damage” (or “organicity” or “organic impairment”—the terms varied from author to author but the meaning was essentially the same) has no etiological or pathological implications, nor can predictions or prescriptions be based on such a diagnostic conclusion. Still, “brain damage” as a measurable condition remains a vigorous concept, reflected in the many test and battery indices,
ratios, and quotients that purport to represent some quantity or relative degree of neurobehavioral impairment. Advances in diagnostic medicine, with the exception of certain cases with mild or questionable cognitive impairment, have changed the educated referral question to the neuropsychologist from simply whether (or not) the patient has a brain disorder, to inquiry into the patient’s cognitive strengths and deficits, emotionality, and capacity to function in the real world. In most cases, the presence of “brain damage” has been clinically established and often verified radiologically before the patient even gets to the neuropsychologist. However, the site and extent of a lesion or the diagnosis of a neurobehavioral disease are not in themselves necessarily predictive of the cognitive and behavioral repercussions of the known condition, as they vary with the nature, extent, location, and duration of the lesion; with the age, sex, physical condition, and psychosocial background and status of the patient; and with individual neuroanatomical and physiological differences (see Chapters 3, 7, and 8). Not only does the pattern of neuropsychological deficits differ with different lesion characteristics and locations, but two persons with similar pathology and lesion sites may have distinctly different neuropsychological presentations (De Bleser, 1988; Howard, 1997; Luria, 1970), and patients with damage at different sites may present similar deficits (Naeser, Palumbo, et al., 1989). These seemingly anomalous observations make sense when considering that, in different brains, different cognitive functions may rely on the same or similar circuits and, in turn, the same functions may be organized in different circuits. Needless to say, human behavior—especially when suffering specific kinds of impairments—is enormously complex: that is an inescapable truth of clinical neuropsychology. 
Thus, although “brain damage” may be useful as a general concept that includes a broad range of behavioral disorders, when dealing with individual patients the concept of brain damage only becomes meaningful in terms of specific behavioral dysfunctions and their implications regarding underlying brain pathology and real-world functioning. The neuropsychological assessment helps to determine what are the (practical, social, treatment, rehabilitation, predictable, legal and, for some conditions, diagnostic) ramifications of the known brain injury or evident brain disorder.

CONCERNING TERMINOLOGY

The experience of wading through the older neuropsychological literature shares some characteristics with exploring an archaeological dig into a long-inhabited site. Much as the archaeologist finds artifacts that are both similar
and different, evolving and discarded; so a reader can find, scattered through the decades, descriptions of various neuropsychological disorders in terms (usually names of syndromes or behavioral anomalies) no longer in use and forgotten by most, terms that have evolved from one meaning to another, and terms that have retained their identity and currency pretty much as when first coined. Thus, many earlier terms for specific neuropsychological phenomena have neither been supplanted nor fallen into disuse, so that even now one can find two or more expressions for the same or similar observations. This rich terminological heritage can be very confusing (see, for example, Lishman’s [1997] discussion of the terminological confusion surrounding “confusion,” and other common terms that are variously used to refer to mental states, to well-defined diagnostic entities, or to specific instances of abnormal behavior). In this book we have made an effort to use only terms that are currently widely accepted. Some still popular but poorly defined terms from the classical terminology have been replaced by simpler and more apt substitutes. For example, we distinguish those constructional disorders that have been called “constructional apraxia” from the neuropsychologically meaningful concept of praxis (and its disorder, apraxia), which “in the strict sense, refers to the motor integration used to execute complex learned movements” (Strub and Black, 2000). Thus, we reserve the term “apraxia” for dysfunctions due to a breakdown in the direction or execution of complex motor acts; “constructional defects” or “constructional impairment” refers to disorders which, typically, involve problems of spatial comprehension or expression but not motor control.
Moreover, the term “apraxia” has problems of its own, as different investigators define and use such terms as “ideational apraxia” and “ideokinetic apraxia” in confusingly different ways (compare, for example, Hecaen and Albert, 1978; Heilman and Rothi, 2011; M. Williams, 1979). Rather than attempt to reconcile the many disparities in the use of these terms and their definitions, we call these disturbances simply “apraxias” (see also Hanna-Pladdy and Rothi, 2001). We use current and well-accepted terms but will also present, when relevant, a term’s history.

DIMENSIONS OF BEHAVIOR

Behavior may be conceptualized in terms of three functional systems: (1) cognition, which is the information-handling aspect of behavior; (2) emotionality, which concerns feelings and motivation; and (3) executive functions, which have to do with how behavior is expressed. Components of each of these three sets of functions are as integral to every bit of behavior as
are length and breadth and height to the shape of any object. Moreover, like the dimensions of space, each of these components can be conceptualized and treated separately even though they are intimately interconnected in complex behavior. The early Greek philosophers were the first to conceive of a tripartite division of behavior, postulating that different principles of the “soul” governed the rational, appetitive, and animating aspects of behavior. Present-day research in the behavioral sciences tends to support the philosophers’ intuitive insights into how the totality of behavior is organized. These classical and scientifically meaningful functional systems lend themselves well to the practical observation, measurement, and description of behavior and constitute a valid and transparent heuristic for organizing behavioral data generally. In neuropsychology, the “cognitive” functions have received more attention than the emotional and control (executive) systems. This probably stems from observations that the cognitive defects of brain injured patients tend to be prominent symptoms. Cognitive functions are also more readily conceptualized, measured, and correlated with neuroanatomically identifiable systems. A less appreciated fact is that the structured nature of most medical and neuropsychological examinations does not provide much opportunity for subtle emotional and control deficits to become evident. For neuropsychological examinations, this is a significant limitation that can lead to erroneous conclusions and interpretations of data. The examination of persons with known or suspected brain disorders should, as much as possible, incorporate opportunities for patients to exhibit emotional and executive behaviors and/or their deficiencies. 
This recommendation must be heeded, as brain damage rarely affects just one of the three behavioral systems: the disruptive effects of most brain lesions, regardless of their size or location, usually involve all three systems (Lezak, 1994; Prigatano, 2009). For example, Korsakoff’s psychosis, a condition most commonly associated with severe chronic alcoholism, has typically been described with an emphasis on cognitive dysfunction and, in particular, the profound learning and memory impairment that is a hallmark of this condition. Yet chronic Korsakoff patients also exhibit radical changes in affect and in executive functions that may be more crippling and more representative of the psychological devastations of this disease than the memory impairments. These patients tend to be emotionally flat, to lack the impulse to initiate activity and, if given a goal requiring more than an immediate one- or two-step response, they are unable to organize, set into motion, and carry through a plan of action to reach it. Everyday frustrations, sad events, or worrisome problems, when brought to their attention, will arouse a somewhat appropriate affective response, as will a pleasant happening or a treat; but the arousal is transitory, subsiding with a change in topic or distraction such as someone entering the room. When not stimulated from outside or by physiological urges, these responsive, comprehending, often well-spoken and well-mannered patients sit quite comfortably doing nothing, not even attending to a TV or nearby conversation. When they have the urge to move,
they walk about aimlessly.

The behavioral defects characteristic of many patients with right hemisphere damage also reflect the involvement of all three behavioral systems. It is well known that these patients are especially likely to show impairments in such cognitive activities as spatial organization, integration of visual and spatial stimuli, and comprehension and manipulation of percepts that do not readily lend themselves to verbal analysis. Right hemisphere damaged patients may also experience characteristic emotional dysfunctions such as an indifference reaction (ignoring, playing down, or being unaware of mental and physical disabilities and situational problems), uncalled-for optimism or even euphoria, inappropriate emotional responses and insensitivity to the feelings of others, and loss of the self-perspective needed for accurate self-criticism, appreciation of limitations, or making constructive changes in behavior or attitudes. Furthermore, despite strong, well-expressed motivations and demonstrated knowledgeability and capability, impairments in the capacity to plan and organize complex activities and thinking immobilize many right hemisphere damaged patients.
Behavior problems may also become more acute and the symptom picture more complex as secondary reactions to the specific problems created by the brain injury further involve each system. Additional repercussions and reactions may then occur as the patient attempts to cope with succeeding sets of reactions and the problems they bring (Gainotti, 2010). The following case of a man who sustained a relatively minor brain injury demonstrates some typical interactions between impairments in different behavioral systems. A middle-aged clerk, the father of teenage children, incurred a left-sided head injury in a car accident and was unconscious for several days. When examined three months after the accident, his principal complaint was fatigue. His scores on cognitive tests were consistently high average (between the 75th and 90th percentiles). The only cognitive difficulty demonstrated in the neuropsychological examination was a slight impairment of verbal fluency exhibited by a few word-use errors on a sentence-building task. This verbal fluency problem did not seem grave, but it had serious implications for the patient’s adjustment. Because he could no longer produce fluent speech automatically, the patient had to exercise constant vigilance and conscious effort to talk as well as he did. This effort was a continuous drain on his energy so that he fatigued easily. Verbal fluency tended to deteriorate when he grew tired, giving rise to a vicious cycle in which he put out more effort when he was tired, further sapping his energy at the times he needed it the most. He felt worn out and became discouraged, irritable, and depressed. Emotional control too was no longer as automatic or effective as before the accident, and it was poorest when he was tired. He “blew up” frequently with little provocation. His children did not hide their annoyance with their grouchy, sullen father, and his wife became protective and overly solicitous. 
The patient perceived his family’s behavior as further proof of his inadequacy and hopelessness. His depression deepened, he became more self-conscious about his speech, and the fluency problem frequently worsened.
COGNITIVE FUNCTIONS

Cognitive abilities (and disabilities) are functional properties of the individual that are not directly observed but instead are inferred from … behavior… . All behavior (including neuropsychological test performances) is multiply determined: a patient’s failure on a test of abstract reasoning may not be due to a specific impairment in conceptual thinking but to attention disorder, verbal disability, or inability to discriminate the stimuli of the test instead.
Abigail B. Sivan and Arthur L. Benton, 1999
The four major classes of cognitive functions have their analogues in the computer operations of input, storage, processing (e.g., sorting, combining, relating data in various ways), and output. Thus, (1) receptive functions involve the abilities to select, acquire, classify, and integrate information; (2) memory and learning refer to information storage and retrieval; (3) thinking concerns the mental organization and reorganization of information; and (4) expressive functions are the means through which information is communicated or acted upon. Each functional class comprises many discrete activities—such as color recognition or immediate memory for spoken words. Although each function constitutes a distinct class of behaviors, normally they work in close, interdependent concert. Despite the seeming ease with which the classes of cognitive functions can be distinguished conceptually, more than merely interdependent, they are inextricably bound together—different facets of the brain’s activity. For example, A.R. Damasio, H. Damasio, and Tranel (1990) described the memory (information storage and retrieval) components of visual recognition. They also noted the role that thinking (concept formation) plays in the seemingly simple act of identifying a visual stimulus by name. Both practical applications and theory-making benefit from our ability to differentiate these various components of behavior. Generally speaking, within each class of cognitive functions a division may be made between verbal and nonverbal functions, where “verbal” refers to functions that mediate verbal/symbolic information and “nonverbal” refers to functions that deal with data that cannot be communicated in words or symbols, such as complex visual or sound patterns. This distinction really refers to the types of material being processed (verbal versus nonverbalizable), rather than the functions per se. 
However, this distinction is a time-tested heuristic tied to observations that these subclasses of functions differ from one another in their neuroanatomical organization and in their behavioral expression while sharing other basic neuroanatomical and psychometric relationships within the functional system. The identification of discrete functions within each class of cognitive functions varies—at least to some extent—with the perspective and techniques of the investigator. Examiners using simple tests that elicit discrete responses can study highly specific functions. Multidimensional tests that call for complex responses measure broader and more complex functions. Although different investigators may identify or define some of the narrower subclasses of functions differently, they tend to agree on the major functional systems and
the large subdivisions. It is important to acknowledge that functional divisions and subdivisions are, to some extent, conceptual constructions that help the clinician understand what goes into the typically very complex behaviors and test responses of their brain impaired patients. Discrete functions described here and in Chapter 3 rarely occur in isolation; normally, they contribute to larger functional patterns elaborated in the highly organized cerebrum. It is important for the examiner to be mindful that some functions may not be assessed; e.g., when, due to practical considerations of time or test environment, relevant tests are not administered, or when the examination is limited to a commercially available battery of tests. In such instances, the examiner may not gain information about how an impaired function is contributing to a patient’s deficits, or the examiner may not even be aware of the integrity (or lack thereof) of these untested functions (Teuber, 1969). Attentional functions differ from the functional groups listed above in that they underlie and, in a sense, maintain the activity of the cognitive functions. To carry the computer analogy a step further, attentional functions serve somewhat as command operations, calling into play one or more cognitive functions. For this reason, they are classified as mental activity variables (see pp. 36–37).
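Purely as an illustration of the computer analogy used in this section (receptive input, memory storage, thinking as processing, expressive output, with attention as a "command operation" that calls the other functions into play), the relationships among the functional classes might be sketched as follows. This is a didactic toy, not a clinical or computational model, and every function and variable name below is our own invention.

```python
# Didactic sketch of the four classes of cognitive functions plus
# attention as a "command operation". All names are illustrative
# inventions; none correspond to clinical constructs or tests.

def receptive(stimulus):           # input: select, classify, integrate
    return {"percept": stimulus.strip().lower()}

MEMORY = {}                        # storage

def learn(key, percept):           # memory and learning: store information
    MEMORY[key] = percept

def think(percept):                # processing: organize/reorganize information
    category = "flower" if percept["percept"] == "daffodil" else "unknown"
    return {"category": category}

def express(thought):              # output: communicate or act on information
    return f"I see a {thought['category']}."

def attend(stimulus):              # attention calls the other functions into play
    percept = receptive(stimulus)
    learn("last_seen", percept)
    return express(think(percept))

print(attend("  Daffodil "))       # -> I see a flower.
```

The point of the sketch is the one made in the text: the "functions" are separable for description, yet any single bit of behavior (the final sentence) is the joint product of all of them operating in concert.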
Neuropsychology and the Concept of Intelligence: Brain Function Is Too Complex To Be Communicated in a Single Score

Clinical research on intelligence has difficulties, as a blackberry-bush has thorns.
D.O. Hebb, 1949
Historically, cognitive activity was often attributed to a single function, usually under the rubric of “intelligence.” Early investigators treated the concept of intelligence as if it were a unitary variable which, somewhat akin to physical strength, increased at a regular rate in the course of normal childhood development (Binet et Simon, 1908; Terman, 1916) and decreased with the amount of brain tissue lost through accident or disease (L.F. Chapman and Wolff, 1959; Lashley, 1938). It is not hard to understand why such a view was appealing. For some clinicians its attractiveness is supported by the consistent finding that intraindividual correlations between various kinds of mental abilities tend to be significant. From a neuropsychological perspective, Piercy (1964) thought of intelligence as “a tendency for cerebral regions subserving different intellectual functions to be proportionately developed in any one
individual. According to this notion, people with good verbal ability will tend also to have good nonverbal ability, in much the same way as people with big hands tend to have big feet” (p. 341). The performance of most adults on cognitive ability tests reflects both this tendency for test scores generally to converge around the same level and for some test scores to vary in differing degrees from the central tendency (Carroll, 1993; Coalson and Raiford, 2008; J.D. Matarazzo and Prifitera, 1989). Also, some neuropsychologists have attempted to identify the neural correlates of “general intelligence,” the construct commonly referred to as Spearman’s g (Spearman, 1927). In psychometric theory, g is considered a general factor of intelligence that contributes to all cognitive activities, reflecting an individual’s overall tendency to perform more or less well on cognitive tasks. Some studies suggest a relationship between specific neural sectors (e.g., the dorsolateral prefrontal cortex [dlPFC]) and this concept of intelligence. For example, dlPFC activation has been reported in ostensibly “high g” tasks such as the Raven Progressive Matrices and similar measures (J. Duncan et al., 2000; J.R. Gray et al., 2003; Njemanze, 2005). M.J. Kane and Engle (2002) proposed a prominent role for the dlPFC in novel reasoning and psychometric g. Other studies have lent further support to this relationship: patients with disproportionate damage to the dlPFC were selectively impaired on tasks requiring the integration of multiple relational premises, including matrix-reasoning-like tasks, again suggesting an association between the dlPFC and g (Waltz et al., 1999). In a large-scale lesion-deficit mapping study, Glascher, Rudrauf, and colleagues (2010) investigated the neural substrates of g in 241 patients with focal brain damage using voxel-based lesion-symptom mapping.
Statistically significant associations were found between g and a circumscribed network in frontal and parietal cortex, including white matter association tracts and frontopolar cortex. Moreover, the neural correlates of g were highly coextensive with those associated with full scale IQ scores. These authors suggest that general intelligence draws on connections between regions that integrate verbal, visuospatial, working memory, and executive processes. Koziol and Budding (2009) provided a similar appraisal, noting that cognitive competency depends on “flexibility of interaction” between cortical/cognitive centers and adaptive features of subcortical systems. The work on g notwithstanding, the mental abilities measured by “intelligence” tests include many different cognitive functions, as well as other kinds of functions such as attention and processing speed (Ardila, 1999a; Frackowiak, Friston, and Frith, 1997; Glascher, Tranel, et al., 2009).
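The psychometric claim behind g (that one shared factor contributes to every task, producing uniformly positive correlations among subtests, the so-called positive manifold) can be made concrete with a small simulation. The subtest names, factor loadings, and sample size below are arbitrary choices for illustration, not values taken from any study cited here.

```python
# Illustrative simulation of the "positive manifold": if each subtest
# score = loading * g + independent noise, then all pairwise subtest
# correlations come out positive (expected r = loading_i * loading_j).
# Loadings and subtest names are arbitrary illustrations.
import random

random.seed(0)
N = 2000
loadings = {"vocabulary": 0.8, "matrices": 0.7, "digit_span": 0.5}

g = [random.gauss(0, 1) for _ in range(N)]            # shared general factor
scores = {
    name: [lam * g[i] + random.gauss(0, (1 - lam**2) ** 0.5) for i in range(N)]
    for name, lam in loadings.items()                  # unit-variance scores
}

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = (sum((v - mx) ** 2 for v in x) / n) ** 0.5
    sy = (sum((v - my) ** 2 for v in y) / n) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)

for a in loadings:
    for b in loadings:
        if a < b:                                      # each pair once
            print(a, b, round(corr(scores[a], scores[b]), 2))
```

All three printed correlations are positive, which is exactly the pattern ("intraindividual correlations between various kinds of mental abilities tend to be significant") that made the unitary view of intelligence attractive; the text's larger argument is that this statistical regularity does not license a single summary score for an individual patient.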
Neuropsychological research has contributed significantly to refinements in the definition of “intelligence” (Glascher, Tranel, et al., 2009; Mesulam, 2000b). One of neuropsychology’s earliest findings was that the summation scores (i.e., “intelligence quotient” [“IQ”] scores) on standard intelligence tests do not bear a predictably direct relationship to the size of brain lesions (Hebb, 1942; Maher, 1963). When a discrete brain lesion produces deficits involving a broad range of cognitive functions, these functions may be affected in different ways. Abilities most directly served by the damaged tissue may be destroyed; associated or dependent abilities may be depressed or distorted; other abilities may be spared or even heightened or enhanced (see pp. 346–347). In degenerative neurological conditions, such as Alzheimer’s disease, major differences in the vulnerability of specific mental abilities to the effects of the brain’s deterioration appear as some functions are disrupted in the early stages of the disease while others may remain relatively intact for years (see Chapter 7, passim). Moreover, affected functions tend to decline at different rates. In normal aging, different mental functions also undergo change at different rates (e.g., Denburg, Cole, et al., 2007; Denburg, Tranel, and Bechara, 2005; Salthouse, 2009, 2010; pp. 356–360). In cognitively intact adults, too, singular experiences plus specialization of interests and activities contribute to intraindividual differences (e.g., Halpern, 1997). Socialization and cultural differences, personal expectations, educational limitations, emotional disturbance, physical illness or handicaps, and brain dysfunction are but some of the many factors that tend to magnify intraindividual test differences to significant proportions (e.g., see A.S. Kaufman, McLean, and Reynolds, 1988; Sternberg, 2004; Suzuki and Valencia, 1997).
Subtle measurements of brain substance and function have shown that some persons’ brains may undergo highly differentiated development, typically involving an area or related areas, in response to repeated experience and, especially, to intense practice of a skill or activity (Restak, 2001). Another major problem with a construct such as Spearman’s g is that it cannot accommodate theories of multiple intelligences (Gardner, 1983) and, in particular, fails to incorporate emotional abilities and social intelligence (e.g., Salovey and Mayer, 1990). These important aspects of behavioral competency become evident in their absence—with paradigmatic examples in the oft-cited observations of patients with damage to prefrontal cortices, especially in the ventromedial prefrontal cortex (vmPFC), who typically manifest major disruptions of complex decision-making, planning, social conduct, and emotional regulation, but have remarkably well-preserved conventional intelligence as measured by standard mental ability tests. A patient (EVR)
reported by Eslinger and Damasio (1985) is a case in point: his WAIS-R IQ scores were well into the superior range (Verbal IQ score = 129; Performance IQ score = 135; Full Scale IQ score = 135), but he was prototypical of someone with severely disrupted decision-making, planning, and social conduct following vmPFC damage. Similar patients have been described by other investigators (e.g., Blair and Cipolotti, 2000; P.W. Burgess and Shallice, 1996; Shallice and Burgess, 1991). Most neuropsychologists who have seen many patients with injuries from motor vehicle accidents have similar stories. Such findings have led to the conclusion that, when considering the role of the frontal lobes in human intellect, it is important to distinguish between intelligence as a global capacity to engage in adaptive, goal-directed behavior, and intelligence as defined by performance on standard psychometric instruments (e.g., Bechara, H. Damasio, Damasio, and Anderson, 1994; P.W. Burgess, Alderman, Forbes, et al., 2006; A.R. Damasio, Anderson, and Tranel, 2011). Although the frontal cortices constitute a necessary anatomical substrate for human intelligence as a global adaptive capacity, extensive frontal lobe damage may have little or no impact on abilities measured by intelligence tests. Real life intelligent behavior requires more than basic problem solving skills: in real life problems, unlike most artificial problems posed by tests, the relevant issues, rules of engagement, and endpoints are often not clearly identified. In addition, real life behaviors often introduce heavy time processing and working memory demands, including a requirement for prioritization and weighing of multiple options and possible outcomes. Altogether, such factors seem to conspire against patients with frontal lobe damage, who, despite good “IQ”scores, cannot effectively deploy their intelligence in real world, online situations. 
Thus, knowledge of the complexities of brain organization and brain dysfunction makes the unitary concept of intelligence essentially irrelevant and potentially hazardous for neuropsychological assessment. “Cognitive abilities” or “mental abilities” are the terms we will use when referring to those psychological functions dedicated to information reception, processing, and expression, and to executive functions—the abilities necessary for metacognitive control and direction of mental experience.

“IQ” and other summation or composite scores

The term IQ is bound to the myths that intelligence is unitary, fixed, and predetermined… . As long as the term IQ is used, these myths will complicate efforts to communicate the meaning of test results and classification decisions.
D. J. Reschly, 1981
“IQ” refers to a derived score used in many test batteries designed to measure a hypothesized general ability, viz., “intelligence.” IQ scores obtained from such tests represent a composite of performances on different kinds of items, on different items in the same tests administered at different levels of difficulty, on different items in different editions of test batteries bearing the same name, or on different batteries contributing different kinds of items (M.H. Daniel, 1997; Loring and Bauer, 2010; Urbina, 2004). Composite IQ scores are often good predictors of academic performance, which is not surprising given their heavy loading of school-type and culturally familiar items; many studies have shown that performance on “intelligence” tests is highly correlated with school achievement (e.g., Ormrod, 2008; see also Sternberg, Grigorenko, and Kidd, 2005). For neuropsychologists, however, composite IQ scores represent so many different kinds of conflated and confounded functions as to be conceptually meaningless (Lezak, 1988b). In neuropsychological assessment, IQ scores—whether they be high or low—are notoriously unreliable indices of neuropathic deterioration. Specific defects restricted to certain test modalities, for example, may give a completely erroneous impression of significant intellectual impairment when actually many cognitive functions may be relatively intact but the total score is depressed by low scores on tests involving the impaired function(s).

A year after sustaining three concussions in soccer play within one month, a 16-year-old high school student and her mother were informed that she never was a good student and never could be, as her full scale IQ score was 60. At the time of the examination she was troubled with headaches, dizziness, and a depressed state; being unable to function in a noisy, bright classroom, she was tutored at home, had become socially isolated, and was unable to engage in sports.
Not surprisingly, her Wechsler battery scaled scores on the two timed visuographic tests were 1, and she scored 3s on each of the three attention tests (Digit Span, Letter-Number Sequencing, Arithmetic). Most other scores were in the 9th to 16th percentile range except for a scaled score of 10 on Matrix Reasoning; the IQ score had been computed on a Comprehension score of 7, but when rescored it was 8. Shortly thereafter a visual misalignment was found; she began vision training and also entered a rehabilitation program focused on dizziness and balance problems. On ImPACT testing (see p. 760), given weeks after taking this examination, all scores were < 1%, reflecting her significant problems with attention and slowed processing speed. Twenty months later, her ImPACT verbal memory score was at the 65th percentile and her reaction time was at the 75th percentile. She returned to school and earned A’s in two subjects but was struggling with mathematics and chemistry. Her preinjury grade point averages had hovered just above 3.0. (A one-month update: math and chemistry grades were now B’s with some tutoring and time allowances on tests.)
Conversely, IQ scores may obscure selective defects in specific tests (A. Smith, 1966). Leathem (1999) illustrated this point with the case of a postencephalitic man who “could not learn anything new,” but achieved an IQ score of 128. In addition, derived scores based on a combination of scores from two or
more measures of different abilities potentially result in loss of important information. Should the levels of performance on the combined measures differ, the composite score—which will fall somewhere between the highest and the lowest of the combined measures—will be misleading (Lezak, 2002). Averaged scores on a Wechsler Intelligence Scale battery provide just about as much information as do averaged scores on a school report card. Aside from the extreme ends of the spectrum (e.g., students with a four-point grade average, who can only have had an A in each subject, and those with a zero grade average, who failed every subject), it is impossible to predict performance in any one subject from the overall grade point average. In the same way, it is impossible to predict specific disabilities or areas of competency from averaged ability test scores (e.g., “IQ” scores). Thus, to a large extent, composite scores of any kind have no place in neuropsychological assessment. “IQ” is also popular shorthand for the concept of intelligence; e.g., in statements such as “‘IQ’ is a product of genetic and environmental factors.” It may refer to the now disproven idea of an inborn quantity of mental ability residing within each person and demonstrable through appropriate testing; e.g., “Harry is a good student, he must have a high IQ” (Lezak, 1988b). Moreover, interpretations of IQ scores in terms of what practical meaning they might have can vary widely, even among professionals, such as high school teachers and psychiatrists, whose training ostensibly could have provided a common understanding of these scores (L. Wright, 1970). Such misunderstandings further underscore the hazards of using IQ scores to summarize persons’ abilities. Unfortunately, the commonly accepted institutionalization of “IQ” scores by public agencies can add further misery to already tragic situations (see Kanaya et al., 2003).
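The report-card analogy can be put into numbers. In the sketch below, two hypothetical patients obtain the identical composite (mean) score, yet only one shows the kind of large intraindividual discrepancy that matters clinically; the subtest names and scaled scores are invented for illustration.

```python
# Hypothetical illustration: identical composite scores can mask very
# different profiles. Subtest names and scaled scores are invented.
patient_a = {"vocabulary": 12, "block_design": 11, "digit_span": 10,
             "coding": 11, "matrices": 11}        # flat, intact profile
patient_b = {"vocabulary": 15, "block_design": 14, "digit_span": 13,
             "coding": 3, "matrices": 10}         # one severe deficit

def mean(profile):
    return sum(profile.values()) / len(profile)

def spread(profile):
    return max(profile.values()) - min(profile.values())

print(mean(patient_a), mean(patient_b))           # both 11.0: same composite
print(spread(patient_a), spread(patient_b))       # 2 vs 12: very different profiles
```

The composite score alone would treat these two profiles as equivalent; only examination of the discrete scores reveals that patient B's single very low score sits far below that patient's own level of performance, the kind of disparity on which neuropsychological interpretation turns.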
Many patients with dementing disorders, brain injuries, or brain diseases, whose mental abilities have deteriorated to the point that they cannot continue working, will still perform sufficiently well on enough of the tests in Wechsler Intelligence Scale batteries to be denied (United States) Social Security Disability benefits. One criterion the Social Security Disability Insurance (SSDI) agency uses is a drop in IQ score of at least 15 points from premorbid levels, an arbitrary number that might qualify some patients but disqualifies others. Thus, SSDI may refuse benefits to cognitively disabled persons simply on the grounds that their IQ score is too high, even when appropriate assessment reveals a pattern of disparate levels of functioning that precludes the patient from earning a living. This continues to be a major problem. Newer versions of the Wechsler batteries (WAIS-III/IV [Wechsler, 1997a;
PsychCorp, 2008], WISC-IV [PsychCorp, 2003]) have introduced various “Index Scores” in addition to (WAIS-III) or in place of (WAIS-IV, WISC-IV) traditional IQ scores. This reorganization of data summation according to large areas of brain function, rather than the simplistic (and erroneous) verbal/performance split of the early Wechsler Intelligence Scale (WIS) editions, is a step in the right direction. However, these new summed scores are still combinations of individual tests, each involving a complex of functions. Thus, Index Scores, too, can obscure important information obtainable only by examining and comparing the discrete test scores (see pp. 719–720). Large differences between discrete test scores can illuminate important basic problems which would be submerged or entirely obfuscated by an Index Score.

One must never misconstrue a normal intelligence test result as an indication of normal intellectual status after head trauma, or worse, as indicative of a normal brain; to do so would be to commit the cardinal sin of confusing absence of evidence with evidence of absence [italics, mdl]. (Teuber, 1969)
In sum, “IQ” as a score is often meaningless and not infrequently misleading as well. In fact, in most respects “IQ”—whether concept, score, or catchword—has outlived whatever usefulness it may once have had. In neuropsychological practice in particular, it is difficult to justify any continued use of the notion of “IQ.”

CLASSES OF COGNITIVE FUNCTIONS

With our growing knowledge about how the brain processes information, it becomes increasingly challenging to make theoretically acceptable distinctions between the different functions involved in human information processing. In the laboratory, precise distinctions between sensation and perception, for example, may depend upon whether incoming information is processed by analysis of superficial physical and sensory characteristics or through pattern recognition and meaningful (e.g., semantic) associations. The fluidity of theoretical models of perception and memory in particular becomes apparent in the admission that “We have no way of distinguishing what might be conceived of as the higher echelons of perception from the lower echelons of recognition… . [T]here is no definable point of demarcation between perception and recognition” (A.R. Damasio, Tranel, and Damasio, 1989, p. 317). A.R. Damasio and colleagues were stressing their appreciation that no “line” clearly divides perceptual processes from recognition processes. This becomes evident when considering studies of nonconscious “recognition” in
prosopagnosia (see p. 444). These patients cannot provide any overt indication that they recognize familiar faces, yet they respond with psychophysiological responses to those faces, indicating that both perception and some aspects of memory are still operating successfully but without conscious awareness (e.g., Bauer and Verfaellie, 1988; Tranel and Damasio, 1985; Tranel and Damasio, 1988). The same can be said for many other cognitive functions. It is typically unclear, and in most cases virtually impossible, to demarcate a distinctive boundary where one function stops and the other begins. Rather than entering theoretical battlegrounds on ticklish issues that are not especially germane to most practical applications in neuropsychology, we shall discuss cognitive functions within a conceptual framework that has proven useful in psychological assessment generally and in neuropsychological assessment particularly. In so doing, however, we acknowledge that there are sophisticated and valid conceptualizations of cognitive functions in the experimental literature that may differ from the organizational structure we proffer. As neuropsychology evolves, we hope that reliable and valid lessons from that literature will continue to inform the practice of clinical neuropsychology and, especially, the development of specific tests for measuring specific functions.
Receptive Functions

Entry of information into the central processing system proceeds from sensory stimulation, i.e., sensation, through perception, which involves the integration of sensory impressions into psychologically meaningful data, and thence into memory. Thus, for example, light on the retina creates a visual sensation; perception involves encoding the impulses transmitted by the aroused retina into a pattern of hues, shades, and intensities eventually recognized as a daffodil in bloom. The components of sensation can be fractionated into very small and remarkably discrete receptive units. The Nobel Prize-winning research of Hubel and Wiesel (1968) demonstrated that neurons in the visual cortex are arranged in columns that respond preferentially to stimuli at specific locations and at specific orientations. This early work was later replicated and extended by Margaret Livingstone and David Hubel (1988), who showed that discrete neural units are dedicated to the processing of elementary sensory properties such as form versus color versus movement. Moreover, the fractionation at this basic sensory level is paralleled by like dissociations at the
cognitive/behavioral level, where, for example, patients can selectively lose the capability to see form, or to see color, or to see depth or movement (e.g., A.R. Damasio, Tranel, and Rizzo, 2000).

Sensory reception
Sensory reception involves an arousal process that triggers central registration leading to analysis, encoding, and integrative activities. The organism receives sensation passively, shutting it out only, for instance, by holding the nose to avoid a stench or closing the eyes to avoid bright light. Even in soundest slumber, a stomach ache or a loud noise will rouse the sleeper. However, the perception of sensations also depends heavily on attentional factors (Meador, Allison, et al., 2002; Meador, Ray, et al., 2001). Neuropsychological assessment and research focus primarily on the five traditional senses: sight, hearing, touch, taste, and smell, although, commensurate with their importance in navigating the world, sight and hearing have received the most attention.

Perception and the agnosias
Perception involves active processing of the continuous torrent of sensations as well as their inhibition or filtering from consciousness. This processing comprises many successive and interactive stages. The simplest physical or sensory characteristics, such as color, shape, or tone, come first in the processing sequence and serve as foundations for the more complex “higher” levels of processing that integrate sensory stimuli with one another and with past experience (Fuster, 2003; A. Martin, Ungerleider, and Haxby, 2000; Rapp, 2001, passim). Normal perception in the healthy organism is a complex process engaging many different aspects of brain functioning (Coslett and Saffran, 1992; Goodale, 2000; Lowel and Singer, 2002). Like other cognitive functions, the extensive cortical distribution and complexity of perceptual activities make them highly vulnerable to brain injury. Perceptual defects resulting from brain injury can occur through loss of a primary sensory input such as vision or smell and also through impairment of specific integrative processes. Although it may be difficult to separate the sensory from the perceptual components of a behavioral defect in some severely brain injured patients, sensation and perception each has its own functional integrity. This can be seen when perceptual organization is maintained despite very severe sensory defects or when perceptual functions are markedly disrupted in patients with little or no sensory deficit. The nearly deaf person can readily understand speech patterns
when the sound is sufficiently amplified, whereas some brain damaged persons with keen auditory acuity cannot make sense of what they hear. The perceptual functions include such activities as awareness, recognition, discrimination, patterning, and orientation. Impairments in perceptual integration appear as disorders of recognition, classically known as the “agnosias” (literally, no knowledge). Teuber (1968) clarified the distinction between sensory and perceptual defects by defining agnosia as “a normal percept stripped of its meanings.” In general, the term agnosia signifies lack of knowledge and denotes an impairment of recognition. Since a disturbance in perceptual activity may affect any of the sensory modalities as well as different aspects of each one, a catalogue of discrete perceptual disturbances can be quite lengthy. For example, Benson (1989) listed six different kinds of visual agnosias. Bauer (2011) identified three distinctive auditory agnosias, and M. Williams (1979) described another three involving various aspects of body awareness. These lists can be expanded, for within most of these categories of perceptual defect there are functionally discrete subcategories. For instance, loss of the ability to recognize faces (prosopagnosia or face agnosia), one of the visual agnosias, can occur with or without intact abilities to recognize associated characteristics such as a person’s facial expression, age, and sex (Tranel, A.R. Damasio, and H. Damasio, 1988). Other highly discrete dissociations also occur within the visual modality, e.g., inability to recognize a person’s face with intact recognition for the same person’s gait, or inability to recognize certain categories of concrete entities with intact recognition of other categories (e.g., man-made tools vs. natural objects, animals versus fruits and vegetables) (H. Damasio, Tranel, Grabowski, et al., 2004; Tranel, Feinstein, and Manzel, 2008; Warrington and James, 1986). 
Such dissociations reflect the processing characteristics of the neural systems that form the substrates of knowledge storage and retrieval. One basic dichotomy that has proven useful, at least at the heuristic level, is the distinction between “associative” and “apperceptive” agnosia. This distinction is an old one (Lissauer, 1890); it refers to a basic difference in the mechanism underlying the recognition disorder. Associative agnosia is a failure of recognition that results from defective retrieval of knowledge pertinent to a given stimulus. Here, the problem is centered on memory: the patient is unable to recognize a stimulus (i.e., to know its meaning) despite being able to perceive the stimulus normally (e.g., to see shape, color, texture; to hear frequency, pitch, timbre; and so forth). Apperceptive agnosia, by contrast, is a disturbance of the integration of otherwise normally perceived components of
a stimulus. Here, the problem is centered more on perception: the patient fails to recognize a stimulus because the patient cannot integrate the perceptual elements of the stimulus even though those individual elements are perceived normally. It should be clear that the central feature in designating a condition as “agnosia” is a recognition defect that cannot be attributed simply or entirely to faulty perception. Even though the two conditions may show some overlap, in clinical practice it is usually possible to make a distinction between these two basic forms of agnosia (e.g., Tranel and Grabowski, 2009).
Memory

If any one faculty of our nature may be called more wonderful than the rest, I do think it is memory. There seems something more speakingly incomprehensible in the powers, the failures, the inequalities of memory, than in any other of our intelligences. The memory is sometimes so retentive, so serviceable, so obedient—at others, so bewildered and so weak—and at others again, so tyrannic, so beyond control!—We are to be sure a miracle every way—but our powers of recollecting and forgetting, do seem peculiarly past finding out.
Jane Austen, Mansfield Park, 1814 [1961]
Central to all cognitive functions and probably to all that is characteristically human in a person’s behavior is the capacity for memory, learning, and intentional access to knowledge stores, as well as the capacity to “remember” in the future (e.g., to use memory to “time travel” into the future, to imagine what will be happening to us at some future time, to plan for future activities, and so on). Memory frees the individual from dependency on physiological urges or situational happenstance for pleasure seeking; dread and despair do not occur in a memory vacuum. Severely impaired memory isolates patients from practically meaningful contact with the world about them and deprives them of a sense of personal continuity, rendering them helplessly dependent. Even mildly to moderately impaired memory can have a very disorienting effect.

Different memory systems
Surgery for epilepsy, in which the medial temporal lobes were resected bilaterally, unexpectedly left the now famous patient, HM, with a severe inability to learn new information or recall ongoing events, i.e., he had a profound amnesia (literally, no memory), which, in his case, was anterograde (involving new experiences; see p. 28). Careful studies of HM by Brenda Milner (1962, 1965) and later by Corkin (1968) and N.J. Cohen and Squire (1980) showed that, despite his profound amnesia, HM was capable of learning new motor skills and other procedural-based abilities that did not rely on
explicit, conscious remembering. This remarkable dissociation was replicated and extended in other severely amnesic patients, including the patient known as Boswell studied by the Damasio group at Iowa (Tranel, A.R. Damasio, H. Damasio, and Brandt, 1994). Such work has provided the foundation for conceptualizing memory functions in terms of two long-term storage and retrieval systems: a declarative system, or explicit memory, which deals with facts and events and is available to consciousness; and a nondeclarative or implicit system, which is “nonconscious” (B. Milner, Squire, and Kandel, 1998; Squire and Knowlton, 2000). Depending on one’s perspective, the count of memory systems or kinds of memory varies. From a clinical perspective, Mayes (2000a) divided declarative memory into semantic (fact memory) and episodic (autobiographic memory), and nondeclarative memory into item-specific implicit memory and procedural memory (see also Baddeley, 2002). Numerous other divisions and subclassifications of memory systems have been proposed (e.g., B. Milner et al., 1998; Salmon and Squire, 2009). On reviewing the memory literature, Endel Tulving (2002b) found no fewer than “134 different named types of memory.” For clinical purposes, however, the dual system conceptualization—into declarative (explicit) and nondeclarative (implicit) memory with its major subsystems—provides a useful framework for observing and understanding patterns of memory competence and deficits presented by patients.

Declarative (explicit) memory
Most memory research and theory has focused on abilities to learn about and remember information, objects, and events. For all intents and purposes, this is the kind of memory that patients may be referring to when complaining of memory problems, that teachers address for most educational activities, and that is the “memory” of common parlance. It has been described as “the mental capacity of retaining and reviving impressions, or of recalling or recognizing previous experiences … act or fact of retaining mental impressions” (J. Stein, 1966) and, as such, always requires awareness (Moscovitch, 2000). Referring to it as “explicit memory,” Demitrack and his colleagues (1992) pointed out that declarative memory involves “a conscious and intentional recollection” process. Thus, declarative memory refers to information that can be brought to mind and inspected in the “mind’s eye,” and, in that sense, “declared” (Tranel and Damasio, 2002).

Stages of memory processing
Despite the plethora of theories about stages (R.C. Atkinson and Shiffrin, 1968; G.H. Bower, 2000; R.F. Thompson, 1988) or processing levels (S.C. Brown and Craik, 2000; Craik, 1979), for clinical purposes a three-stage or elaborated two-stage model of declarative memory provides a suitable framework for conceptualizing and understanding dysfunctional memory (McGaugh, 1966; Parkin, 2001; Strub and Black, 2000).

1. Registration, or sensory, memory holds large amounts of incoming information briefly (on the order of seconds) in sensory store (Balota et al., 2000; Vallar and Papagno, 2002). It is neither strictly a memory function nor a perceptual function but rather a selecting and recording process by which perceptions enter the memory system. The first traces of a stimulus may be experienced as a fleeting visual image (iconic memory, lasting up to about 200 msec) or auditory “replay” (echoic memory, lasting up to about 2,000 msec), indicating early stage processing that is modality specific (Fuster, 1995; Koch and Crick, 2000). The affective, set (perceptual and response predisposition), and attention-focusing components of perception play an integral role in the registration process (S.C. Brown and Craik, 2000; Markowitsch, 2000). Information being registered is further processed as short-term memory, or it quickly decays.

2a. Immediate memory, the first stage of short-term memory (STM) storage, temporarily holds information retained from the registration process. While theoretically distinguishable from attention, in practice, short-term memory may be equated with simple immediate span of attention (Baddeley, 2000; Howieson and Lezak, 2002b; see p. 402). Immediate memory serves “as a limited capacity store from which information is transferred to a more permanent store” and also “as a limited capacity retrieval system” (Fuster, 1995; see also Squire, 1986).
Having shown that immediate memory normally handles about seven “plus or minus two” bits of information at a time, G.A. Miller (1956) observed that this restricted holding capacity of “immediate memory impose[s] severe limitations on the amount of information that we are able to perceive, process, and remember.” Immediate memory is of sufficient duration to enable a person to respond to ongoing events when more enduring forms of memory have been lost. It typically lasts from about 30 seconds up to several minutes. Although immediate memory is usually conceptualized as a unitary process, Baddeley (1986, 2002) showed how it may operate as a set of subsystems “controlled by a limited capacity executive system,” which together constitute working memory, the temporary storage and processing system used for
problem solving and other cognitive operations that take place over a limited time frame. Baddeley proposed that working memory consists of two subsystems, one for processing language—the “phonological loop”—and one for visuospatial data—the “visuospatial sketch pad.” The functions of working memory are “to hold information in mind, to internalize information, and to use that information to guide behavior without the aid of or in the absence of reliable external cues” (Goldman-Rakic, 1993, p. 15). Numerous studies have supported Hebb’s (1949) insightful hunch that information in immediate memory is temporarily maintained in reverberating neural circuits (self-contained neural networks that sustain neural activity by channeling it repeatedly through the same network) (Fuster, 1995; McGaugh et al., 1990, passim; Shepherd, 1998). If not converted into a more stable biochemical organization for longer lasting storage, the electrochemical activity that constitutes the immediate memory trace spontaneously dissipates and the memory is not retained. For example, only the rare reader with a “photographic” memory will be able to recall verbatim the first sentence on the preceding page although almost everyone who has read this far will have just seen it.

2b. Rehearsal is any repetitive mental process that serves to lengthen the duration of a memory trace (S.C. Brown and Craik, 2000). With rehearsal, a memory trace may be maintained for hours (in principle, indefinitely). Rehearsal increases the likelihood that a given bit of information will be permanently stored but does not ensure it (Baddeley, 1986).

2c. Another kind of short-term memory may be distinguished from immediate memory in that it lasts from an hour or so to one or two days—longer than a reverberating circuit could be maintained by even the most conscientious rehearsal efforts, but not yet permanently fixed as learned material in long-term storage (Fuster, 1995; Tranel and Damasio, 2002).
This may be evidence of an intermediate step “in a continuous spectrum of interlocked molecular mechanisms of … the multistep, multichannel nature of memory” (Dudai, 1989).

3. Long-term memory (LTM) or secondary memory—i.e., learning, the acquisition of new information—refers to the organism’s ability to store information. Long-term memory is most readily distinguishable from short-term memory in amnestic patients, i.e., persons unable to retain new information for more than a few minutes without continuing rehearsal. Although amnesic conditions may have very different etiologies (see Chapter 7, passim), they all have in common a relatively intact short-term memory capacity with significant long-term memory impairments (Baddeley and
Warrington, 1970; O’Connor and Verfaellie, 2002; Tranel, H. Damasio, and Damasio, 2000). The process of storing information as long-term memory—i.e., consolidation—may occur quickly or continue for considerable lengths of time, even without active, deliberate, or conscious effort (Lynch, 2000; Mayes, 1988; Squire, 1987). Learning implies consolidation: what is learned is consolidated. Larry Squire has written that “Consolidation best refers to a hypothesized process of reorganization within representations of stored information, which continues as long as information is being forgotten” (Squire, 1986, p. 241). Many theories of memory consolidation propose a gradual transfer of memory that requires processing from hippocampal and medial temporal lobe structures to the neocortex for longer term storage (Kapur and Brooks, 1999; B. Milner et al., 1998). “Learning” often requires effortful or attentive activity on the part of the learner. Yet when the declarative memory system is intact, much information is also acquired without directed effort, by means of incidental learning (Dudai, 1989; Kimball and Holyoak, 2000). Incidental learning tends to be susceptible to impairment with some kinds of brain damage (S. Cooper, 1982; C. Ryan, Butters, Montgomery, et al., 1980). Long-term memory storage presumably involves a number of processes occurring at the cellular level, although much of this is poorly understood in humans.
These processes include neurochemical alterations in the neuron (nerve cell); neurochemical alterations of the synapse (the point of interaction between nerve cell endings) that may account for differences in the amount of neurotransmitter released or taken up at the synaptic juncture; elaboration of the dendritic (branching out) structures of the neuron to increase the number of contacts made with other cells (Fuster, 1995; Levitan and Kaczmarek, 2002; Lynch, 2000); and perhaps pruning or apoptosis (programmed cell death) of some connections with disuse (Edelman, 1989; Huttenlocher, 2002) and in brain development (Low and Cheng, 2006; Walmey and Cheng, 2006). Memories are not stored in a single local site; rather, memories involve contributions from many cortical and subcortical centers (Fuster, 1995; Markowitsch, 2000; Mendoza and Foundas, 2008), with “different brain systems playing different roles in the memory system” (R.F. Thompson, 1976). Encoding, storage, and retrieval of information in the memory system appear to take place according to both principles of association (Levitan and Kaczmarek, 2002; McClelland, 2000) and “characteristics that are unique to a particular stimulus” (S.C. Brown and Craik, 2000, p. 98). Thus, much of the information in the long-term storage system appears to be organized on the
basis of meaning and associations, in contrast to the short-term storage system where it is organized in terms of contiguity or of sensory properties such as similar sounds, shapes, or colors (G.H. Bower, 2000; Craik and Lockhart, 1972). Breakdown in storage or retrieval capacities results in distinctive memory disorders. Recent and remote memory are clinical terms that refer, respectively, to autobiographical memories stored within the last few hours, days, weeks, or even months and to older memories dating from early childhood (e.g., Strub and Black, 2000; see also Neisser and Libby, 2000). In intact persons it is virtually impossible to determine where recent memory ends and remote memory begins, for there are no major discontinuities in memory from the present to early wisps of infantile recollection. However, a characteristic autobiographical “memory bump” begins around age ten and lasts until the early 30s, such that persons typically can recollect more numerous and more vivid memories from this time period of their life (Berntsen and Rubin, 2002; D. Rubin and Schulkind, 1997; see Buchanan et al., 2005, 2006, for neuropsychological studies related to this phenomenon).

Amnesia
Impaired memory—amnesia—results from a disturbance of the processes of registration, storage, or retrieval. The severity of the amnesia can range from subtle to profound: on the more severe end of the spectrum, patients can lose virtually all of their episodic memory and capacity to learn new information (e.g., Damasio, Eslinger, et al., 1985; J.S. Feinstein, Rudrauf, et al., 2010; Scoville and Milner, 1957). Lesion location is a major factor determining the specific nature of the memory impairment (e.g., Tranel and Damasio, 2002). Time-limited memory deficits can occur in conditions such as head injury, electroconvulsive therapy (ECT), and transient global amnesia. In such cases, the amnesia is limited to a fairly discrete period (e.g., minutes or hours) while memories before and after that period remain intact. The most common form of amnesia, anterograde amnesia, is an inability to acquire new information normally. It is the most typical memory impairment that follows the onset of a neurological injury or condition and is tantamount to impaired learning. Anterograde amnesia is a hallmark symptom of Alzheimer’s disease. Moreover, it occurs with nearly all conditions that have an adverse impact on the functioning of the mesial temporal lobe and especially the hippocampus (see pp. 83–86). The kind and severity of the memory defect vary somewhat with the nature of the disorder (O’Connor and Verfaellie, 2002; Y. Stern and Sackeim, 2008) and extent of hippocampal
destruction (J.S. Allen et al., 2006). Loss of memory for events preceding the onset of brain injury, often due to trauma, is called retrograde amnesia. The time period for the memory loss tends to be relatively short (30 minutes or less) with TBI but can be extensive (E. Goldberg and Bilder, 1986). When retrograde amnesia occurs with brain disease, loss of one’s own history and events may go back years and even decades (N. Butters and Cermak, 1986; Corkin, Hurt, et al., 1987; J.S. Feinstein, Rudrauf, et al., 2010). There can be a rough temporal gradient to retrograde amnesia in that newer memories tend to be more vulnerable to loss than older ones on a sort of “first in, last out” principle (M.S. Albert, Butters, and Levin, 1979; Squire, Clark, and Knowlton, 2001). Many patients show a striking dissociation between anterograde and retrograde memory; typically, anterograde memory is impaired and retrograde is spared. This pattern indicates that the anatomical structures involved in new learning versus those required for retrieval of old memories are different (Markowitsch, 2000; Tranel and Damasio, 2002). The acquisition of new declarative information requires a time-sensitive, temporary processing system that is important for the formation and short-term maintenance of memories (the hippocampal complex, pp. 83–86). Long-term and permanent memories are maintained and stored elsewhere, especially in anterolateral areas of the temporal lobe and higher order sensory association cortices (R.D. Jones, Grabowski, Tranel, et al., 1998). Long-enduring retrograde amnesia that extends back for years or decades is usually accompanied by an equally prominent anterograde amnesia; these patients neither recall much of their history nor learn much that is new.
Dense retrograde amnesia in the absence of any problems with anterograde memory is highly uncommon as a bona fide neurological condition; complaints of such a problem raise the question of other, often psychiatric, factors at play (Kritchevsky et al., 2004; Stracciari et al., 2008).

A 52-year-old machine maintenance man complained of “amnesia” a few days after his head was bumped in a minor traffic accident. He knew his name but denied memory for any personal history preceding the accident while registering and retaining postaccident events, names, and places normally. This burly, well-muscled fellow moved like a child, spoke in a soft—almost lisping—manner, and was only passively responsive in interview. He was watched over by his woman companion who described a complete personality change since the accident. She reported that he had been raised in a rural community in a southeastern state and had not completed high school. With these observations and this history, rather than begin a battery of tests, he was hypnotized. Under hypnosis, a manly, pleasantly assertive, rather concrete-minded personality emerged. In the course of six hypnotherapy sessions the patient revealed that, as a prize fighter when young, he had learned to consider his fists to be “lethal weapons.” Some years before the
accident he had become very angry with a brother-in-law who picked a fight and was knocked down by the patient. Six days later this man died, apparently from a previously diagnosed heart condition; yet the patient became convinced that he had killed him and that his anger was potentially murderous. Just days before the traffic accident, the patient’s son informed him that he had fathered a baby while in service overseas but was not going to take responsibility for baby or mother. This enraged the patient who reined in his anger only with great effort. He was riding with his son when the accident occurred. A very momentary loss of consciousness when he bumped his head provided a rationale—amnesia—for a new, safely ineffectual personality to evolve, fully dissociated from the personality he feared could murder his son. Counseling under hypnosis and later in his normal state helped him to learn about and cope with his anger appropriately.

Aspects and elements of declarative memory
Recall vs. recognition. The effectiveness of the memory system also depends on how readily and completely information can be retrieved. Information retrieval is remembering, which, when it occurs through recall, involves an active, complex search process (S.C. Brown and Craik, 2000; Mayes, 1988). The question, “What is the capital of Oregon?” tests the recall function. When a like stimulus triggers awareness, remembering takes place through recognition. The question, “Which of the following is the capital of Oregon: Albany, Portland, or Salem?” tests the recognition function. Retrieval by recognition is much easier than free recall for both intact and brain impaired persons (N. Butters, Wolfe, Granholm, and Martone, 1986; M.K. Johnson, 1990). On superficial examination, retrieval problems can be mistaken for learning or retention problems, but appropriate testing techniques can illuminate and clarify the nature of the memory defect. Elements of declarative memory. That there are many different kinds of memory functions becomes abundantly clear with knowledge of pathological brain conditions, as dissociations between the different mnestic disorders emerge in various neurological disorders (Shimamura, 1989; Stuss and Levine, 2002; Verfaellie and O’Connor, 2000). For example, in addition to the basic distinction between short-term and long-term memory, memory subsystems are specialized for the nature of the information to be learned, e.g., verbal or nonverbal. Thus, there is a fairly consistent relationship between the side of the lesion and the type of learning impairment, such that damage to the left hippocampal system produces an amnesic syndrome that affects verbal material (e.g., spoken words, written material) but spares nonverbal material; conversely, damage to the right hippocampal system affects nonverbal material (e.g., complex visual and auditory patterns) but spares verbal material (e.g., Milner, 1974; O’Connor and Verfaellie, 2002). 
After damage to the left hippocampus, for example, a patient may lose the ability to learn new names
but remain capable of learning new faces and spatial arrangements (e.g., Tranel, 1991). Conversely, damage to the right hippocampal system frequently impairs the ability to learn new geographical routes (e.g., Barrash et al., 2000; see also p. 400). Another distinction can be made for modality specific memory, which depends on the specific sensory modality of testing and is most often identified when examining working memory (Conant et al., 1999; Fastenau, Conant, and Lauer, 1998). Brain disease can affect different kinds of memories in long-term storage differentially: the dissociations that can manifest in brain damaged patients often seem remarkable. For example, a motor speech habit, such as organizing certain sounds into a word, may be wholly retained while rules for organizing words into meaningful speech are lost (H. Damasio and Damasio, 1989; Geschwind, 1970). Recognition of printed words or numbers may be severely impaired while speech comprehension and picture recognition remain relatively intact. Moreover, neural structures in different parts of the left temporal lobe are important for retrieving names of objects from different conceptual categories; thus, focal damage to the anterior and/or lateral parts of the left temporal lobe may result in category-related naming defects such that a patient can retrieve common nouns but not proper nouns, or can retrieve names for tools/utensils but not names for animals (e.g., H. Damasio, Tranel, Grabowski, et al., 2004; Tranel, 2009). Similar patterns of dissociations have been reported for retrieving conceptual knowledge for concrete entities, i.e., recognizing the meaning of things such as animals, tools, or persons (e.g., Tranel, H. Damasio, and A.R. Damasio, 1997; Warrington and McCarthy, 1987; Warrington and Shallice, 1984). An important distinction is between episodic and semantic memory (Tulving, 2002a).
Episodic memory refers to memories that are localizable in time and space, e.g., your first day in school. Semantic memory refers to “timeless and spaceless” knowledge, for instance, the alphabet or the meanings of words. The clinical meaningfulness of this distinction becomes evident in patients who manifest retrograde amnesia for episodic information that extends back weeks and even years, although their semantic memory—fund of information, language usage, and practical knowledge—may be entirely intact (Warrington and McCarthy, 1988). Another useful distinction is between effortful and automatic memory, which refers to whether learning involves active, effortful processing or passive acquisition (Balota et al., 2000; Hasher and Zacks, 1979; M.K. Johnson and Hirst, 1991). Clinically, the difference between automatic and effortful memory commonly shows up in a relatively normal immediate recall of digits
or letters that is characteristic of many brain disorders (e.g., TBI, Alzheimer’s disease, multiple sclerosis)—recall that requires little effortful processing, in contrast to reduced performance on tasks requiring effort, such as reciting a string of digits in reverse. Aging can also amplify the dissociation between effortful and automatic memory processing. Other subtypes of memory have been identified, based mainly on research in memory disordered patients. Source memory (K.J. Mitchell and Johnson, 2000; Schacter, Harbluk, and McLachlan, 1984; Shimamura, 2002) or contextual memory (J.R. Anderson and Schooler, 2000; Parkin, 2001; Schacter, 1987) refers to knowledge of where or when something was learned, i.e., the contextual information surrounding the learning experience. Prospective memory is the capacity for “remembering to remember,” and it is also an aspect of executive functioning (Baddeley, Harris, et al., 1987; Brandimonte et al., 1996, passim; Shimamura, Janowsky, and Squire, 1991). The importance of prospective memory becomes apparent in those patients with frontal lobe injuries whose memory abilities in the classical sense may be relatively intact but whose social dependency is due, at least in part, to their inability to remember to carry out previously decided upon activities at designated times or places (Sohlberg and Mateer, 2001). For example, it may not occur to them to keep appointments they have made, although when reminded or cued it becomes obvious that this information was not lost but rather was not recalled when needed. Another form of “future” memory is future episodic memory. Humans have a remarkable ability to time travel mentally; that is, we are able to revisit our past experiences through our memories, as well as imagine future experiences and situations.
Research has suggested that the structures involved in creating memories for past experiences may also be necessary for imagining and simulating future experiences (Hassabis et al., 2007). The creation of future scenarios requires drawing upon past experiences to guide one’s representation of what might happen in the future. The hippocampus may be involved in flexibly recombining past autobiographical information for use in novel future contexts (Konkel et al., 2008). Functional neuroimaging studies have corroborated conjectures that the hippocampus is involved in both creating memories for the past and creating and imagining the future (see Addis et al., 2006; Schacter and Addis, 2007).

Nondeclarative memory
The contents of nondeclarative memory have been defined as “knowledge that is expressed in performance without subjects’ phenomenological awareness
that they possess it” (Schacter, McAndrews, and Moscovitch, 1988). Two subsystems are clinically relevant: procedural memory, and priming or perceptual learning (Baddeley, 2002; Mayes, 2000b; Squire and Knowlton, 2000). Classical conditioning is also considered a form of nondeclarative memory (Squire and Knowlton, 2000). Different aspects of nondeclarative memory and learning activities are processed within neuroanatomically different systems (Fuster, 1995; Squire and Knowlton, 2000; Tranel and Damasio, 2002; pp. 49, 95). Procedural, or skill, memory includes motor and cognitive skill learning and perceptual—“how to”—learning. Priming refers to a form of cued recall in which, without the subject’s awareness, prior exposure facilitates the response. Two elements common to these different aspects of memory are their preservation in most amnesic patients (O’Connor and Verfaillie, 2002; Tranel, Damasio, H. Damasio, and Brandt, 1994) and that they are acquired or used without awareness or deliberate effort (Graf et al., 1984; Koziol and Budding, 2009; Nissen and Bullemer, 1987). That procedural memory is a distinctive system has long been apparent from observations of patients who remember nothing of ongoing events and little of their past history, yet retain abilities to walk and talk, dress and eat, etc.; i.e., their well-ingrained habits that do not depend on conscious awareness remain intact (Fuster, 1995; Gabrieli, 1998; Mayes, 2000b). Moreover, procedural memory has been demonstrated in healthy subjects taught unusual skills, such as reading inverted type (Kolers, 1976) or learning the sequence for a set of changing locations (Willingham et al., 1989).

Forgetting
Some loss of or diminished access to information—both recently acquired and stored in the past—occurs continually as normal forgetting. Normal forgetting rates differ with psychological variables such as personal meaningfulness of the material and conceptual styles, as well as with age differences and probably some developmental differences. Normal forgetting differs from amnesic conditions in that only amnesia involves the inaccessibility or nonrecording of large chunks of personal memories. The mechanism underlying normal forgetting is still unclear. What is forgotten seems to be lost from memory through disuse or interference by more recently or vividly learned information or experiences (Mayes, 1988; Squire, 1987). Perhaps most important of these processes is “autonomous decay … due to physiologic and metabolic processes with progressive erosion of synaptic connections” (G.H. Bower, 2000). Fuster (1995) pointed out that
initial “poor fixation of the memory” accounts for some instances of forgetting. This becomes most apparent in clinical conditions in which attentional processes are so impaired that passing stimuli (in conversation or as events) are barely attended to, weakly stored, and quickly forgotten (Howieson and Lezak, 2002b). Rapid forgetting is characteristic of many degenerative dementing conditions, e.g., Alzheimer’s disease (Bondi, Salmon, and Kaszniak, 2009; Dannenbaum et al., 1988; Gronholm-Nyman et al., 2010), frontotemporal dementia (Pasquier et al., 2001), and vascular dementia (Vanderploeg, Yuspeh, and Schinka, 2001). There is also the Freudian notion that nothing is really “lost” from memory and the problem is with faulty or repressed retrieval processes. This view is not scientifically tenable, although psychodynamic suppression or repression of some unwanted or unneeded memories can take place and account for certain types of “forgetting.” This “forgotten” material can be retrieved, sometimes spontaneously, sometimes with such psychological assistance as hypnosis (e.g., case report, p. 30).
Expressive Functions

Expressive functions, such as speaking, drawing or writing, manipulating, physical gestures, and facial expressions or movements, make up the sum of observable behavior. Mental activity is inferred from them.

Apraxia
Disturbances of purposeful expressive functions are known as apraxias (literally, no work) (Liepmann, [1900] 1988). The apraxias typically involve impairment of learned voluntary acts despite adequate motor innervation of capable muscles, adequate sensorimotor coordination for complex acts carried out without conscious intent (e.g., articulating isolated spontaneous words or phrases clearly when volitional speech is blocked, brushing crumbs or fiddling with objects when intentional hand movements cannot be performed), and adequate comprehension of the elements and goals of the desired activity. Given the complexity of purposeful activity, it is not surprising that apraxia can occur with disruption of pathways at different stages (initiation, positioning, coordination, and/or sequencing of motor components) in the evolution of an act or sequential action (Grafton, 2003; Heilman and Rothi, 2011). Apraxic disorders may appear when pathways have been disrupted that
connect the processing of information (e.g., instructions, knowledge of tools or acts) with centers for motor programming or when there has been a breakdown in motor integration and executive functions integral to the performance of complex learned acts (Mendoza and Foundas, 2008). Thus, when asked to show how he would use a pencil, an apraxic patient who has adequate strength and full use of his muscles may be unable to organize finger and hand movements relative to the pencil sufficiently well to manipulate it appropriately. He may even be unable to relate the instructions to hand movements although he understands the nature of the task. Apraxias tend to occur in clusters of disabilities that share a common anatomical pattern of brain damage (Mendoza and Foundas, 2008, passim). For example, apraxias involving impaired ability to perform skilled tasks on command or imitatively and to use objects appropriately and at will are commonly associated with lesions near or overlapping speech centers. They typically appear concomitantly with communication disabilities (Heilman and Rothi, 2011; Kertesz, 2005; Meador, Loring, Lee, et al., 1999). A more narrowly defined relationship between deficits in expressive speech (Broca’s aphasia) and facial apraxia further exemplifies the anatomical contiguity of brain areas specifically involved in verbal expression and facial movement (Kertesz, 2005; Kertesz and Hooper, 1982; Verstichel et Cambier, 2005), even though these disorders have been dissociated in some cases (Heilman and Rothi, 2011). Apraxia of speech, too, may appear in impaired initiation, positioning, coordination, and/or sequencing of the motor components of speech. These problems can be mistaken for or occur concurrently with defective articulation (dysarthria). Yet language (symbol formulation) deficits and apraxic phenomena often occur independently of one another (Haaland and Flaherty, 1984; Heilman and Rothi, 2011; Mendoza and Foundas, 2008). 
Constructional disorders
Constructional disorders, often classified as apraxias, are actually not apraxias in the strict sense of the concept. Rather, they are disturbances “in formulative activities such as assembling, building, drawing, in which the spatial form of the product proves to be unsuccessful without there being an apraxia of single movements” (Benton, 1969a). They often occur with lesions of the nonspeech hemisphere and are associated with defects of spatial perception (Benton, 1973, 1982), although constructional disorders and disorders involving spatial perception can manifest as relatively isolated impairments. Different constructional disorders also may appear in relative isolation. Thus, some patients will experience difficulty in performing all constructional tasks; others
who make good block constructions may consistently produce poor drawings; still others may copy drawings well but be unable to do free drawing. Certain constructional tasks, such as clock drawing, are useful bedside examination procedures as the multiple factors required for success (planning, spatial organization, motor control) make such a seemingly simple task sensitive to cognitive impairments resulting from a variety of conditions (M. Freedman, Leach, et al., 1994; Tranel, Rudrauf, et al., 2008; see pp. 594–606).

Aphasia
Aphasia (literally, no speech) can be defined as an acquired disturbance of the comprehension and formulation of verbal messages (A.R. Damasio and Damasio, 2000). Aphasia can be further specified as a defect in the two-way translation mechanism between thought processes and language; that is, between the organized manipulation of mental representations which constitutes thought, and the organized processing of verbal symbols and grammatical rules which constitutes sentences. In aphasia, either the formulation or comprehension of language, or both, will be compromised. An aphasic disorder can affect syntax (the grammatical structure of sentences), the lexicon (the dictionary of words that denote meanings), or word morphology (the combination of phonemes that results in word structure). Deficits in various aspects of language occur with different degrees of severity and in different patterns, producing a number of distinctive syndromes (or subtypes) of aphasia. Each syndrome has a defining set of neuropsychological manifestations, associated with a typical site of neural dysfunction. The designation of different syndromes of aphasia dates back to the 19th century observations of Broca, Wernicke, and other neurologists (Grodzinsky and Amunts, 2006, Historical Articles, pp. 287–394). The essence of those early classifications has stood the test of time very well. With refinements in analysis at both behavioral and neuroanatomical levels, it has become possible to identify different aphasia syndromes reliably, as seen in several typical classificatory schemes (e.g., Benson, 1993 [ten types]; A.R. Damasio and Damasio, 2000 [eight types]; Kertesz, 2001 [ten types]; Mendoza and Foundas, 2008 [six types]; Verstichel et Cambier, 2005 [nine types]) (see Table 2.1). Many investigators have taken issue with the usual typologies as having outlived their usefulness in the face of contradictory new data (e.g., A. Basso, 2003; D. Caplan, 2011; Caramazza, 1984).
While it is true that the traditional diagnostic categories for aphasia map only loosely onto behavioral and anatomical templates, they have survived because of their utility in
summarizing and transmitting information about certain general consistencies across individuals with aphasia (A.R. Damasio and Damasio, 2000; Darby and Walsh, 2005; Festa et al., 2008). However, the presentation of aphasic symptoms also varies enough from patient to patient and in individual patients over time that clear distinctions do not hold up in many cases (M.P. Alexander, 2003; Wallesch, Johannsen-Horbach, and Blanken, 2010). Thus, it is not surprising that the identification of aphasia syndromes (sets of symptoms that occur together with sufficient frequency as to “suggest the presence of a specific disease” or site of damage [Geschwind and Strub, 1975]) is complicated both by differences of opinion as to what constitutes an aphasia syndrome and by differences in the labels given those symptom constellations that have been conceptualized as syndromes.

TABLE 2.1 Most Commonly Defined Aphasic Syndromes
For syndrome descriptions, see Benson, 1993; A.R. Damasio and Damasio, 2000; Goodglass and Kaplan, 1983a; Kertesz, 2001; Tranel and Anderson, 1999; Verstichel et Cambier, 2005. *Denotes syndromes named in all the above references.
Several alternative ways of classifying the aphasias have been suggested, most focusing on different patterns of impairment and ability-sparing involving such aspects of verbal communication as speech fluency, comprehension, repetition, and naming (e.g., Table 2.1). Like other kinds of cognitive defects, language disturbances usually appear in clusters of related dysfunctions. For example, agraphia (literally, no writing) and alexia (literally, no reading) only rarely occur alone; rather, they are often found together and in association with other communication deficits (Coslett, 2011; Kertesz, 2001; Roeltgen, 2011). In contrast to alexia, which denotes reading defects in persons who could read before the onset of brain damage or disease, dyslexia typically refers to developmental disorders in otherwise competent children who do not make normal progress in reading (Coltheart, 1987; Lovett, 2003).
Developmental dysgraphia differs from agraphia on the same etiological basis (Ellis, 1982).
Thinking

Thinking may be defined as any mental operation that relates two or more bits of information explicitly (as in making an arithmetic computation) or implicitly (as in judging that this is bad, e.g., relative to that) (Fuster, 2003). A host of complex cognitive functions is subsumed under the rubric of thinking, such as computation, reasoning and judgment, concept formation, abstracting and generalizing; ordering, organizing, planning, and problem solving overlap with executive functions. The nature of the information being mentally manipulated (e.g., numbers, design concepts, words) and the operation being performed (e.g., comparing, compounding, abstracting, ordering) define the category of thinking. Thus, “verbal reasoning” comprises several operations done with words; it generally includes ordering and comparing, sometimes analyzing and synthesizing (e.g., Cosmides and Tooby, 2000). “Computation” may involve operations of ordering and compounding done with numbers (Dehaene, 2000; Fasotti, 1992), and distance judgment involves abstracting and comparing ideas of spatial extension. The concept of “higher” and “lower” mental processes originated with the ancient Greek philosophers. This concept figures in the hierarchical theories of brain functions and mental ability factors in which “higher” refers to the more complex mental operations and “lower” to the simpler ones. The degree to which a concept is abstract or concrete also determines its place on the scale.
For example, the abstract idea “a living organism” is presumed to represent a higher level of thinking than the more concrete idea “my cat Pansy”; the abstract rule “file specific topics under general topics” is likewise considered to be at a higher level of thinking than the instructions “file ‘fir’ under ‘conifer,’ file ‘conifer’ under ‘tree’.” The higher cognitive functions of abstraction, reasoning, judgment, analysis, and synthesis tend to be relatively sensitive to diffuse brain injury, even when most specific receptive, expressive, or memory functions remain essentially intact (Knopman, 2011; Mesulam, 2000a). Higher functions may also be disrupted by any one of a number of lesions in functionally discrete areas of the brain at lower levels of the hierarchy (Gitelman, 2002). Thus, in a sense, the higher cognitive functions tend to be more “fragile” than the lower,
more discrete functions. Conversely, higher cognitive abilities may remain relatively unaffected in the presence of specific receptive, expressive, and memory dysfunctions (E. Goldberg, 2009; Pincus and Tucker, 2003). Problem solving can take place at any point along the complexity and abstraction continua. Even the simplest activities of daily living demand some problem solving, e.g., inserting tooth brushing into the morning routine or determining what to do when the soap dish is empty. Problem solving involves executive functions as well as thinking since a problem first has to be identified. Patients with executive disorders can look at an empty soap dish without recognizing that it presents a problem to be solved, and yet be able to figure out what to do once the problem has been brought to their attention. Arithmetic concepts and operations are basic thinking tools that can be disrupted in specific ways by more or less localized lesions giving rise to one of at least three forms of acalculia (literally, no counting) (Denburg and Tranel, 2011; Grafman and Rickard, 1997). The three most common acalculias involve impairment of (1) appreciation and knowledge of number concepts (acalculias associated with verbal defects); (2) ability to organize and manipulate numbers spatially as in long division or multiplication of two or more numbers; or (3) ability to perform arithmetic operations (anarithmetria). Neuroimaging studies have further fractionated components of number processing showing associations with different cerebral regions (Dehaene, 2000; Gitelman, 2002). Unlike other cognitive functions, thinking cannot be tied to specific neuroanatomical systems, although the disruption of feedback, regulatory, and integrating mechanisms can affect complex cognitive activity more profoundly than other cognitive functions (Luria, 1966). “There is no … anatomy of the higher cerebral functions in the strict sense of the word … .
Thinking is regarded as a function of the entire brain that defies localization” (Gloning and Hoff, 1969). As with other cognitive functions, the quality of any complex operation will depend in part on the extent to which its sensory and motor components are intact at the central integrative (cortical) level. For example, patients with certain somatosensory defects tend to do poorly on reasoning tasks involving visuospatial concepts (Farah and Epstein, 2011; Teuber, 1959); patients whose perceptual disabilities are associated with lesions in the visual system are more likely to have difficulty solving problems calling on visual concepts (B. Milner, 1954; Harel and Tranel, 2008). Verbal defects tend to have more obvious and widespread cognitive consequences than defects in other functional systems because task instructions are frequently verbal, self-regulation and self-critiquing mechanisms are typically verbal, and ideational systems—even for nonverbal material—are usually verbal (Luria, 1973a). The emphasis on verbal mediation, however, should not be construed as obligatory, and it is abundantly clear that humans without language can still “think” (e.g., see Bermudez, 2003; Weiskrantz, 1988). One need only interact with a patient with global aphasia, or a young preverbal child, to see nonlanguage thinking demonstrated.
Mental Activity Variables

These are behavior characteristics that have to do with the efficiency of mental processes. They are intimately involved in cognitive operations but do not have a unique behavioral end product. They can be classified roughly into three categories: level of consciousness, attentional functions, and activity rate.

Consciousness
The concept of consciousness has eluded a universally acceptable definition (R. Carter, 2002; Dennett, 1991; Prigatano, 2009). Thus, it is not surprising that efforts to identify its neural substrate and neurobiology are still at the hypothesis-making stage (e.g., Koch and Crick, 2000; Metzinger, 2000, passim). Consciousness generally concerns the level at which the organism is receptive to stimulation or is awake. The words “conscious” or “consciousness” are also often used to refer to awareness of self and surroundings and in this sense can be confused with “attention.” To maintain a clear distinction between “conscious” as indicating an awake state and “conscious” as the state of being aware of something, we will refer to the latter concept as “awareness” (Merikle et al., 2001; Sperry, 1984; Weiskrantz, 1997). In the sense used in this book, specific aspects of awareness can be blotted out by brain damage, such as awareness of one’s left arm or some implicit skill memory (Farah, 2000; Schacter, McAndrews, and Moscovitch, 1988). Awareness can even be divided, with two awarenesses coexisting, as experienced by “split-brain” patients (Baynes and Gazzaniga, 2000; Kinsbourne, 1988; Loring, Meador, and Lee, 1989). Moreover, beyond the awake state and awareness, Prigatano (2010) includes “conscious awareness of another’s mental state” as the third component of a theoretical model of consciousness. Yet consciousness is also a general manifestation of brain activity that may become more or less responsive to stimuli but has no separable parts. Level of consciousness ranges over a continuum from full alertness through
drowsiness, somnolence, and stupor, to coma (Plum and Posner, 1980; Strub and Black, 2000; Trzepacz and Meagher, 2008). Even slight depressions of the alert state may significantly diminish a person’s mental efficiency, leading to tiredness, inattention, or slowness. Levels of alertness can vary in response to organismic changes in metabolism, circadian rhythms, fatigue level, or other organic states (e.g., tonic changes) (Stringer, 1996; van Zomeren and Brouwer, 1987). Brain electrophysiological responses measured by such techniques as electroencephalography and evoked potentials vary with altered levels of consciousness (Daube, 2002; Frith and Dolan, 1997). Although disturbances of consciousness may accompany a functional disorder, they usually reflect pathological conditions of the brain (Lishman, 1997; Trzepacz et al., 2002).

Attentional functions
Attention refers to capacities or processes of how the organism becomes receptive to stimuli and how it may begin processing incoming or attended-to excitation (whether internal or external) (Parasuraman, 1998). Definitions of attention vary widely as seen, for example, in Mirsky’s (1989) placement of attention within the broader category of “information processing” and Gazzaniga’s (1987) conclusion that “the attention system … functions independently of information processing activities and [not as] … an emergent property of an ongoing processing system.” Many investigators seem most comfortable with one or more of the characteristics that William James (1890) and others ascribed to attention (e.g., see Leclercq, 2002; Parasuraman, 1998; Pashler, 1998). These include two aspects, “reflex” (i.e., automatic processes) and “voluntary” (i.e., controlled processes). Other characteristics of attention are its finite resources and the capacities both for disengagement in order to shift focus and for responsivity to sensory or semantic stimulus characteristics. Another kind of difference in attentional activities is between sustained tonic attention as occurs in vigilance, and the responsive shifting of phasic attention, which orients the organism to changing stimuli. “At its core, attention includes both perceptual and inhibitory processes—when one attends to one thing, one is refraining from attending to other things” (Koziol and Budding, 2009, p. 71; see also Kinsbourne, 1993). Most investigators conceive of attention as a system in which processing occurs sequentially in a series of stages within the different brain systems involved in attention (Butter, 1987; Luck and Hillyard, 2000). This system appears to be organized in a hierarchical manner in which the earliest entries are modality specific while late-stage processing—e.g., at the level of awareness—is supramodal (Butter, 1987; Posner, 1990). Disorders of attention
may arise from lesions involving different points in this system (L.C. Robertson and Rafal, 2000; Rousseaux, Fimm, and Cantagallo, 2002). A salient characteristic of the attentional system is its limited capacity (Lavie, 2001; Pashler, 1998; Posner, 1978). Only so much processing activity can take place at a time, such that engagement of the system in processing one attentional task calling on controlled attention can interfere with a second task having similar processing requirements. Thus, one may be unable to concentrate on a radio newscast while closely following a sporting event on television yet can easily perform an automatic (in this case, highly overlearned) attention task such as driving on a familiar route while listening to the newscast. (The use of cell phones while driving, however, is an entirely different story as it creates attentional defects that can have disastrous consequences; see Caird et al., 2008; Charlton, 2009; McCartt et al., 2006.) Another key characteristic involves bottom-up processes which bias attention toward salient “attention-getting” stimuli like a fire alarm, and top-down processes determined by the observer’s current goals (C.E. Connor et al., 2004). For example, one of the many studies of the interplay between bottom-up and top-down visual attention processes found that, under certain task conditions, attention is automatically directed toward conspicuous stimuli, despite their irrelevance and possible detrimental effect on performance. In contrast, top-down attentional biases can be sufficiently strong to override stimulus-driven responses (Theeuwes, 2010). Attentional capacity varies not only between individuals but also within each person at different times and under different conditions. Depression or fatigue, for example, can temporarily reduce attentional capacity in healthy persons (Landro, Stiles, and Sletvold, 2001; P. Zimmerman and Leclercq, 2002).
An aging brain (Parasuraman and Greenwood, 1998; Van der Linden and Collette, 2002) and brain injury may irreversibly reduce attentional capacity (L.C. Robertson and Rafal, 2000; Rousseaux, Fimm, and Cantagallo, 2002). Simple immediate span of attention—how much information can be grasped at once—is a relatively effortless process that tends to be resistant to the effects of aging and many brain disorders. It may be considered a form of working memory but is an integral component of attentional functioning (Howieson and Lezak, 2002b). Four other aspects of attention are more fragile and thus often of greater clinical interest (Leclercq, 2002; Mateer, 2000; Posner, 1988; Van der Linden and Collette, 2002). (1) Focused or selective attention is probably the most studied aspect and the one people usually have in mind when talking about attention. It is the capacity to highlight the one or two important stimuli or ideas being dealt with while suppressing awareness of
competing distractions. It may also be referred to as concentration. Sohlberg and Mateer (1989) additionally distinguish between focused and selective attention by attributing the “ability to respond discretely” to specific stimuli to the focusing aspect of attention and the capacity to ward off distractions to selective attention. (2) Sustained attention, or vigilance, refers to the capacity to maintain an attentional activity over a period of time. (3) Divided attention involves the ability to respond to more than one task at a time or to multiple elements or operations within a task, as in a complex mental task. It is thus very sensitive to any condition that reduces attentional capacity. (4) Alternating attention allows for shifts in focus and tasks. While these different aspects of attention can be demonstrated by different examination techniques, even discrete damage involving a part of the attentional system can create alterations that affect more than one aspect of attention. Underlying many patients’ attentional disorders is slowed processing, which can have broad-ranging effects on attentional activities (Gunstad et al., 2006). Patients with brain disorders associated with slowed processing—certain traumatic brain injuries and multiple sclerosis, for example—often complain of “memory problems,” although memory assessment may demonstrate minimal if any diminution in their abilities to learn new or retrieve old information. On questioning, the examiner discovers that these “memory problems” typically occur when the patient is bombarded by rapidly passing stimuli. These patients miss parts of conversations (e.g., a time or place for meeting, part of a story). Many of them also report misplacing objects as an example of their “memory problem.” What frequently has happened is that on entering the house with keys or wallet in hand they are distracted by children or a spouse eager to speak to them or by loud sounds or sight of some unfinished chore.
With no recollection of what they have been told or where they set their keys, they and their families naturally interpret these lapses as a “memory problem.” Yet the problem is due to slowed processing speed which makes difficult the processing of multiple simultaneous stimuli. Given an explanation of the true nature of these lapses, patients and families can alter ineffective methods of exchanging messages and conducting activities with beneficial effects on the patient’s “memory.” (Howieson and Lezak, 2002b)
Impaired attention and concentration are among the most common mental problems associated with brain damage (Leclercq, Deloche, and Rousseaux, 2002; Lezak, 1978b, 1989), and also with psychiatric disease (R.A. Cohen et al., 2008). When attentional deficits occur, all the cognitive functions may be intact and the person may even be capable of some high-level performances, yet overall cognitive productivity suffers.

Activity rate
Activity rate refers to the speed at which mental activities are performed and to speed of motor responses. Behavioral slowing is a common characteristic of
both aging and brain damage. Slowing of mental activity shows up most clearly in delayed reaction times and in longer than average total performance times in the absence of a specific motor disability. It can be inferred from patterns of mental inefficiency, such as reduced auditory span plus diminished performance accuracy plus poor concentration, although each of these problems can occur on some basis other than generalized mental slowing. Slowed processing speed appears to contribute significantly to the benign memory lapses of elderly persons (Luszcz and Bryan, 1999; D.C. Park et al., 1996; Salthouse, 1991a).

EXECUTIVE FUNCTIONS

The executive functions consist of those capacities that enable a person to engage successfully in independent, purposive, self-directed, and self-serving behavior. They differ from cognitive functions in a number of ways. Questions about executive functions ask how or whether a person goes about doing something (e.g., Will you do it and, if so, how and when?); questions about cognitive functions are generally phrased in terms of what or how much (e.g., How much do you know? What can you do?). So long as the executive functions are intact, a person can sustain considerable cognitive loss and still continue to be independent, constructively self-serving, and productive. When executive functions are impaired, even if only partially, the individual may no longer be capable of satisfactory self-care, of performing remunerative or useful work independently, or of maintaining normal social relationships regardless of how well preserved the cognitive capacities are—or how high are the person’s scores on tests of skills, knowledge, and abilities. Cognitive deficits usually involve specific functions or functional areas; impairments in executive functions tend to show up globally, affecting all aspects of behavior.
Moreover, executive disorders can affect cognitive functioning directly in compromised strategies for approaching, planning, or carrying out cognitive tasks, or in defective monitoring of the performance (E. Goldberg, 2009; Lezak, 1982a; Tranel, Hathaway-Nepple, and Anderson, 2007). A young woman who survived a severe motor vehicle accident displayed a complete lack of motivation with inability to initiate almost all behaviors including eating and drinking, leisure or housework activities, social interactions, sewing (which she had once done well), or reading (which she can still do with comprehension). Although new learning ability is virtually nonexistent and her constructional abilities are significantly impaired, her cognitive losses are relatively circumscribed in that verbal skills and much of her background knowledge and capacity to retrieve old information—both semantic and episodic—are fairly intact. Yet she performs these cognitive tasks—and any other activities—only when expressly directed or stimulated by others, and then external supervision must be maintained for her to complete what
she began.
Many of the behavior problems arising from impaired executive functions may be apparent to casual or naive observers, but they may not appreciate their importance with respect to the patient's overall behavioral competence. For experienced clinicians, these problems are symptoms or hallmarks of significant brain injury or dysfunction that may be predictive of more social and interpersonal problems ahead (Lezak, 1996). Among them are a defective capacity for self-control or self-direction such as emotional lability (see pp. 39, 387) or flattening, a heightened tendency to irritability and excitability, impulsivity, erratic carelessness, rigidity, and difficulty in making shifts in attention and in ongoing behavior. Other defects in executive functions, however, are not so obvious. The problems they occasion may be missed or not recognized as "neuropsychological" by examiners who see patients only in the well-structured inpatient and clinic settings in which psychiatry and neurology patients are commonly observed (Lezak, 1982a). Perhaps the most serious of these problems, from a psychosocial standpoint, are impaired capacity to initiate activity, decreased or absent motivation (anergia), and defects in planning and carrying out the activity sequences that make up goal-directed behaviors (Darby and Walsh, 2005; Lezak, 1989; Luria, 1966). Patients without significant impairment of receptive or expressive functions who suffer primarily from these kinds of executive control defects are often mistakenly judged to be malingering, lazy or spoiled, psychiatrically disturbed, or—if this kind of defect appears following a legally compensable brain injury—exhibiting a "compensation neurosis" that some interested persons may believe will disappear when the patient's legal claim has been settled.
The crippling defects of executive functions are vividly demonstrated by the case of a hand surgeon who had had a hypoxic (hypoxia: insufficient oxygen) event during a cardiac arrest that occurred in the course of minor facial surgery. His cognitive abilities, for the most part, were not greatly affected; but initiating, self-correcting, and self-regulating behaviors were severely compromised. He also displayed some difficulty with new learning—not so much that he lost track of the date or could not follow sporting events from week to week but enough to render his memory, particularly prospective memory, unreliable for most practical purposes. One year after the anoxic episode, the patient's scores on Wechsler Intelligence Scale tests ranged from high average (75th percentile) to very superior (99th percentile), except on Digit Symbol, performed without error but at a rate of speed that placed this performance low in the average score range. His Trail Making Test speed was within normal limits and he demonstrated good verbal fluency and visual discrimination abilities—all in keeping with his highest educational and professional achievements. On the basis of a clinical psychologist's conclusion that these high test scores indicated "no clear evidence of organicity" and a psychiatric diagnosis of "traumatic depressive neurosis," the patient's insurance company denied his claim (pressed by his guardian brother) for disability payments. Retesting six years later, again at the request of the
brother, produced the same pattern of scores. The patient’s exceptionally good test performances belied his actual behavioral capacity. Seven years after the hypoxic episode, this 45-year-old man who had had a successful private practice was working for his brother as a delivery truck driver. This youthful-looking, nicely groomed man explained, on questioning, that his niece bought all of his clothing and even selected his wardrobe for important occasions such as this examination. He knew neither where nor with what she bought his clothes, and he did not seem to appreciate that this ignorance was unusual. He was well-mannered and pleasantly responsive to questions but volunteered nothing spontaneously and made no inquiries in an hour-and-a-half interview. His matter-of-fact, humorless manner of speaking remained unchanged regardless of the topic. When asked, the patient reported that his practice had been sold but he did not know to whom, for how much, or who had the money. This once briefly married man who had enjoyed years of affluent independence had no questions or complaints about living in his brother’s home. He had no idea how much his room and board cost or where the money came from for his support, nor did he exhibit any curiosity or interest in this topic. He said he liked doing deliveries for his brother because “I get to talk to people.” He had enjoyed surgery and said he would like to return to it but thought that he was too slow now. When asked what plans he had, his reply was, “None.” His sister-in-law reported that it took several years of rigorous rule-setting to get the patient to bathe and change his underclothes each morning. He still changes his outer clothing only when instructed. He eats when hungry without planning or accommodating himself to the family’s plans. If left home alone for a day or so he may not eat at all, although he makes coffee for himself. In seven years he has not brought home or asked for any food, yet he enjoys his meals. 
He spends most of his leisure time in front of the TV. Though once an active sports enthusiast, he has made no plans to hunt or fish in seven years, but he takes pleasure in these sports when accompanying relatives. Since he runs his own business, the patient’s brother is able to keep the patient employed. The brother explained that he can give the patient only routine assignments that require no judgment, and these only one at a time. As the patient finishes each assignment, he calls into his brother’s office for the next one. Although he knows that his brother is his guardian, the patient has never questioned or complained about his legal status. When the brother reinstituted suit for the patient’s disability insurance, the company again denied the claim in the belief that the high test scores showed he was capable of returning to his profession. It was only when the insurance adjustor was reminded of the inappropriateness of the patient’s lifestyle and the unlikelihood that an experienced, competent surgeon would contentedly remain a legal dependent in his brother’s household for seven years that the adjustor could appreciate the psychological devastation the surgeon had suffered.
PERSONALITY/EMOTIONALITY VARIABLES

Changes in emotion and personality are common with brain disorders and after brain injury (Gainotti, 2003; Lezak, 1978a; Lishman, 1997; see Chapter 7, passim). Some changes tend to occur as fairly characteristic behavior patterns that relate to specific anatomical sites (e.g., S.W. Anderson, Barrash, et al., 2006; R.J. Davidson and Irwin, 2002). Among the most common direct effects of brain injury on personality are emotional dulling, disinhibition, diminution of anxiety with associated emotional blandness or mild euphoria, and reduced social sensitivity (Barrash, Tranel, and Anderson, 2000).
Heightened anxiety, depressed mood, and hypersensitivity in interpersonal interactions may also occur (Blumer and Benson, 1975; D.J. Stein and Rauch, 2008; Yudofsky and Hales, 2008, passim). Some of the emotional and personality changes that follow brain injury seem to be not so much a direct product of the illness but develop as reactions to experiences of loss, chronic frustration, and radical changes in lifestyle. Consequently, depression is probably the most common single emotional characteristic of brain damaged patients generally, with pervasive anxiety following closely behind (J.F. Jackson, 1988; Lezak, 1978b). When mental inefficiency (i.e., attentional deficits typically associated with slowed processing and diffuse damage) is a prominent feature, obsessive-compulsive traits frequently evolve (Lezak, 1989; D.J. Stein and Rauch, 2008). Some other common behavior problems of brain injured people are irritability, restlessness, low frustration tolerance, and apathy (Blonder et al., 2011). It is important to recognize that the personality changes, emotional distress, and behavior problems of brain damaged patients are usually the product of the complex interactions involving their neurological disabilities, present social demands, previously established behavior patterns and personality characteristics, and ongoing reactions to all of these (Gainotti, 1993). When brain injury is mild, personality and the capacity for self-awareness usually remain fairly intact so that emotional and characterological alterations for the most part will be reactive and adaptive (compensatory) to the patients' altered experiences of themselves. As severity increases, so do organic contributions to personality and emotional changes. With severe damage, little may remain of the premorbid personality or of reactive capabilities and responses.
Some brain injured patients display emotional instability characterized by rapid, often exaggerated affective swings, a condition called emotional lability. Three kinds of lability associated with brain damage can be distinguished.

1. The emotional ups and downs of some labile patients result from weakened executive control and lowered frustration tolerance. This is often most pronounced in the acute stages of their illness and when they are fatigued or stressed. Their emotional expression and their feelings are congruent, and their sensitivity and capacity for emotional response are intact. However, emotional reactions, particularly under conditions of stress or fatigue, will be stronger and may last longer than was usual for them premorbidly.

2. A second group of labile patients have lost emotional sensitivity and the capacity for modulating emotionally charged behavior. They tend to overreact
emotionally to whatever external stimulation impinges on them. Their emotional reactivity can generally be brought out in an interview by abruptly changing the subject from a pleasant topic to an unpleasant one and back again, as these patients will beam or cloud up with each topic change. When left alone and physically comfortable, they may appear emotionless.

3. A third group of labile patients differs from the others in that their feelings are generally appropriate, but brief episodes of strong affective expression—usually tearful crying, sometimes laughter—can be triggered by even quite mild stimulation. This has sometimes been termed pseudobulbar state (Blonder et al., 2011; Lieberman and Benson, 1977; R.G. Robinson and Starkstein, 2002). It results from structural lesions that involve the frontal cortex and connecting pathways to lower brain structures. The feelings of patients with this condition are frequently not congruent with their appearance, and they generally can report the discrepancy. Because they tend to cry with every emotionally arousing event, even happy or exciting ones, family members and visitors see them crying much of the time and often misinterpret the tears as evidence of depression. Sometimes the bewildered patient comes to the same mistaken conclusion and then really does become depressed. These patients can be identified by the frequency, intensity, and irrelevancy of their tears or guffaws; the rapidity with which the emotional reaction subsides; and the dissociation between their appearance and their stated feelings.

Although most brain injured persons tend to undergo adverse emotional changes, for a few, brain damage seems to make life more pleasant. This can be most striking in those emotionally constricted, anxious, overly responsible people who become more easygoing and relaxed as a result of a pathological brain condition.
A clinical psychologist wrote about himself several years after sustaining significant brain damage marked by almost a week in coma and initial right-sided paralysis:

People close to me tell me that I am easier to live with and work with, now that I am not the highly self-controlled person that I used to be. My emotions are more openly displayed and more accessible, partially due to the brain damage which precludes any storing up of emotion, and partially due to the maturational aspects of this whole life-threatening experience… . Furthermore, my blood pressure is amazingly low. My one-track mind seems to help me to take each day as it comes without excessive worry and to enjoy the simple things of life in a way that I never did before. (Linge, 1980)
However, their families may suffer instead, as illustrated in the following example:
A young Vietnam War veteran lost the entire right frontal portion of his brain in a land mine explosion. His mother and wife described him as having been a quietly pleasant, conscientious, and diligent sawmill worker before entering the service. When he returned home, all of his speech functions and most other cognitive abilities were intact. He was completely free of anxiety and thus without a worry in the world. He had also become very easygoing, self-indulgent, and lacking in both drive and sensitivity to others. His wife was unable to get him to share her concerns when the baby had a fever or the rent was due. Not only did she have to handle all the finances, carry all the family and home responsibilities, and do all the planning, but she also had to see that her husband went to work on time and that he did not drink up his paycheck or spend it in a shopping spree before getting home on Friday night. For several years his wife tried to cope with the burdens of a carefree husband. She finally left him after he had ceased working and had begun a pattern of monthly drinking binges that left little of his considerable compensation checks.
One significant and relatively common concomitant of brain injury is an altered sexual drive (Foley and Sanders, 1997a,b; Wiseman and Fowler, 2002; Zasler, 1993). A married person who has settled into a comfortable sexual activity pattern of intercourse two or three times a week may begin demanding sex two and three times a day from the bewildered spouse. More often, the patient loses sexual interest or capability (L.M. Binder, Howieson, and Coull, 1987; Forrest, 2008; Lechtenberg, 1999). Moreover, some brain damaged men are unable to achieve or sustain an erection, or they may have ejaculatory problems secondary to nervous tissue damage (D.N. Allen and Goreczny, 1995; Foley and Sanders, 1997b). This can leave the partner feeling unsatisfied and unloved, adding to other tensions and worries associated with cognitive and personality changes in the patient (Lezak, 1978a; Zasler, 1993). Patients who become crude, boorish, or childlike as a result of brain damage no longer are welcome bed partners and may be bewildered and upset when rejected by their once affectionate mates. Younger persons who sustain brain damage before experiencing an adult sexual relationship may not be able to acquire acceptable behavior and appropriate attitudes (S.W. Anderson, Bechara, et al., 1999). Adults who were normally functioning when single often have difficulty finding and keeping partners because of cognitive limitations or social incompetence resulting from their neurological impairments. For all these reasons, the sexual functioning of many brain damaged persons will be thwarted. Although some sexual problems diminish in time, for many patients they seriously complicate the problems of readjusting to new limitations and handicaps by adding another strange set of frustrations, impulses, and reactions.
3 The Behavioral Geography of the Brain

So much is now known about the brain—and yet so little, especially how cognitive processes emerge from brain function. Current technology has visualized the structure of the brain so well that even minute details of cell structure can be seen with electron microscopy and other techniques. For example, structural changes in the neuron associated with learning can be microscopically identified and living cells imaged (Bhatt et al., 2009; Nagerl et al., 2008). Contemporary neuroimaging permits the visualization and analysis of the major pathways of the brain (Schmahmann and Pandya, 2006); these are readily imaged in the living individual (Pugliese et al., 2009). Now neuroimaging techniques can identify which brain areas are involved in a particular task and how brain regions come "on line" during a mental task. This beginning understanding of the complexities of brain activation lays the foundation for a neuroscience-based revision of the big questions self-conscious humans have asked for centuries: What is the neural (anatomic, physiologic) nature of consciousness (e.g., R. Carter, 2002; Crick and Koch, 2005; Dehaene, 2002)? What are the relative contributions and interactions of genotype and experience (Huttenlocher, 2002; Pennington, 2002; van Haren et al., 2008)? What are the neuroanatomic bases of "self" (S.C. Johnson, Ries, et al., 2007; Legrand and Ruby, 2009; Rilling, 2008)? New technology has supported many traditional beliefs about the brain and challenged others. The long-held belief that neurons do not proliferate after early stages of development is incorrect. It is now known that new neurons are produced in some brain regions of adults in a number of mammalian species, including human, perhaps playing a role in brain injury repair, new learning, and maintenance of healthy neural functioning (Basak and Taylor, 2009).
Adult neurogenesis has been identified in the hippocampus and olfactory bulb in mammalian brains—including human—and implicated in other limbic regions, in the neocortex, striatum, and substantia nigra (E. Gould, 2007). Neurogenesis in the hippocampus is thought to be especially critical for maintaining normal cognition and emotional well-being (Alleva and Francia, 2009; Elder et al., 2006). The importance of these findings for neuropsychology, human aging, and disease is just beginning to emerge.
In addition, the roles of many brain regions are far more complex and functionally interconnected than previously thought. The basal ganglia and cerebellum, once believed to be background motor control centers, are increasingly appreciated for their influences on cognition and psychiatric disorders (Baillieux et al., 2008; Dow, 1988; Grahn et al., 2009; Manto, 2008). Even the motor cortex appears to play an active role in processing abstract learned information (A.F. Carpenter et al., 1999). How single neurons participate in unified neural function can be seen within all neural systems including those once thought to be dedicated to a single function, like motor ability (C. Koch and Segev, 2000). The importance of subtle aberrations coming from a few neurons disrupting larger networks is central to the model of cerebral dysfunction offered by Izhikevich and Edelman (2008) and reinforces the principle that strategically occurring lesions or abnormalities, albeit small, may nonetheless influence neuropsychological function (Geschwind, 1965). This chapter presents a brief and necessarily superficial sketch of some of the structural arrangements in the human central nervous system that are intimately connected with behavioral function. This sketch is followed by a review of anatomical and functional interrelationships that appear with enough regularity to have psychologically meaningful predictive value (P. Brodal, 1992). More detailed information on neuroanatomy and its behavioral correlates is available in such standard references as Afifi and Bergman (1998), Hendelman (2000), and Nolte (1999). A.R. Damasio and Tranel (1991), Mesulam (2000c), and Harel and Tranel (2008) provide excellent reviews of brain-behavior relationships.
Reviews of the brain correlates for a variety of neuropsychological disorders can be found in Feinberg and Farah (2003a), Heilman and Valenstein (2011), Kolb and Whishaw (2009), Mendoza and Foundas (2007), Rizzo and Eslinger (2004), and Yudofsky and Hales (2008). Physiological and biochemical events in behavioral expression add another important dimension to neuropsychological phenomena. Most work in these areas is beyond the scope of this book. Readers wishing to learn how neural systems, biochemistry, and neurophysiology relate to behavioral phenomena can consult M.F.F. Bear et al. (2006), Cacioppo and Bernston (2005), and Kandel et al. (2010).

BRAIN PATHOLOGY AND PSYCHOLOGICAL FUNCTION

There is no localizable single store for the meaning of a given entity or event within a cortical region. Rather, meaning is achieved by widespread
multiregional activation of fragmentary records pertinent to a given stimulus and according to a combinatorial code specific or partially specific to the entity … the meaning of an entity, in this sense, is not stored anywhere in the brain in permanent fashion; instead it is re-created anew for every instantiation.

Daniel Tranel and Antonio R. Damasio, 2000
The relationship between brain and behavior is exceedingly intricate and frequently puzzling. Our understanding of this fundamental relationship is still very limited, but the broad outlines and many details of the correlations between brain and behavior have been sufficiently well explained to be clinically useful. Any given behavior is the product of a myriad of complex neurophysiological and biochemical interactions involving the whole brain. Complex acts, even as fundamental as swatting a fly or reading this page, are the products of countless neural interactions involving many, often far-flung sites in the neural network; their neuroanatomical correlates are not confined to any local area of the brain (Fuster, 2003; Luria, 1966; Sherrington, 1955). Yet discrete psychological activities such as the perception of a pure tone or the movement of a finger can be disrupted by lesions (localized abnormal tissue changes) involving approximately the same anatomical structures in most human brains. Additionally, one focal lesion may affect many functions when the damaged neural structure is a pathway, nucleus, or region that is central in regulating or integrating a particular function or functions. These disruptions can produce a neurobehavioral syndrome, a cluster of deficits that tend to occur together with some regularity (Benton, 1977b [1985]; H. Damasio and Damasio, 1989; E. Goldberg, 1995). Disruptions of complex behavior by brain lesions occur with such consistent anatomical regularity that inability to understand speech, to recall recent events, or to copy a design, for example, can often be predicted when the site of the lesion is known (Benton, 1981 [1985]; Filley, 1995, 2008; Geschwind, 1979).
Knowledge of the localization of dysfunction, that is, the correlation between damaged neuroanatomical structures and behavioral functions, also enables neuropsychologists and neurologists to make educated guesses about the site of a lesion on the basis of abnormal patterns of behavior. However, similar lesions may have quite dissimilar behavioral outcomes (Bigler, 2001b). Markowitsch (1984) described the limits of prediction: "[a] straightforward correlation between a particular brain lesion and observable functional deficits is … unlikely … as a lesioned structure is known not to act on its own, but depends in its function on a network of input and output channels, and as the equilibrium of the brain will be influenced in many and up
to now largely unpredictable ways by even a restricted lesion" (p. 40). Moreover, localization of dysfunction cannot imply a "pushbutton" relationship between local brain sites and specific behaviors as the brain's processing functions take place at multiple levels (e.g., encoding a single modality of a percept, energizing memory search, recognition, attribution of meaning) within complex, integrated, interactive, and often widely distributed systems. Thus lesions at many different brain sites may alter or extinguish a single complex act (Luria, 1973b; Nichelli, Grafman, et al., 1994; Sergent, 1988), as can lesions interrupting the neural pathways connecting areas of the brain involved in the act (Geschwind, 1965; Tranel and Damasio, 2000). E. Miller (1972) reminded us:

It is tempting to conclude that if by removing a particular part of the brain we can produce a deficit in behavior, e.g., a difficulty in verbal learning following removal of the left temporal lobe in man, then that part of the brain must be responsible for the impaired function… . [T]his conclusion does not necessarily follow from the evidence as can be seen from the following analogy. If we were to remove the fuel tank from a car we would not be surprised to find that the car was incapable of moving itself forward. Nevertheless, it would be very misleading to infer that the function of the fuel tank is to propel the car (pp. 19–20).
THE CELLULAR SUBSTRATE

The nervous system makes behavior possible. It is involved in the reception, processing, storage, and transmission of information within the organism and in the organism's exchanges with the outside world. It is a dynamic system in that its activity modifies its performance, its internal relationships, and its capacity to mediate stimuli from the outside. The basic cell of the brain that gives rise to its complexity and ability to regulate behavior is the neuron. An overly simplified schematic of a neuron is shown in Figure 3.1. The neuron also has a supporting cast of cells, the glial cells. Neurons conduct electrochemical impulses that transmit information in the brain and throughout the peripheral and central nervous system (CNS). A primary function of the neuron is to provide a network of connectivity between neurons and different regions of the brain. Brain connectivity is key to brain functioning. One direct estimate suggests that the number of neurons in the neocortex alone is approximately 20 billion (Pakkenberg and Gundersen, 1997). Estimates of all other structures in the CNS double or triple the total number of neurons. At birth the full complement of neurons appears to be present (Larsen et al., 2006), indicating an astonishing growth pattern from conception to birth. At peak periods of development tens of thousands to hundreds of thousands of cells are created each minute to reach the ultimate goal of billions of brain cells (Levitt,
2003; A.K. McAllister et al., 2008). Glial cells are supporting brain cells which come in several types. While they do not transmit information as neurons do (Carnevale and Hines, 2006; Kandel et al., 2010; Levitan and Kaczmarek, 2002), glial cells, particularly astrocytes, likely facilitate neural transmission and probably play a more direct role in synaptic functioning and neural signaling than previously thought (Araque and Navarrete, 2010; Fellin, 2009). Glial cells not only serve as structural supports, but they also appear to have nutritional and scavenger functions and to release growth factors. Astrocytes are a major type of glial cell that have an additional role as a component of the blood-brain barrier, which prevents some substances in the blood from entering the CNS (P.A. Stewart, 1997). Another major type of glial cell is the oligodendroglia, which form myelin, the white fatty substance of axonal sheaths (see Fig. 3.1). Glia are substantially more numerous than neurons by a factor of two to three (Pelvig et al., 2008). Thus the total number of individual cells within the CNS may be in excess of a hundred billion. Neurons vary in shape and function (Carnevale and Hines, 2006; Levitan and Kaczmarek, 2002). Most have a well-defined nucleus within a cell body as seen in a photomicrograph taken of human thalamic neurons (blue insert in Fig. 3.1); they have multiple branching dendrites that receive stimulation from other neurons, and an axon that carries the electrical nerve impulses (action potentials). Neural cells are very small, their size measured in microns (1/1,000 of a mm); the inset photomicrograph in Figure 3.1 shows the cell body to be less than 10 microns. The typical length and diameter of a neuron cell body is approximately 30 microns (Carnevale and Hines, 2006).
Neurons have only one initial segment, the axon, which may branch to produce collateral segments; these can be very numerous in some neurons (Kandel et al., 2010; Ropper and Samuels, 2009). Axons vary in length with the average estimated at approximately 1,000 microns. Coursing fasciculi (impulse-transmitting axonal bundles) are composed of axons ranging from 10 to 15 centimeters in length to in excess of 30 centimeters (e.g., motor cortex to a synapse in the spine), depending on the size of the individual. Long axons have myelin sheaths that provide insulation for high-speed neural conduction. The average axon diameter varies only from approximately one to a few microns. Neurons communicate via the synapse.
FIGURE 3.1 Schematic of a neuron. Photomicrograph from Bigler and Maxwell (2011), used with permission from Springer Publishing.
The typical dendrite, which is the receptive process of the neuron that interfaces with other neurons, is also about the same diameter as an axon (see Fig. 3.1), but the typical dendritic field ranges from 200 to 600 microns. The surface of the dendrite may change in response to neural activity forming what is referred to as a spine; spine development is thought to be particularly important in the formation of new memories and neural plasticity (Kasai et al., 2010; Shepherd and Koch, 1998). At the tips of an axon are synaptic vesicles that produce and house neurotransmitters which, when released, interface with dendrites on the postsynaptic neuron through electrochemical reactions. The many and differing interactions among excitatory and inhibitory pathways and neurotransmitters make the entire process of interneural communication extremely complex (Connors and Long, 2004; D.E. Feldman, 2009). Given the brain's primary activity of neural transmission and connectivity and the billions of neural cells, a phenomenal level of complexity is present in even the simplest cognitive, motor, or sensory task. Neural connectivity and effective neural transmission become even more awesome when one considers the estimated rate of ionic changes that have to occur via the cell membrane for a neural event to be passed on to the next cell in line. During neural conduction a shift in ions through the cell membrane occurs via ion channels (see Fig. 3.1). When an axon is propagating an action potential, an estimated 100 million ions pass through a single channel in one second (A.K. McAllister et al., 2008). In addition, a single neuron may have direct synaptic contact with thousands of other neurons and thereby be involved in the almost unfathomable multiplicity and complexity of functioning synapses underlying
behavior and cognition at any given moment. This also means that a few strategic CNS cells misfiring and/or misconnecting can produce significant changes in brain function (Izhikevich and Edelman, 2008). The postsynaptic cell is constantly computing its excitatory and inhibitory inputs. It either maintains an excitatory or inhibitory valence or fires a neural impulse in the form of an action potential. Stimulation applied to a neural pathway heightens that pathway's sensitivity and increases the efficacy with which neuronal excitation may be transmitted through its synapses (C. Koch and Segev, 2000; A.K. McAllister et al., 2008; Toni et al., 1999). Such alterations in spatial and temporal excitation patterns in the brain's circuitry can add considerably more to its dynamic potential. Long-lasting synaptic modifications are called long-term potentiation and long-term depression; these are critical neurophysiological features of memory and learning (Fuster, 1995; Korn et al., 1992; G. Lynch, 2000). Together these mechanisms of synaptic modification provide the neural potential for the variability and flexibility of human behavior (Carnevale and Hines, 2006; Levitan and Kaczmarek, 2002; E.T. Rolls, 1998). Neurons do not touch one another at synapses (M.F.F. Bear et al., 2006; Cacioppo and Bernston, 2005; Kandel et al., 2010). Rather, communication between neurons is made primarily through the medium of neurotransmitters—chemical agents generated within and secreted by stimulated neurons. These substances bridge synaptic gaps between neurons to activate receptors within the postsynaptic neurons (E.S. Levine and Black, 2000; D.A. McCormick, 1998; P.G. Nelson and Davenport, 1999). The identification of more than 100 neurotransmitters (National Advisory Mental Health Council, 1989) gives some idea of the possible range of selective activation between neurons.
Each neurotransmitter can bind to and thus activate only those receptor sites with the corresponding molecular conformation, but a single neuron may produce and release more than one of these chemical messengers (Carnevale and Hines, 2006; Hokfelt et al., 1984; Levitan and Kaczmarek, 2002). The key transmitters implicated in neurologic and psychiatric diseases are acetylcholine, dopamine, norepinephrine, serotonin, glutamate, and gamma-aminobutyric acid (GABA) (Alagbe et al., 2008; A.K. McAllister et al., 2008; Wilcox and Gonzales, 1995). When a neural cell is injured or diseased, it may stop functioning and the circuits to which it contributed will then be disrupted. Some circuits may eventually reactivate as damaged cells resume some functioning or alternative patterns involving different cell populations take over (see p. 356 regarding brain injury and neuroplasticity). When a circuit loses a sufficiently great
number of neurons, the broken circuit can neither be reactivated nor replaced. As it is now known that neurogenesis does occur in some areas of the brain, investigations of its role in response to injury are ongoing (T.C. Burns et al., 2009; A. Rolls et al., 2009). Probably most postinjury improvement comes from adaptation and the use and/or development of alternative pathways and synaptic modifications within existing pathways participating in functions for which they were not primarily developed (M.V. Johnston, 2009). During development some neurons initiate apoptosis, or programmed cell death, which enhances the organization and efficiency of specific neuronal pathways in a process called pruning (Rakic, 2000; Yuan and Yankner, 2000). While apoptosis occurs normally in the development of the nervous system and—over the lifespan—normal age-related apoptotic cellular changes occur, some nervous system diseases may result from apoptotic processes gone awry or other forms of cell death which are normally prevented by neurotrophic factors (Leist and Nicotera, 1997; A.K. McAllister et al., 2008; Raff, 1998).

THE STRUCTURE OF THE BRAIN

The brain is an intricately patterned complex of small and delicate structures that form elaborate networks with identifiable anatomical landmarks. In embryological development, three major anatomical divisions of the brain succeed one another: the hindbrain (pons, medulla, and cerebellum), the midbrain, and the forebrain (divided into the telencephalon and diencephalon) (Fig. 3.2a) (for detailed graphic displays of brain development and anatomy, see Hendelman, 2006; Leichnetz, 2006; Montemurro and Bruni, 2009; Netter, 1983). Structurally, the lowest brain centers are the most simply organized and mediate simpler, more primitive functions. The cerebral hemispheres mediate the highest levels of behavioral and cognitive function.
A lateral view of the gross surface anatomy of the brain is shown in Figure 3.3, in which a postmortem brain on the left is compared to a similar view generated from an MRI of a living individual on the right. Note how closely the gross anatomy of the living brain as depicted by MRI matches the postmortem specimen.
FIGURE 3.2 (a) Axial MRI of anatomical divisions of the brain. (b) Coronal MRI of anatomical divisions of the brain. (c) Sagittal MRI of anatomical divisions of the brain.
FIGURE 3.3 Lateral surface anatomy postmortem (left) with MRI of living brain (right).
The sections of the brain in different planes (Fig. 3.2) are from the same living individual. The MRI depictions are sliced in the traditional planes: axial (Fig. 3.2a), coronal (Fig. 3.2b), and sagittal (Fig. 3.2c). As shown in Figure 3.4, within the brain are four fluid-filled pouches, or ventricles, through which cerebrospinal fluid (CSF) flows internally. The
surface of the brain is also bathed in CSF circulating in the space between the arachnoid membrane (the fine-textured inner lining of the brain) and the undersurface of the dura mater (the leathery outer lining) (Blumenfeld, 2010; see also Netter, 1983). Together these membranes are called the meninges. The most prominent of the pouches, the lateral ventricles, are a pair of horn-shaped reservoirs situated inside the cerebral hemispheres, running from front to back and curving around and down into the temporal lobe. The ventricles offer a number of landmark regions that are often examined in viewing the integrity of such structures as the caudate nucleus, which lies just lateral to the anterior horn of the lateral ventricle; the amygdala, located just in front of the tip of the temporal horn; and the hippocampus, in the floor of the temporal horn. The third ventricle is situated in the midline within the diencephalon (“between-brain”; see Figs. 3.2 and 3.4), dorsally (i.e., back of body) connected to the two lateral ventricles via a foramen (opening), with ventral (i.e., front of body) connections via the cerebral aqueduct with the fourth ventricle. These connections permit CSF to flow freely throughout each chamber. The fourth ventricle lies within the brain stem. Cerebrospinal fluid is produced within the choroid plexuses, specialized structures located within the ventricles but mostly within the lateral ventricles. CSF is pressurized within the ventricles, serving as a shock absorber and helping to maintain the shape of the soft nervous tissue of the brain by creating an outward pressure gradient that is held in check by the mass of the brain.
FIGURE 3.4 Ventricle anatomy. (1) Anterior horn, (2) body, (3) atria, (4) posterior horn, and (5) temporal horn of the lateral ventricle, (6) III ventricle, (7) aqueduct, and (8) IV ventricle.
Blockage somewhere within the ventricular system, often in one of the foramina or the aqueduct, affects CSF flow, producing obstructive hydrocephalus; no obvious CSF flow obstruction is identified in normal pressure hydrocephalus (NPH), but the ventricles are nonetheless dilated (see pp. 303–304). In disorders in which brain substance deteriorates, such as in degenerative diseases, the ventricles enlarge to fill the void. Since ventricular size can be an important indicator of the brain’s status, it is one of the common features examined in neuroimaging studies (see Figs. 7.12, 7.21, and 7.22, pp. 198, 330, and 331). Almost as intricate and detailed as neural tissue is the incredibly elaborate network of blood vessels (vasculature) that maintains a rich supply of nutrients to brain tissue, which is very oxygen and glucose dependent (Festa and Lazar, 2009). Figure 3.5 shows the exquisite detail at the capillary level of the vasculature. These blood vessels have been impregnated with acrylic casting agent and then viewed with an electron microscope. The microvasculature interfaces with individual neurons and glial cells, feeding neurons through capillaries. When vascular pathology occurs its effects are typically associated with one or a combination of the major blood vessels of the brain (Sokoloff, 1997; Tatu et al., 2001). However, it is in the intimate interaction between
individual capillaries and neurons that neural function or dysfunction occurs. How blood flow responds as the brain engages in a particular function—the basis of functional neuroimaging—depends on local autoregulation. The interface of oxygen- and glucose-laden blood with neural cells takes place at this microscopic level. The capillaries that deliver blood to brain cells are not much bigger than the neural cells, creating a very delicate microenvironment between blood and brain cells (see Fig. 3.5). This is a major reason why degenerative, neoplastic, and traumatic disorders affect not only neural tissue but the vascular system as well. It is the interplay between vascular damage and brain damage that gives rise to neuropsychological impairments.
FIGURE 3.5 Scanning electron micrograph showing an overview of corrosion casts from the occipital cortex in a control adult postmortem examination: (1) pial vessels, (2) long cortical artery, (3) middle
cortical artery, (4) superficial capillary zone, (5) middle capillary zone, and (6) deep capillary zone. Scale bar = 0.86 mm. From Rodriguez-Baeza et al. (2003), reproduced with permission from Wiley-Liss.
The three major blood vessels of the brain have distinctly different distributions (see Fig. 3.6). The anterior and middle cerebral arteries branch from the internal carotid artery. The anterior division supplies the anterior medial (toward the midline) frontal lobe extending posteriorly to all of the medial parietal lobe. The middle cerebral artery feeds the lateral temporal, parietal, and posterior frontal lobes and sends branches deep into subcortical regions. The posterior circulation originates from the vertebral arteries that ascend along the borders of the spinal column from the heart. They provide blood to the brain stem and cerebellum. The vertebral arteries join to form the basilar artery which divides into the posterior cerebral arteries and supplies the occipital cortex and medial and inferior regions of the temporal lobe. Significant neuropathological effects occur from disruption of either arterial flow or venous return of deoxygenated blood and their byproducts (Rodriguez-Baeza et al., 2003). However, the most frequent vascular source of neuropsychological deficits is associated with the arterial side of blood flow which is why only the arterial system is highlighted in Figure 3.6. The site of disease or damage to arterial circulation determines the area of the brain cut off from its oxygen and nutrient supply and, to a large extent, the neuropathologic consequences of vascular disease (Lim and Alexander, 2009; see pp. 229–239 for pathologies arising from cerebrovascular disorders).
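The arterial territories just described lend themselves to a simple lookup table. The sketch below is an illustrative summary of this paragraph only, not a clinical reference; the dictionary structure and region labels are my own simplification.

```python
# Simplified map of the major cerebral arterial territories described
# above (real territories vary by individual and overlap at borders).
SUPPLY = {
    "anterior cerebral": ["anterior medial frontal lobe", "medial parietal lobe"],
    "middle cerebral": ["lateral temporal lobe", "lateral parietal lobe",
                        "posterior frontal lobe", "subcortical regions"],
    "posterior cerebral": ["occipital cortex", "medial temporal lobe",
                           "inferior temporal lobe"],
    "vertebral/basilar": ["brain stem", "cerebellum"],
}

def arteries_supplying(region: str):
    """Return the arteries whose (simplified) territory includes a region."""
    return [a for a, regions in SUPPLY.items() if region in regions]

print(arteries_supplying("occipital cortex"))  # posterior cerebral territory
```

Such a table makes the clinical logic of the paragraph explicit: the site of arterial disruption predicts which regions lose their oxygen and nutrient supply.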
The Hindbrain

The medulla oblongata
The lowest part of the brain stem is the hindbrain, and its lowest section is the medulla oblongata or bulb (see Fig. 3.2a). The corticospinal tract, which runs down it, crosses the midline here so that each cerebral hemisphere has motor control over the opposite side of the body. The hindbrain is the site of basic life-maintaining centers for neural control of respiration, blood pressure, and heartbeat. Significant injury or pathology to the medulla generally results in death or such profound disability that fine-grained behavioral assessments are irrelevant (Nicholls and Paton, 2009). The medulla contains nuclei (clusters of functionally related nerve cells) involved in movements of mouth and throat structures necessary for swallowing, speech, and such related activities as gagging and control of drooling. Damage to lateral medullary structures can result in sensory deficits (J.S. Kim, Lee, and Lee, 1997).
The reticular formation
Running through the brain stem and extending upward to forebrain structures (the diencephalon, see p. 53) is the reticular formation, a network of intertwined and interconnecting nerve cell bodies and fibers that enter into or connect with all major neural tracts going to and from the brain. The reticular formation is not a single functional unit but contains many nuclei. These nuclei mediate important and complex postural reflexes, contribute to the smoothness of muscle activity, and maintain muscle tone. From about the level of the lower third of the pons (see below) up to and including diencephalic structures, the reticular formation is also the site of the reticular activating system (RAS), the part of this network that controls wakefulness and alerting mechanisms that ready the individual to react (S. Green, 1987; Mirsky and Duncan, 2005). The RAS modulates attention through its arousal of the cerebral cortex and its connections with the diffuse thalamic projection system (E.G. Jones, 2009; Mirsky and Duncan, 2001; Parasuraman, Warm, and See, 1998). The intact functioning of this network is a precondition for conscious behavior since it arouses the sleeping or inattentive organism (G. Roth, 2000; Tononi and Koch, 2008). Brain stem lesions involving the RAS give rise to sleep disturbances and to global disorders of consciousness and responsivity such as drowsiness, somnolence, stupor, or coma (A.R. Damasio, 2002; M.I. Posner et al., 2007).
FIGURE 3.6 Major blood vessels schematic.

The pons
The pons is high in the hindbrain (Fig. 3.2a). It contains major pathways for fibers running between the cerebral cortex and the cerebellum. Together, the pons and cerebellum correlate postural and kinesthetic (muscle movement sense) information, refining and regulating motor impulses relayed from the cerebrum at the top of the brain stem. Lesions of the pons may cause motor, sensory, and coordination disorders including disruption of ocular movements and alterations in consciousness (Felicio, Bichuetti, et al., 2009).

The cerebellum
The cerebellum is attached to the brain stem at the posterior base of the brain (Fig. 3.2). In addition to reciprocal connections with vestibular (system involved in balance and posture) and brain stem nuclei, the hypothalamus (p. 52), and the spinal cord, it has strong connections with the motor cortex (p. 58). It contributes to motor functions through influences on the programming and execution of actions and background motor control. Cerebellar damage is commonly known to produce problems of fine motor control, coordination, and postural regulation, all of which require rapid and complex integration between the cerebellum and other brain regions (G. Koch et al., 2009). Dizziness (vertigo) and jerky eye movements may also accompany cerebellar damage. The cerebellum has many nonmotor functions involving all aspects of behavior (Glickstein and Doron, 2008; Habas, 2009; Schmahmann, Weilburg, and Sherman, 2007; Strick et al., 2009). Highly organized neural pathways project through the pons to the cerebellum from both lower and higher areas of the brain (Koziol and Budding, 2009; Llinas and Walton, 1998; Schmahmann and Sherman, 1998). Cerebellar projections also run through the thalamus to the same cortical areas from which it receives input, including frontal, parietal, and superior temporal cortices (Botez-Marquard and Lalonde, 2005; Middleton and Strick, 2000a; Schmahmann and Sherman, 1998; Zacks, 2008). Because of the cerebellum’s connections with these cortical areas and with subcortical sites, cerebellar lesions can disrupt abstract reasoning, verbal fluency, visuospatial abilities, attention, memory, and emotional modulation (Botez-Marquard and Lalonde, 2005; Middleton and Strick, 2000a; Schmahmann, 2010), along with planning and time judgment (Dow, 1988; Ivry and Fiez, 2000).
The cerebellum is also involved in linguistic processing (Leiner et al., 1989), word generation (Raichle, 2000), set shifting (Le et al., 1998), working memory and other types of memory and learning (Desmond et al., 1997; Manto, 2008)—especially habit formation (Eichenbaum and Cohen, 2001; Leiner et al., 1986; R.F. Thompson, 1988). Moreover, speed of information processing slows with cerebellar lesions (Spanos et al., 2007). Some disruptions may be transient (Botez-Marquard, Leveille, and Botez, 1994; Schmahmann and Sherman, 1998). Personality changes and psychiatric disorders have also been linked to cerebellar dysfunction (Barlow, 2002; Gowen and Miall, 2007; Konarski et al., 2005; Parvizi, Anderson, et al., 2001).
The Midbrain
The midbrain (mesencephalon), a small area just forward of the hindbrain, includes the major portion of the RAS. Its functioning may be a prerequisite for conscious experience (Parvizi and Damasio, 2001). It also contains both sensory and motor pathways and correlation centers (see Fig. 3.2). Auditory and visual system processing that takes place in midbrain nuclei (superior colliculi for vision and inferior colliculi for audition) contribute to the integration of reflex and automatic responses. The substantia nigra, a dopamine-rich area of the brain that projects to the basal ganglia, is located at the level of the midbrain (for importance of the neurotransmitter dopamine, see p. 271). Midbrain lesions within the cerebral peduncle can produce paralysis and may also be related to specific movement disabilities such as certain types of tremor, rigidity, and extraneous movements of local muscle groups. Even impaired memory retrieval has been associated with damage to midbrain pathways projecting to structures in the memory system (E. Goldberg, Antin, et al., 1981; Hommel and Besson, 2001). Acquired lesions in strategic motor areas at the level of the midbrain typically have devastating effects on motor and sensory function with poor functional outcome (Bigler, Ryser, et al., 2006).
The Forebrain: Diencephalic Structures

Two subdivisions of the brain evolved at the anterior, or most forward, part of the brain stem. The diencephalon (“between-brain”) is composed mainly of the thalamus, the site of correlation and relay centers that connect throughout the brain; and the hypothalamus, which connects with the pituitary body (the controlling endocrine gland). These structures are almost completely embedded within the two halves of the forebrain, the telencephalon (see Fig. 3.2).

The thalamus
The thalamus is a small, paired, somewhat oval structure lying along the right and left sides of the third ventricle (see Figs. 3.2, 3.7–3.9). Many symmetric nuclei are located in each half of the thalamus and project intrathalamically or to regions throughout the brain. The two halves are matched approximately in size, shape, and position to corresponding nuclei in the other half. Most of the anatomic interconnections formed by these nuclei and many of their functional contributions involve widespread projections to the cerebral cortex. Figure 3.7 shows the extensive reciprocal connections of thalamic nuclei with the cerebral
cortex (see Johansen-Berg and Rushworth, 2009; S.M. Sherman and Koch, 1998). These thalamic projections are topographically organized (see Fig. 3.7B). The thalamus is enmeshed in a complex of fine circuitry, feedback loops, and many functional systems with continuous interplay between its neurophysiological processes, its neurotransmitters, and its structures. Moreover, as shown in Figure 3.7 (Plate V) C and D, thalamic projections feed into all areas of the cortex such that small thalamic lesions, or even small lesions in the thalamic tracts just outside the thalamus, may have widespread disruptive effects on cerebral function. Sensory nuclei in the thalamus serve as major relay and processing centers for all senses except smell and project to primary sensory cortices (see pp. 57–59). The thalamus may also play a role in olfaction, though one quite different from its relay functions for touch, vision, and hearing (Tham et al., 2009). Body sensations in particular may be degraded or lost with damage to specific thalamic nuclei (L.R. Caplan, 1980; Graff-Radford, Damasio, et al., 1985); an inability to make tactile discriminations and to identify what is felt (tactile object agnosia) can occur as an associated impairment (Bauer, 2011; Caselli, 1991). Although pain sensation typically remains intact or is only mildly diminished, with some kinds of thalamic damage it may be heightened to an excruciating degree (A. Barth et al., 2001; Brodal, 1981; Clifford, 1990). Other thalamic nuclei are relay pathways for vision, hearing, and taste (J.S. Kim, 2001). Still other areas are relay nuclei for limbic system structures (see below and p. 54). Motor nuclei receive input from the cerebellum and the basal ganglia, project to the motor association cortex, and also receive somatosensory feedback.
As the termination site for the ascending RAS, it is not surprising that the thalamus has important arousal and sleep-producing functions (Llinas and Steriade, 2006) and that it alerts—activates and intensifies—specific processing and response systems via the diffuse thalamic projection system (Crosson, 1992; LaBerge, 2000; Mesulam, 2000b). Thalamic involvement in attention shows up in diminished awareness of stimuli impinging on the side opposite the lesion (unilateral inattention) (Heilman, Watson, and Valenstein, 2011; G.A. Ojemann, 1984; M.I. Posner, 1988). The thalamus plays a significant role in regulating higher level brain activity (Tononi and Koch, 2008). The dorsomedial nucleus is of particular interest because of its established role in memory and its extensive reciprocal connections with the prefrontal cortex (see Fig. 3.8) (Graff-Radford, 2003; Hampstead and Koffler, 2009; Mesulam, 2000b). It also receives input from the temporal cortex, amygdala (see pp. 86–87), hypothalamus, and other
thalamic nuclei (Afifi and Bergman, 1998). That the dorsomedial nuclei of the thalamus participate in memory functions has been known ever since lesions here were associated with the memory deficit of Korsakoff’s psychosis (von Cramon et al., 1985; Victor, Adams, and Collins, 1971; see pp. 310–314). In most if not all cases of memory impairment associated with the thalamus, lesions have extended to the mammillothalamic tract (Graff-Radford, 2003; Markowitsch, 2000; Verfaellie and Cermak, 1997). As viewed in Figure 3.8, this tract connects the mammillary bodies (small structures at the posterior part of the hypothalamus involved in information correlation and transmission [A. Brodal, 1981; Crosson, 1992]) to the thalamus, which sends projections to the prefrontal cortex and medial temporal lobe (Fuster, 1994; Markowitsch, 2000).
FIGURE 3.7 Thalamo-cortical topography demonstrated by DTI tractography. (a) On conventional MRI it is not possible to visualize the intrinsic structure of the thalamus, yet histology, shown in (b), demonstrates that the thalamus consists of cytoarchitectonically distinct nuclei. Cortical target regions are identified in (c); thalamic voxels, classified according to the cortical region with which they had the highest probability of connection, are shown in (d). Compare (b) and (d) for specific thalamic nuclei. From Johansen-Berg and Rushworth (2009), used with permission from Annual Reviews.
FIGURE 3.8 Memory and the limbic system. From Budson and Price, 2005. Reprinted courtesy of New England Journal of Medicine.
Two kinds of memory impairments tend to accompany thalamic lesions: (1) Learning is compromised (anterograde amnesia), possibly by defective encoding which makes retrieval difficult if not impossible (N. Butters, 1984a; Mayes, 1988; Ojemann, Hoyenga, and Ward, 1971); possibly by a diminished ability of learning processes to free up readily for succeeding exposures to new information (defective release from proactive inhibition) (N. Butters and Stuss, 1989; Parkin, 1984). A rapid loss of newly acquired information may also occur (Stuss, Guberman, et al., 1988), although usually when patients with thalamic memory impairment do learn they forget no faster than intact persons (Parkin, 1984). (2) Recall of past information is defective (retrograde amnesia), typically in a temporal gradient such that recall of the most recent (premorbid) events and new information is most impaired, and older memories are increasingly better retrieved (N. Butters and Albert, 1982; Kopelman, 2002). Montaldi and Parkin (1989) suggested that these two kinds of memory impairment are different aspects of a breakdown in the use of context (encoding), as retrieval depends on establishing and maintaining “contextual relations among existing memories.” Errors made by an unlettered file clerk would provide an analogy for these learning and retrieval deficits: Items filed randomly remain in the file cabinet but cannot be retrieved by directed search, yet they may pop up from time to time, unconnected to any intent to find them (see also Hodges, 1995).
Amnesic patients with bilateral diencephalic lesions, such as Korsakoff patients, tend to show disturbances in time sense and in the ability to make temporal discriminations; this may play a role in their prominent retrieval deficits (Graff-Radford, Tranel, et al., 1990; Squire, Haist, and Shimamura, 1989). Characteristically, memory impaired patients with thalamic or other diencephalic lesions lack appreciation of their deficits, in this differing from many other memory impaired persons (Mesulam, 2000b; Parkin, 1984; Schacter, 1991). In a review of 61 cases of adults with thalamic lesions, mostly resulting from stroke, half had problems with concept formation, flexibility of thinking, or executive functions (Y.D. Van der Werf, Witter, et al., 2000). In advanced neuroimaging studies, Korsakoff patients demonstrated structural changes in the hippocampus, cerebellum, and pons in addition to the bilateral diencephalic lesions characteristic of the disorder (E.V. Sullivan and Pfefferbaum, 2009). Discrete thalamic lesions may produce very specific memory deficits depending on which thalamic nuclei are affected (Y.D. Van der Werf, Jolles, et al., 2003). Differences in how the two halves of the brain process data, so pronounced at the highest cortical level, first appear in thalamic processing of sensory information (A. Barth, Bogousslavsky, and Caplan, 2001; J.W. Brown, 1975; J.A. Harris et al., 1996; D.M. Hermann et al., 2008). The lateral asymmetry of thalamic organization parallels cortical organization in that left thalamic structures are more implicated in verbal activity, and right thalamic structures in nonverbal aspects of cognitive performance. For example, patients who have left thalamic lesions or who are undergoing left thalamic electrostimulation have not lost the capacity for verbal communication but may experience dysnomia (defective verbal retrieval) and other language disruption (Crosson, 1992; Graff-Radford, Damasio, et al., 1985; M.D. 
Johnson and Ojemann, 2000). This disorder is not considered to be a true aphasia but rather has been described as a “withering” of language functioning that sometimes leads to mutism. Language deficits do not appear with very small thalamic lesions, suggesting that observable language deficits at the thalamic level require destruction of more than one pathway or nucleus, as would happen with larger lesions (Wallesch, Kornhuber, et al., 1983). With larger thalamic lesions prominent language disturbances can occur (Carrera and Bogousslavsky, 2006; De Witte et al., 2008; Perren et al., 2005). Apathy, confusion, and disorientation often characterize this behavior pattern (J.W. Brown, 1974; see also D. Caplan, 1987; Mazaux and Orgogozo, 1982). Patients with left thalamic lesions may achieve lower scores on verbal tests than patients whose thalamic damage is limited to the right side (Graff-Radford
et al., 1985; Vilkki, 1979). Attentional deficits may also occur with thalamic lesions, particularly posterior ones (J.C. Snow, Allen, et al., 2009). Neuroimaging studies have shown that right thalamic regions are involved in identifying shapes or locations (LaBerge, 2000). Patients who have right thalamic lesions or who undergo electrostimulation of the right thalamus can have difficulty with face or pattern recognition and pattern matching (Fedio and Van Buren, 1975; Vilkki and Laitinen, 1976), maze tracing (M.J. Meier and Story, 1967), and design reconstruction (Graff-Radford, Damasio, et al., 1985). Heilman, Valenstein, and Watson (2000) provided graphic evidence of patients with right thalamic lesions who displayed left-sided inattention characteristic of patients with right-sided—particularly right posterior—cortical lesions (the “visuospatial inattention syndrome”; see pp. 427–429). This phenomenon may also accompany left thalamic lesions, although unilateral inattention occurs more often with right-sided damage (Formaglio et al., 2009; Velasco et al., 1986; Vilkki, 1984). Although some studies have suggested that unilateral thalamic lesions lead to modality-specific memory deficits (Graff-Radford, Damasio, et al., 1985; M.D. Johnson and Ojemann, 2000; Stuss, Guberman, et al., 1988), conflicting data leave this question unresolved (N. Kapur, 1988b; Rousseaux et al., 1986). Alterations in emotional capacity and responsivity tend to accompany thalamic damage, typically manifesting as apathy, loss of spontaneity and drive, and affective flattening, emotional characteristics that are integral to the Korsakoff syndrome (M. O’Connor, Verfaillie, and Cermak, 1995; Schott et al., 1980; Stuss, Guberman, et al., 1988). Yet disinhibited behavior and emotions occasionally appear with bilateral thalamic lesions (Graff-Radford, Tranel, et al., 1990).
Transient manic episodes may follow right thalamic infarctions, with few such reactions—or strong emotional responses—seen when the lesion is on the left (Cummings and Mega, 2003; Starkstein, Robinson, et al., 1988). These emotional and personality changes in diencephalic amnesia patients reflect how intimately interlocked are the emotional and memory components of the limbic system (see pp. 311–313). Other limbic system structures with close connections to the thalamus have been specifically implicated in impaired recording and consolidation processes of memory. These are the mammillary bodies and the fornix (a central forebrain structure that links the hippocampal and the mammillothalamic areas of the limbic system, see Fig. 3.8) (N. Butters and Stuss, 1989; Markowitsch, 2000; Tanaka et al., 1997). Massive anterograde amnesia and some retrograde amnesia can result from diffuse lesions involving the mammillary bodies and the thalamus (Graff-Radford, Tranel, et
al., 1990; Kopelman, 2002; Squire, Haist, and Shimamura, 1989). Recording of ongoing events may be impaired by lesions of the fornix (Grafman, Salazar, et al., 1985; R.J. Ojemann, 1966; D.F. Tate and Bigler, 2000).

The hypothalamus
The hypothalamus is located beneath the thalamus in the ventral wall of the third ventricle. Although it takes up less than one-half of one percent of the brain’s total weight, the hypothalamus regulates such important physiologically based drives as appetite, sexual arousal, and thirst (E.T. Rolls, 1999; C.B. Saper, 1990). It receives inputs from many brain regions and coordinates autonomic and endocrine functions. It is one of the centers involved in regulating homeostasis and stress reactions for the rest of the body (A. Levine, Zagoory-Sharon, et al., 2007). It may also participate in the neural processing of cognitive and social cues (Averbeck, 2010). Behavior patterns having to do with physical protection, such as rage and fear reactions, are also regulated by hypothalamic centers. Depending on the site of the damage, lesions to hypothalamic nuclei can result in a variety of symptoms, including obesity, disorders of temperature control, fatigue, and diminished drive states and responsivity (F.G. Flynn et al., 1988). Mood states may also be affected by hypothalamic lesions (Cowles et al., 2008; Wolkowitz and Reus, 2001). Damage to the mammillary bodies located adjacent to the posterior extension of the hypothalamus disrupts memory processing (Bigler, Nelson, et al., 1989; E.V. Sullivan, Lane, et al., 1999; Tanaka et al., 1997).
The Forebrain: The Cerebrum

Structures within the cerebral hemispheres—the basal ganglia and the limbic areas of the cingulate cortex, amygdala, and hippocampus—are of especial neuropsychological importance. Some of these structures have rather irregular shapes. To help visualize their location and position within the brain, see Figure 3.9, derived from the 3-D MRI used in Figure 3.2. It is often helpful to visualize the position of these brain structures in reference to the ventricular system, which is also shown.

The basal ganglia
The cerebrum, the most recently evolved, most elaborated, and by far the largest brain structure, has two hemispheres which are almost but not quite identical mirror images of each other (see Figs. A1.x, x). Within each cerebral
hemisphere are situated a cluster of subcortical nuclear masses known as the basal ganglia (“ganglion” is another term for “nucleus”; see Figs. 3.2 and 3.9). These include the caudate, putamen, and globus pallidus. Some authorities also consider the amygdala, subthalamic nucleus, substantia nigra, and other subcortical structures to be part of the basal ganglia (e.g., Koziol and Budding, 2009). The cerebral cortex sends direct connections to the caudate and putamen, while the globus pallidus and substantia nigra project back to the cerebral cortex through the thalamus. Gray matter bands, called striations, connect the caudate and putamen with the amygdala. These striations together with the caudate and putamen are referred to as the striatum or the neostriatum, “neo-” referring to the more recently evolved aspects of the caudate and putamen. The neostriatum is part of the system which translates cognition into action (Brunia and Van Boxtel, 2000; Divac, 1977; Grahn et al., 2009).
FIGURE 3.9 Cut-away showing brain anatomy viewed from a left frontal perspective with the left frontal and parietal lobes removed. (A) Cingulate Gyrus, (B) Atrium of the Lateral Ventricle, (C) Posterior Horn of the Lateral Ventricle, (D) IV Ventricle, (E) Temporal Horn of the Lateral Ventricle, (F) Preoptic Recess of the III Ventricle, (G) Anterior Horn of the Lateral Ventricle, (H) Massa Intermedia; I–M Corpus Callosum: (I) Body, (J) Isthmus, (K) Splenium, (L) Rostrum, and (M) Genu. Color code: aquamarine: Ventricular System; gray: Thalamus; blue: Globus Pallidus; purple: Putamen; yellow: Hippocampus; red: Amygdala.
In addition to important connections to the motor cortex, the basal ganglia have many reciprocal connections with other cortical areas, including subdivisions of the frontal lobes (Middleton and Strick, 2000a, b; E.T. Rolls, 1999). Somatotopic representation of specific body parts (e.g., hand, foot, face) within basal ganglia structures overlap, are similar for different individuals, and are unlike the pattern of cortical body part representation (Maillard et al.,
2000; see Fig. 3.14). The basal ganglia influence all aspects of motor control. They are not motor nuclei in a strict sense, as damage to them gives rise to various motor disturbances but does not result in paralysis. What these nuclei contribute to the motor system, cognition, and behavior is less well understood (Haaland and Harrington, 1990; J.M. Hamilton et al., 2003; Thach and Montgomery, 1990). Movement disorders (particularly chorea, tremor, and/or dystonias) may be the most common and obvious symptoms of basal ganglia damage (Crosson, Moore, et al., 2003; Tröster, 2010). In general, diseases of the basal ganglia are characterized by abnormal involuntary movements at rest. Much of the understanding of how the basal ganglia engage movement and other aspects of behavior has been obtained by studying patients with Parkinson’s disease and Huntington’s disease (see pp. 271–286). Difficulties in starting activities and in altering the course of ongoing activities characterize both motor and mental aspects of Parkinson’s disease (R.G. Brown, 2003; Doyon, Bellec, et al., 2009). Huntington patients also appear to have trouble initiating cognitive processes (Brandt, Inscore, et al., 2008) along with impaired movements (De Diego-Balaguer et al., 2008; Richer and Chouinard, 2003). In both conditions, many cognitive abilities may be impaired and emotional disturbances are common. These nuclei also play an important role in the acquisition of habits and skills (Blazquez et al., 2002; Jog et al., 1999). The neostriatum appears to be a key component of the procedural memory system (Budson and Price, 2005; Doyon et al., 2009), perhaps serving as a procedural memory buffer for established skills and response patterns and participating in the development of new response strategies (skills) for novel situations (Saint-Cyr and Taylor, 1992).
Damage to the basal ganglia reduces cognitive flexibility—the ability to generate and shift ideas and responses (Lawrence, Sahakian, et al., 1999; Mendez, Adams, and Lewandowski, 1989). Hemispheric lateralization becomes apparent with unilateral lesions, both in motor disturbances affecting the side of the body contralateral to the lesioned nuclei and in the nature of the concomitant cognitive disorders (L.R. Caplan, Schmahmann, et al., 1990). Several different types of aphasic and related communication disorders have been described in association with left-sided lesions (Crescentini et al., 2008; Cummings and Mega, 2003; De Diego-Balaguer et al., 2008). Symptoms tend to vary with the lesion site in a fairly regular manner (Alexander, Naeser, and Palumbo, 1987; A. Basso, Della Sala, and Farabola, 1987; A.R. Damasio, H. Damasio, and Rizzo, 1982; Tanridag and Kirshner, 1985), paralleling the cortical aphasia pattern of reduced output with anterior lesions, reduced comprehension with posterior ones (Crosson,
1992; Naeser, Alexander, et al., 1982). In some patients, lesions in the left basal ganglia alone or in conjunction with left cortical lesions have been associated with defective knowledge of the colors of familiar objects (Varney and Risse, 1993). Left unilateral inattention accompanies some right-sided basal ganglia lesions (L.R. Caplan, Schmahmann, et al., 1990; Ferro, Kertesz, and Black, 1987). Alterations in basal ganglia circuits involved with nonmotor areas of the cortex have been implicated in a wide variety of neuropsychiatric disorders including schizophrenia, obsessive-compulsive disorder, depression, Tourette’s syndrome, autism, and attention deficit disorders (Chudasama and Robbins, 2006; Koziol and Budding, 2009; Middleton and Strick, 2000b). Emotional flattening with loss of drive resulting in more or less severe states of inertia can occur with bilateral basal ganglia damage (Bhatia and Marsden, 1994; Strub, 1989). These anergic (unenergized, apathetic) conditions resemble those associated with some kinds of frontal damage, illuminating the interrelationships between the basal ganglia and the frontal lobes. Mood alterations may trouble new stroke patients with lateralized basal ganglia lesions, with depression more common in patients who have left-sided damage than in those with right-sided involvement (Starkstein, Robinson, et al., 1988). The nucleus basalis of Meynert is a small basal forebrain structure lying partly within and partly adjacent to the basal ganglia (N. Butters, 1985; H. Damasio and Damasio, 1989). It is an important source of the cholinergic neurotransmitters implicated in learning. Loss of neurons here occurs in degenerative dementing disorders in which memory impairment is a prominent feature (Hanyu et al., 2002; Teipel et al., 2005; N.M. Warren et al., 2005) and may also occur in traumatic brain injury (Arciniegas, 2003).
The Limbic System

The limbic system includes the amygdala and two phylogenetically old regions of cortex: the cingulate gyrus and the hippocampus (pp. 54, 83–87, 94; Figs. 3.8 and 3.9, pp. 51, 53). Connecting pathways, most prominently the fornix, link the hippocampus with the mammillary bodies, the mammillary bodies with the thalamus, and back to the cerebral cortex via connections through the cingulate gyrus, as shown in Figure 3.8 (P. Andersen et al., 2007; Markowitsch, 2000; Papez, 1937). These connections form a loop, often referred to as the limbic loop. Its components are embedded in structures as far apart as the RAS in the brain stem and olfactory nuclei underlying the forebrain. These
structures play important roles in emotion, motivation, and memory (Markowitsch, 2000; Mesulam, 2000b; D.M. Tucker et al., 2000). The intimate connection between memory and emotions is illustrated by Korsakoff patients with severe learning impairments who retain emotionally laden words better than neutral ones (J. Kessler et al., 1987; Pincus and Tucker, 2003; Wieser, 1986). Disturbances in emotional behavior also occur in association with seizure activity involving these structures (see p. 246).

The cingulate cortex
The cingulate gyrus is located in the medial aspects of the hemispheres above the corpus callosum (Figs. 3.2, 3.8, and 3.9). Within it lie the extensive white matter tracts that make up the cingulum, also referred to as the cingulum bundle (see Fig. 3.10). It has important influences on attention, response selection, processing of pain, and emotional behavior (Brunia and Van Boxtel, 2000; J.S. Feinstein et al., 2009; E.T. Rolls, 1999). Anterior and posterior portions differ in their projections and roles (p. 246).

Intracerebral conduction pathways

The mind depends as much on white matter as on its gray counterpart.
Christopher M. Filley, 2001
Much of the bulk of the cerebral hemispheres is white matter, consisting of densely packed axons. These are conduction fibers that transmit neural impulses between cortical points within a hemisphere (association fibers), between the hemispheres (commissural fibers), or between the cerebral cortex and lower centers (projection fibers). The major tracts of the brain can be readily identified with diffusion tensor imaging (DTI) (see Fig. 3.10). Lesions in cerebral white matter sever connections between lower and higher centers or between cortical areas within a hemisphere or between hemispheres (disconnection syndromes, see pp. 348–349). White matter lesions are common features of many neurological and neuropsychiatric disorders and are often associated with slowed processing speed and attentional impairments (Libon, Price, et al., 2004; Schmahmann, Smith, et al., 2008).
FIGURE 3.10 DTI (diffusion tensor imaging) of major tracts as shown from a dorsal view (left), frontal (middle), and right hemisphere (right) perspective. The colors reflect standardized fiber tract orientation: green indicates tracts in the anterior–posterior or front-to-back direction, warm colors (orange to red) indicate a lateral or side-to-side direction, and blue indicates a vertical direction.
The corpus callosum is the large band of commissural fibers connecting the two hemispheres (see Figs. 3.11 and 3.12). It can be readily imaged: DTI makes visible the aggregate tracts of the corpus callosum and where they project. Other interhemispheric connections are provided by some smaller bands of fibers, including the anterior and posterior commissures. Interhemispheric communication by the corpus callosum and other commissural fibers maintains integration of cerebral activity between the two hemispheres (Bloom and Hynd, 2005; Zaidel, Iacoboni, et al., 2011). It is organized with great regularity (J.M. Clarke et al., 1998). Studies of whether/how differences in overall size of the corpus callosum might relate to cognitive abilities have produced inconsistent findings (Bishop and Wahlsten, 1997; H.L. Burke and Yeo, 1994; Davatzikos and Resnick, 1998). Some studies have reported that the corpus callosum tends to be larger in non-right-handers (Cowell et al., 1993; Habib, Gayraud, et al., 1991; Witelson, 1989). Surgical section of the corpus callosum cuts off direct interhemispheric communication (Baynes and Gazzaniga, 2000; Bogen, 1985; Seymour et al., 1994), which can be a successful treatment of otherwise intractable generalized epilepsy (Rahimi et al., 2007). When examination techniques restrict stimulus input to one hemisphere (see E. Zaidel, Zaidel, and Bogen, 1990), patients who have undergone section of commissural fibers (commissurotomy) exhibit distinct behavioral discontinuities between perception, comprehension, and response, which reflect significant functional differences between the hemispheres (see also p. xx). Probably because direct communication between two cortical points occurs far less frequently than indirect communication
relayed through lower brain centers, especially through the thalamus and the basal ganglia, these patients generally manage to perform everyday activities quite well. These include tasks involving interhemispheric information transfer (J.J. Myers and Sperry, 1985; Sergent, 1990, 1991b; E. Zaidel, Clarke, and Suyenobu, 1990) and emotional and conceptual information not dependent on language or complex visuospatial processes (Cronin-Golomb, 1986). In noting that alertness remains unaffected by commissurotomy and that emotional tone is consistent between the hemispheres, Sperry (1990) suggested that both phenomena rely on bilateral projections through the intact brain stem.
FIGURE 3.11 DTI of major tracts through the corpus callosum. Five major fasciculi involving the temporal lobe are colorized simply to identify their position: these colors do not indicate fiber tract orientation as represented in diffusion tensor imaging (DTI) color maps. The following tracts are associated with these colors: Green: cingulum bundle (CB); Purple: arcuate fasciculus (AF); Turquoise-Blue: uncinate fasciculus (UF); Chartreuse: inferior fronto-occipital fasciculus (IFOF); Red: inferior longitudinal fasciculus (ILF). The IFOF is mostly hidden in this illustration, but an outline of its occipital-frontal projections can be visualized. Reproduced from Bigler, McCauley, Wu, et al. (2010), with permission from Springer Publishing.
FIGURE 3.12 (TOP) Representative commissural DTI “streamlines” showing cortical projections. Colors show the direction of projecting fibers: green reflects anterior–posterior orientation; warm colors (red-orange) reflect lateral or side-to-side projections; blue, a vertical orientation. (BOTTOM) Cortical terminations of corpus callosum projections are shown on “inflated” or “ballooned”-appearing brains, with the lateral surface shown in the middle view and projections to the medial surface shown in the bottom view. Note the high specificity and organization of projecting fibers across the corpus callosum. From Pannek et al. (2010), used with permission from Elsevier.
Some persons with agenesis of the corpus callosum (a rare congenital condition in which the corpus callosum is insufficiently developed or absent altogether) are identified only when some other condition brings them to a neurologist’s attention. Normally they display no neurological or neuropsychological defects (L.K. Paul et al., 2007; Zaidel, Iacoboni, Berman, et al., 2011) other than slowed motor performances, particularly of bimanual
tasks (Lassonde et al., 1991). However, persons with congenital agenesis of the corpus callosum also tend to be generally slowed on perceptual and language tasks involving interhemispheric communication, and some show specific linguistic and/or visuospatial deficits (Jeeves, 1990, 1994; see also Zaidel and Iacoboni, 2003). In some cases, problems with higher order cognitive processes such as concept formation, reasoning, and problem solving with limited social insight have been observed (W.S. Brown and Paul, 2000).

The cerebral cortex
The cortex of the cerebral hemispheres (see Fig. 3.3, p. 46), the convoluted outer layer of gray matter composed of nerve cell bodies and their synaptic connections, is the most highly organized correlation center of the brain, but the specificity of cortical structures in mediating behavior is neither clear-cut nor circumscribed (R.C. Collins, 1990; Frackowiak et al., 1997). Predictably established relationships between cortical areas and behavior reflect the systematic organization of the cortex and its interconnections (Fuster, 2008). Now modern visualizing techniques display what thoughtful clinicians had suspected: multiple cortical and subcortical areas are involved in complex interrelationships in the mediation of even the simplest behaviors (Fuster, 1995; Mesulam, 2009; Seeley et al., 2009), and specific brain regions are typically multifunctional (Lloyd, 2000). While motor, sensory, and certain receptive and expressive language functions have relatively well-defined regions that subserve them, the boundaries of other functionally definable cortical areas, or zones, are vague. Cells subserving a specific function are highly concentrated in the primary area of a zone, thin out, and overlap with other zones as the perimeter of the zone is approached (E. Goldberg, 1989, 1995; Polyakov, 1966). Cortical activity at every level, from the cellular to the integrated system, is maintained and modulated by complex feedback loops that in themselves constitute major subsystems, some within the cortex and others involving subcortical centers and pathways. “Processing patterns take many forms, including parallel, convergent [integrative], divergent [spreading excitation], nonlinear, recursive [feeding back onto itself] and iterative” (H. Damasio and Damasio, 1989, p. 71). Even those functions that are subserved by cells located within relatively well-defined cortical areas have a significant number of components distributed outside the local cortical center (A.
Brodal, 1981; Paulesu et al., 1997). Much of what neuropsychological assessment techniques evaluate is the functioning of the cerebral cortex and its final control over behavior.
THE CEREBRAL CORTEX AND BEHAVIOR

Cortical involvement appears to be a prerequisite for awareness of experience (Changeux, 2004; Fuster, 2003). Patterns of functional localization in the cerebral cortex are organized broadly along two spatial planes. The lateral plane refers to the left and right sides of the brain and thus cuts through homologous (in the corresponding position) areas of the left and right hemispheres, with the point of demarcation being the longitudinal fissure. The longitudinal plane runs from the front to the back of the cortex, with the demarcation point being the central sulcus (fissure of Rolando), roughly separating functions that are primarily localized in the anterior (or rostral) portion of the cortex from those that are primarily localized in the posterior (or caudal) portion. Both of these axes—lateral and longitudinal—should be understood as constructs helpful for conceptualizing brain–behavior relations, and not as rigid rules that dictate functional organization.
Lateral Organization

Lateral symmetry
At a gross macroscopic level, the two cerebral hemispheres are roughly symmetrical. For example, the primary sensory and motor centers are homologously positioned within the cerebral cortex of each hemisphere in a mirror-image relationship. Many afferent and efferent systems are crossed, so that the centers in each cerebral hemisphere predominantly mediate the activities of the contralateral (other side) half of the body (see Fig. 3.13). Thus, an injury to the primary somatosensory (sensations on the body) cortex of the right hemisphere results in decreased or absent sensation in the corresponding left-sided body part(s); similarly, an injury affecting the left motor cortex results in a right-sided weakness or paralysis (hemiplegia).
FIGURE 3.13 Schematic diagram of visual fields, optic tracts, and the associated brain areas, showing left and right lateralization in humans. (From Sperry, 1984)
FIGURE 3.14 Diagram of a “motor homunculus” showing the approximate relative sizes of specific regions of the motor cortex representing various parts of the body, based on electrical stimulation of the exposed human cortex. From Penfield, W. and Rasmussen, T. (1950). The cerebral cortex of man. NY: Macmillan. Used with permission of Cengage Group.
Point-to-point representation on the cortex. The organization of both the primary sensory and primary motor areas of the cortex provides for a point-to-point representation of the body. The amount of cortex associated with each body portion or organ is roughly proportional to the number of sensory or motor nerve endings in that part of the body, rather than to its size. For example, the areas concerned with sensation and movement of the tongue or fingers are much more extensive than the areas representing the elbow or back. This gives rise to the famous distorted-looking “homunculus,” the “little man” drawing which depicts the differential assignment of cortical areas to various body parts (Fig. 3.14). The visual system is also organized on a contralateral plan, but it is one-half of each visual field (the entire view encompassed by the eye) that is
projected onto the contralateral visual cortex (see Fig. 3.13). Fibers originating in the right half of each retina, which register stimuli in the left visual field, project to the right visual cortex; fibers from the left half of each retina convey the right visual field image to the left visual cortex. Thus, destruction of either eye leaves both halves of the visual field intact, although some aspects of depth perception will be impaired. Destruction of the right or the left primary visual cortex or of all the fibers leading to either side results in blindness for the opposite side of the visual field (homonymous hemianopia). Lesions involving a portion of the visual projection fibers or visual cortex can result in circumscribed field defects, such as areas of blindness (scotoma, pl. scotomata) within the visual field of one or both eyes, depending on whether the lesion involves the visual pathway before (one eye) or after (both eyes) its fibers cross on their route from the retina of the eye to the visual cortex. The precise point-to-point arrangement of projection fibers from the retina to the visual cortex permits especially accurate localization of lesions within the primary visual system (Sterling, 1998). Higher order visual processing is mediated by two primary systems, each with different pathways involving different parts of the cortex. A ventral or “what” system is specialized for pattern analysis and object recognition (“what” things are), and is differentiated from a dorsal or “where” system which is specialized for spatial analysis and movement perception (“where” things are) (Goodale, 2000; Mendoza and Foundas, 2008; Ungerleider and Mishkin, 1982). Some patients with brain injuries that do not impair basic visual acuity or recognition complain of blurred vision or degraded percepts, particularly with sustained activity, such as reading, or when exposure is very brief (Hankey, 2001; Kapoor and Ciuffreda, 2005; Zihl, 1989).
These problems reflect the complexity of an interactive network system in which the effects of lesions resonate throughout the network, slowing and distorting multiple aspects of cerebral processing to produce these visual disturbances. A majority of the nerve fibers transmitting auditory stimulation from each ear are projected to the primary auditory centers in the opposite hemisphere; the remaining fibers go to the ipsilateral (same side) auditory cortex. Thus, the contralateral, crossed pattern is preserved to a large degree in the auditory system too. However, because the projections are not entirely crossed, destruction of one of the primary auditory centers does not result in complete loss of hearing in the contralateral ear. A point-to-point relationship between sense receptors and cortical cells is also laid out on the primary auditory cortex, with cortical representation arranged according to pitch, from high to low tones (Ceranic and Luxon, 2002; Mendoza and Foundas, 2008).
Destruction of a primary cortical sensory or motor area results in specific sensory or motor deficits, but generally has little effect on the higher cognitive functions. For instance, an adult-onset lesion limited to the primary visual cortex produces loss of visual awareness (cortical blindness), while reasoning ability, emotional control, and even the ability for visual conceptualization may remain intact (Farah and Epstein, 2011; Guzeldere et al., 2000; Weiskrantz, 1986). Association areas of the cortex. Cortical representation of sensory or motor nerve endings in the body takes place on a direct point-to-point basis, but stimulation of the primary cortical area gives rise only to vague, somewhat meaningless sensations or nonfunctional movements (Brodal, 1981; Luria, 1966; Mesulam, 2000b). Complex functions involve the cortex adjacent to primary sensory and motor centers (E. Goldberg, 1989, 1990; Mendoza and Foundas, 2008; Paulesu et al., 1997). Neurons in these secondary cortical areas integrate and refine raw percepts or simple motor responses. Tertiary association or overlap zones are areas peripheral to functional centers where the neuronal components of two or more different functions or modalities are interspersed. The posterior association cortex, in which the most complex integration of perceptual functions takes place, has also been called the multimodal (Pandya and Yeterian, 1990), heteromodal (Mesulam, 2000b), or supramodal (Darby and Walsh, 2005) cortex. These processing areas are connected in a “stepwise” manner such that information-bearing stimuli reach the cortex first in the primary sensory centers.
They then pass through the cortical association areas in order of increasing complexity, interconnecting with other cortical and subcortical structures along the way to frontal and limbic system association areas, and finally become manifest in action, thought, and feeling (Arciniegas and Beresford, 2001; Mesulam, 2000b; Pandya and Yeterian, 1990, 1998). These projection systems have both forward and reciprocal connections at each step in the progression to the frontal lobes; and each sensory association area makes specific frontal lobe connections which, too, have their reciprocal connections back to the association areas of the posterior cortex (E.T. Rolls, 1998). “Anterior prefrontal cortex is bidirectionally interconnected with heteromodal association regions of the posterior cortex but not with modality-specific regions” (E. Goldberg, 2009, p. 59). Unlike damage to primary cortical areas, a lesion involving association areas and overlap zones typically does not result in specific sensory or motor defects. Rather, the behavioral effects of such damage will more likely appear as various higher order neuropsychological deficits; e.g., lesions of the
auditory association cortex do not interfere with hearing acuity but with the appreciation or recognition of patterned sounds (see p. 24). In like manner, lesions to visual association cortices may cause impaired recognition of objects, while sparing visual acuity (see p. 21).

Asymmetry between the hemispheres
A second kind of organization across the lateral plane differentiates the two hemispheres with respect to the localization of primary cognitive functions and to significant qualitative aspects of behavior processed by each of the hemispheres (Filley, 2008; E. Goldberg, 2009; Harel and Tranel, 2008). Although no two human brains are exactly alike in their structure, in most people the right frontal area is wider than the left and the right frontal pole protrudes beyond the left, while the reverse is true of the occipital pole: the left occipital pole is frequently wider and protrudes further posteriorly than the right, although the central portion of the right hemisphere is frequently wider than the left (A.R. Damasio and Geschwind, 1984; Janke and Steinmetz, 2003). Men show greater degrees of frontal and occipital asymmetry than women (D. Bear, Schiff, et al., 1986). These asymmetries begin in fetal brains (de Lacoste et al., 1991; Witelson, 1995). The left Sylvian fissure, the fold between the temporal and frontal lobes, is larger than the right in most people (Witelson, 1995), even in newborns (Seidenwurm et al., 1985). The posterior portion of the superior surface of the temporal lobe, the planum temporale, which is involved in auditory processing, is larger on the left side in most right-handers (Beaton, 1997; E. Strauss, LaPointe, et al., 1985). Differences in the neurotransmitters serving each hemisphere have also been associated with differences in hemisphere function (Berridge et al., 2003; Direnfeld et al., 1984; Glick et al., 1982) and sex (Arato et al., 1991). These differences may have an evolutionary foundation, for they have been found in primates and other animals (Corballis, 1991; Geschwind and Galaburda, 1985; Nottebohm, 1979). The lateralized size differential in primates is paralleled in some species by left lateralization for vocal communication (MacNeilage, 1987).
For example, studies have linked intrahemispheric interconnections with this area to gestural capacity (possibly with communication potential) in macaque monkeys (Petrides, 2006). Lateralized cerebral differences may also occur at the level of cellular organization (Galuske et al., 2000; Gazzaniga, 2000; Peled et al., 1998). A long-standing hypothesis holds that the left and right hemispheres have different degrees of specialization, with left greater than right. A half century ago, Hecaen and Angelergues (1963) speculated that neural organization might
be more closely knit and integrated on the left, more diffuse on the right. This idea is consistent with findings that patients with right hemisphere damage tend to have a reduced capacity for tactile discrimination and sensorimotor tasks in both hands, while those with left hemisphere damage experience impaired tactile discrimination only in the contralateral hand (Hom and Reitan, 1982; Semmes, 1968), although contradictory data have been reported (Benton, 1972). Other support comes from findings that visuospatial and constructional disabilities of patients with right hemisphere damage do not differ significantly regardless of the extensiveness of damage (Kertesz and Dobrowolski, 1981). Hammond (1982) reported that damage to the left hemisphere tends to reduce acuity of time discrimination more than right-sided damage, suggesting that the left hemisphere has a capacity for finer temporal resolution than the right. Also, the right hemisphere does not appear to be as discretely organized as the left for visuoperceptual and associated visual memory operations (Fried et al., 1982; Wasserstein, Zappula, Rosen, and Gerstman, 1984). Functional specialization of the hemispheres. Fundamental differences between the left and right hemispheres of the human brain constitute some of the bedrock principles of neuropsychology. The first—stemming from the seminal observations of Broca (1861) and Wernicke (1874)—has to do with language: in the vast majority of adults, the left side of the brain is specialized for language and for processing verbally coded information. This is true of most—usually estimated at upwards of 90%—right-handed individuals, who constitute roughly 90% of the adult population, and of the majority—usually estimated at around 70%—of left-handed persons (see pp. 365–366 for lateralization details).
This lateralizing principle applies regardless of input modality; for example, in most people verbal information apprehended through either the auditory (e.g., speech) or visual (e.g., written text) channel is processed preferentially by the left hemisphere (Abutalebi and Cappa, 2008; M.P. Alexander, 2003; Bartels and Wallesch, 2010). The principle also applies to both the input and output aspects of language, so not only does the left hemisphere play a major role in understanding language, it also produces language (spoken and written). The principle even goes beyond spoken languages to include languages based on visuogestural signals (e.g., American Sign Language) (Bellugi et al., 1989; Hickok et al., 1996).
The right hemisphere has a very different type of specialization (A.R. Damasio, Tranel, and Rizzo, 2000; Darby and Walsh, 2005). It processes nonverbal information such as complex visual patterns (e.g., faces) or auditory signals (e.g., music) that are not coded in verbal form. For example, structures in the right temporal and occipital regions are critical for learning and navigating geographical routes (Barrash, H. Damasio, et al., 2000). The right side of the brain is also the lead player in the cortical mapping of “feeling states,” that is, patterns of bodily sensations linked to emotions such as anger and fear (A.R. Damasio, 1994). Another, related right hemisphere capacity concerns perceptions of the body in space, in both intrapersonal and extrapersonal terms—for example, understanding of where limbs are in relationship to trunk, and where one’s body is in relationship to the
surrounding space. While not sufficient for basic language comprehension and production, the right hemisphere contributes to appreciation of the context of verbal information and, thereby, to accuracy of language processing and appropriateness of language usage (see p. 62). In early conceptualizations of left and right hemisphere differences, it was common to see references to the left hemisphere as “major” or “dominant,” while the right hemisphere was considered “minor” or “nondominant.” This thinking came from a focus on language aspects of human cognition and behavior. As a highly observable and unquestionably important capacity, language received the most scientific and clinical attention, and typically was considered the quintessential and most important human faculty. For many decades the right hemisphere was thought to contribute little to higher level cognitive functioning. Lesions to the right hemisphere typically did not produce immediately obvious language disturbances, and hence it was often concluded that a patient had lost little in the way of higher order function after right-sided brain injury. Later, it became clear that each hemisphere was dedicated to specific, albeit different, cognitive functions, and the notion of “dominance” gave way to the idea of “specialization”—that is, each hemisphere was specialized for certain cognitive functions (e.g., J. Levy, 1983). Many breakthroughs in the understanding of hemispheric specialization came from studies of so-called “split-brain” patients, work led by psychologist and Nobelist Roger Sperry (e.g., Sperry, 1968, 1982). To prevent partial seizures from spreading from one side of the brain to the other, an operation severed the corpus callosum in these patients. Thus, the left and right cerebral hemispheres were “split,” and no longer able to communicate with one another.
Careful investigations of these patients found that each side of the brain had its own unique style of “consciousness,” with the left and right sides operating in verbal and nonverbal modalities, respectively. Sperry’s work and that of many others (e.g., Arvanitakis and Graff-Radford, 2004; Gazzaniga, 1987, 2000; Glickstein and Berlucchi, 2008; Zaidel, Iacoboni, et al., 2011) led to several fundamental distinctions between the cognitive functions for which the left and right hemispheres are specialized (Table 3.1). The nature of hemisphere specialization also shows up in processing differences. The left hemisphere is organized for “linear” processing of sequentially presented stimuli such as verbal statements, mathematical propositions, and the programming of rapid motor sequences. The right hemisphere is superior for “configurational” processing required by information or experiences that cannot be described adequately in words or strings of symbols, such as the appearance of a face or three-dimensional
spatial relationships. Moreover, the two hemispheres process global/local or whole/detail information differently (L.C. Robertson and Rafal, 2000; Rossion et al., 2000). When asked to copy or read a large-scale stimulus such as the shape of a letter or other common symbol composed of many different symbols in small scale (see Fig. 3.15), patients with left hemisphere disease will tend to ignore the small bits and interpret the large-scale figure; those whose lesions are on the right are more likely to overlook the big symbol but respond to the small ones. This can be interpreted as indicating left hemisphere superiority in processing detailed information, and right hemisphere superiority for processing large-scale or global percepts.

TABLE 3.1 Functional dichotomies of left and right hemispheric dominance

Left       Right
Verbal     Nonverbal
Serial     Holistic
Analytic   Synthetic
Logical    Pictorial
Rational   Intuitive
Source. Adapted from Benton, 1991.
FIGURE 3.15 Example of global/local stimuli.
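As a rough illustration of the kind of hierarchical stimulus shown in Figure 3.15, a large “global” letter can be built out of many small “local” letters. The following Python sketch is only an assumption-laden toy (the 5 × 5 grid pattern, function name, and letter choices are hypothetical, not drawn from the text); it prints a global H composed of local S’s, the sort of figure to which left- and right-lesioned patients respond differently.

```python
# Hypothetical sketch of a Navon-style global/local stimulus: a large
# "global" letter rendered from many small "local" letters.
# The 5x5 grid pattern for the letter H is an illustrative assumption.

H_PATTERN = [
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
]

def navon_stimulus(pattern, local_letter):
    """Render a global letter shape using copies of a small local letter."""
    rows = []
    for row in pattern:
        # Each "X" cell becomes a copy of the local letter; "." becomes blank.
        rows.append("".join(local_letter if cell == "X" else " " for cell in row))
    return "\n".join(rows)

print(navon_stimulus(H_PATTERN, "S"))
```

A patient attending to the global level would report the H; one attending to the local level would report the S’s.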
In considering hemispheric specialization for verbal versus nonverbal material, it should be kept in mind that absence of words does not make a stimulus “nonverbal.” Pictorial, diagrammatic, or design stimuli— and sounds, sensations of touch and taste, etc.—may be more or less susceptible to verbal labeling depending on their meaningfulness, complexity, familiarity, potential for affective arousal, and other characteristics such as patterning or number. Thus, when classifying a wordless stimulus as verbal or nonverbal, it is important to take into account how readily it can be verbalized. The left-right dichotomies in hemispheric specialization should be taken as useful concepts and not iron-clad facts. Many variables come into play in determining which hemisphere will take the lead in processing various types of
information (e.g., Beaumont, 1997; Sergent, 1990). These include the nature of the task (e.g., modality, speed factors, complexity), the subject’s set of expectancies, prior experiences with the task, previously developed perceptual or response strategies, and inherent subject (attribute) variables such as sex and handedness (Kuhl, 2000; Papadatou-Pastou et al., 2008; Tranel, H. Damasio, et al., 2005). The degree to which hemispheric specialization occurs at any given time and under any given set of task demands is relative rather than absolute (Hellige, 1995; L.C. Robertson, 1995; Sergent, 1991a). Moreover, it is important to recognize that normal behavior is a function of the whole healthy brain with important contributions from both hemispheres entering into virtually every activity, including the very notion of the self (Northoff et al., 2006). This phenomenon has been demonstrated perhaps even more compellingly in functional imaging studies in which bilateral activations are observed for virtually any task, no matter its apparent purity in terms of verbal vs. nonverbal demands, serial vs. holistic processing, or any of the other dichotomies enumerated in Table 3.1 (e.g., Cabeza and Nyberg, 2000; D’Esposito, 2000; Mazziotta, 2000). Still, in most persons, the left hemisphere is the primary mediator of verbal functions, including reading and writing, verbal comprehension and speaking, verbal ideation, verbal memory, and even comprehension of verbal symbols traced on the skin. The left hemisphere also mediates the numerical symbol system. Moreover, left hemisphere lateralization extends to control of posturing, sequencing hand and arm movements, and the bilateral musculature of speech. Processing the linear and rapidly changing acoustic information needed for speech comprehension is performed better by the left compared to the right hemisphere (Beeman and Chiarello, 1998; Howard, 1997). 
In addition, it has been hypothesized but never fully proven that males have stronger left hemisphere lateralization for phonological processing than females (J. Levy and Heller, 1992; Shaywitz et al., 1995; Zaidel, Aboitiz, et al., 1995). An important contribution of the right hemisphere to language processing is the appreciation and integration of relationships in verbal discourse and narrative materials (Beeman and Chiarello, 1998, passim; Jung-Beeman, 2005; Kiehl et al., 1999), which includes the capacity for enjoying a good joke (Beeman, 1998; H. Gardner, 1994). The right hemisphere also appears to provide the possibility of alternative meanings, getting away from purely literal interpretations of verbal material (Bottini et al., 1994; Brownell and Martino, 1998; Fiore and Schooler, 1998). The right hemisphere has some simple language comprehension capacity, as demonstrated by the finding that
following commissurotomy, when speech is directed to the right hemisphere, much of what is heard is comprehended so long as it remains simple (Baynes and Eliassen, 1998; Searleman, 1977). That the right hemisphere has a language capacity can also be inferred in aphasic patients with left-sided lesions who show improvement from their immediate post-stroke deficits accompanied by measurably heightened right hemisphere activity (B.T. Gold and Kertesz, 2000; Heiss et al., 1999; Papanicolaou, Moore, et al., 1988). The right hemisphere is sensitive to speech intonations (Borod, Bloom, and Santschi-Haywood, 1998; Ivry and Lebby, 1998) and is important for meaningfully expressive speech intonation (prosody) (Borod, Bloom, and Santschi-Haywood, 1998; Filley, 1995; E.D. Ross, 2000). It takes the lead in familiar voice recognition (Van Lancker, Kreiman, and Cummings, 1989), plays a role in organizing verbal production conceptually (Brownell and Martino, 1998; Joanette, Goulet, and Hannequin, 1990), and contributes to the maintenance of context-appropriate and emotionally appropriate verbal behavior (Brownell and Martino, 1998; Joanette, Goulet, and Hannequin, 1990). Specific right hemisphere temporal and prefrontal areas contribute to comprehending story meanings (Nichelli, Grafman, et al., 1995). The right hemisphere’s characteristic contributions are not limited to communications but extend to all behavior domains (Lezak, 1994a). Examples of right hemisphere specialization for nonverbal information include the perception of spatial orientation and perspective, tactile and visual recognition of shapes and forms, reception and storage of nonverbalizable visual data, and copying and drawing geometric and representational designs and pictures. 
The left hemisphere seems to predominate in metric distance judgments (Hellige, 1988; McCarthy and Warrington, 1990), while the right hemisphere has superiority in metric angle judgments (Benton, Sivan, et al., 1994; Mehta and Newcombe, 1996; Tranel, Vianna, et al., 2009). Many aspects of arithmetic calculations—for example, those involving spatial organization of problem elements, as distinct from left hemisphere-mediated linear arithmetic problems—have a significant right hemisphere component (Denburg and Tranel, 2011). Some aspects of musical ability are also localized on the right (Peretz and Zatorre, 2003), as are the recognition and discrimination of nonverbal sounds (Bauer and McDonald, 2003). Data from a variety of sources suggest right hemisphere dominance for spatial attention specifically, if not attention generally. Patients with compromised right hemisphere functioning tend to have diminished awareness of or responsiveness to stimuli presented to their left side, reaction times mediated by the right hemisphere are faster than those mediated by the left, and
the right hemisphere is activated equally by stimuli from either side in contrast to more exclusively contralateral left hemisphere activation (Heilman, Watson, and Valenstein, 2011; Meador, Loring, Lee, et al., 1988; Mesulam, 2000b). Moreover, the right hemisphere predominates in directing attention to far space while the left hemisphere directs attention to near space (Heilman, Chatterjee, and Doty, 1995). The appearance of right hemisphere superiority for attention in some situations may stem from its ability to integrate complex, nonlinear information rapidly. Facial recognition studies exemplify the processing differences underlying many aspects of hemisphere specialization. When pictured faces are presented in the upright position to each field separately they are processed more rapidly when presented to the left field/right hemisphere than to the right field/left hemisphere; but no right hemisphere advantage appears when faces are inverted. “It seems that, in the right hemisphere, upright faces are processed in terms of their feature configuration, whereas inverted faces are processed in a piecemeal manner, feature by feature…. In the left hemisphere, both upright and inverted faces seem to be processed in a piecemeal manner” (Tovee, 1996, pp. 134–135).
As illustrated in Figure 3.15 (p. 61), the distinctive processing qualities of each hemisphere become evident in the mediation of spatial relations. Left hemisphere processing tends to break the visual percept into details that can be identified and conceptualized verbally in terms of number or length of lines, size and direction of angles, and so on. In the right hemisphere the tendency is to deal with the same visual stimuli as spatially related wholes. Thus, for most people, the ability to perform such complex visual tasks as the formation of complete impressions from fragmented percepts (the closure function), the appreciation of differences in patterns, and the recognition and remembering of faces depends on the functioning of the right hemisphere. Together the two processing systems provide recognition, storage, and comprehension of discrete and continuous, serial and simultaneous, detailed and holistic aspects of experience across at least the major sensory modalities of vision, audition, and touch. Cognitive alterations with lateralized lesions. In keeping with the robust principles of hemispheric specialization, the most obvious cognitive defect associated with left hemisphere damage is aphasia (Benson and Ardila, 1996; D. Caplan, 2011; Grodzinsky and Amunts, 2006). Other neuropsychological manifestations of left hemisphere dysfunction include impaired verbal memory, verbal fluency deficits, concrete thinking, specific impairments in reading or writing, and impaired arithmetic ability characterized by defects or loss of basic mathematical concepts of operations and even of number. Patients with left hemisphere damage can also lose their ability to perform complex manual—as well as oral—motor sequences (i.e., apraxias) (Harrington and Haaland, 1992; Meador, Loring, Lee, et al., 1999; Schluter et al., 2001).
The diversity of behavioral disorders associated with right hemisphere
damage continues to thwart any neat or simple classification system (S. Clarke, 2001; Feinberg and Farah, 2003b; Filley, 1995). No attempt to include every kind of impairment reported in the literature will be made here. Rather, the most prominent features of right hemisphere dysfunction are described. Patients with right hemisphere damage may be quite fluent, even verbose (Mendoza and Foundas, 2008; Rivers and Love, 1980; E.D. Ross, 2000), but illogical and given to loose generalizations and bad judgment (Stemmer and Joanette, 1998). They are apt to have difficulty ordering, organizing, and making sense out of complex stimuli or situations. These organizational deficits can impair appreciation of complex verbal information so that verbal comprehension may be compromised by confusion of the elements of what is heard, by personalized intrusions, by literal interpretations, and by a generalized loss of gist in a morass of details (Beeman and Chiarello, 1998, passim). Their speech may be uninflected and aprosodic, paralleling their difficulty in comprehending speech intonations (E.D. Ross, 2003).
FIGURE 3.16 Example of spatial dyscalculia by the traumatically injured pediatrician described on p. 438 whose reading inattention is shown in Figure 10.8 (p. 438). Note omission of the 6 on the left of the
problem in the upper left corner; errors on the left side of bottom problem which appear to be due to more than simple inattention; labored but finally correct working out of problem in middle right side of page. The test was taken with no time limit.
Perceptual deficits, particularly left-sided inattention phenomena and deficits in comprehending degraded stimuli or unusual presentations, are not uncommon (Kartsounis, 2010; McCarthy and Warrington, 1990). The visuospatial perceptual deficits that trouble many patients with right-lateralized damage affect different cognitive activities. Arithmetic failures are most likely to appear in written calculations that require spatial organization of the problems’ elements (Denburg and Tranel, 2011; see Fig. 3.16). Visuospatial and other perceptual deficits show up in these patients’ difficulty in copying designs, making constructions, and matching or discriminating patterns or faces (e.g., Tranel, Vianna, et al., 2009). Patients with right hemisphere damage may have particular problems with spatial orientation and visuospatial memory such that they get lost, even in familiar surroundings, and can be slow to learn their way around a new area. Their constructional disabilities may reflect both their spatial disorientation and defective capacity for perceptual or conceptual organization (e.g., Tranel, Rudrauf, et al., 2008). The painful efforts of a right hemisphere stroke patient to arrange plain and diagonally colored blocks according to a pictured pattern (Fig. 3.17a [a-e]) illustrate the kind of solutions available to a person in whom only the left hemisphere is fully intact. This glib 51-year-old retired salesman constructed several simple 2 × 2 block design patterns correctly by verbalizing the relations. “The red one (block) on the right goes above the white one; there’s another red one to the left of the white one.” This method worked so long as the relationships of each block to the others in the pattern remained obvious. When the diagonality of a design obscured the relative placement of the blocks, he could neither perceive how each block fit into the design nor guide himself with verbal cues. 
He continued to use verbal cues, but at this level of complexity his verbalizations only served to confuse him further. He attempted to reproduce diagonally oriented designs by lining up the blocks diagonally (e.g., “to the side,” “in back of”) without regard for the squared (2 × 2 or 3 × 3) format. He could not orient any one block to more than another single block at a time, and he was unable to maintain a center of focus to the design he was constructing. On the same task, a 31-year-old former logger who had had left hemisphere surgery involving the visual association area had no difficulty until he came to a 3 × 3 design (Fig. 3.17b [f, g]). On this design he reproduced the overall pattern immediately but oriented one corner block erroneously. He attempted to reorient it but then turned a correctly oriented block into a 180° error. Though dissatisfied with this solution, he was unable to localize his error or define the simple angulation pattern.
FIGURE 3.17a Attempts of a 51-year-old right hemisphere stroke patient to copy pictured designs with colored blocks. (a) First stage in the construction of a 2 × 2 chevron design. (b) Second stage: the patient does not see the 2 × 2 format and gives up after four minutes. (c) First stage in construction of a 3 × 3 pinwheel pattern (see below). (d) Second stage. (e) Third and final stage. This patient later told his wife that he believed the examiner was preparing him for “architect school.”
FIGURE 3.17b Attempts of a 31-year-old patient with a surgical lesion of the left visual association area to copy the 3 × 3 pinwheel design with colored blocks. (f) Initial solution: 180° rotation of upper left corner block. (g) “Corrected” solution: upper left corner block rotated to correct position and lower right corner rotated 180° to an incorrect position.
Although hemispheric asymmetry and lateralization of function are relative and hypothesis-driven concepts, they have considerable clinical value. Loss of tissue in a hemisphere tends to impair its particular processing capacity. When a lesion has rendered lateralized areas essentially nonfunctional, the intact hemisphere may process activities normally handled by the damaged hemisphere (W.H. Moore, 1984; Papanicolaou et al., 1988; Fig. 3.17a is an example of this phenomenon). Moreover, a diminished contribution from one hemisphere may be accompanied by augmented or exaggerated activity of the other when released from the inhibitory or competitive constraints of normal hemispheric interactions. This phenomenon appears in the verbosity and overwriting of many right hemisphere damaged patients (Lezak and Newman,
1979; see Fig. 3.18). In an analogous manner, patients with left hemisphere disease tend to reproduce the essential configuration but leave out details (see Fig. 3.19). The functional difference between hemispheres also appears in the tendency for patients with left-sided damage to be more accurate in remembering large visually presented forms than the small details making up those forms; but when the lesion is on the right, recall of the details is more accurate than recall of the whole composed figure (Delis, Robertson, and Efron, 1986). Learning and memory are also strongly influenced by the general principles of hemispheric specialization. Thus, relationships between the side of the lesion and the type of learning impairment are fairly consistent. For example, damage to the left hippocampal system produces an amnesic syndrome that affects verbal material (e.g., spoken words, written material) but spares nonverbal material and, in contrast, damage to the right hippocampal system affects nonverbal material (e.g., complex visual and auditory patterns) but spares verbal material (e.g., B. Milner, 1968, 1972; R.G. Morris, Abrahams, and Polkey, 1995; Pillon, Bazin, Deweer, et al., 1999). After damage to the left hippocampus, a patient may lose the ability to learn new names but remain capable of learning new faces and spatial arrangements (Tranel, 1991). With surgical resection of the left temporal lobe, verbal memory—episodic (both short-term and learning), semantic, and remote—may be impaired (Frisk and Milner, 1990; Loring and Meador, 2003b; Seidenberg, Hermann, et al., 1998). Nonverbal (auditory, tactile, visual) memory disturbances, including disturbances such as impaired route learning (Barrash, H. Damasio, et al., 2000), tend to accompany right temporal lobe damage.
FIGURE 3.18 Overwriting (hypergraphia) by a 48-year-old college-educated retired police investigator suffering right temporal lobe atrophy secondary to a local right temporal lobe stroke.
FIGURE 3.19 Simplification and distortions of four Bender-Gestalt designs by a 45-year-old assembly line worker with a high school education. These drawings were made four years after he had incurred left frontal damage in an industrial accident.
Emotional alterations with lateralized lesions. The complementary modes of processing that distinguish the cognitive activities of the two hemispheres
extend to emotional behavior as well (D.M. Bear, 1983; Heilman, Blonder, et al., 2011; Gainotti, 2003). The configurational processing of the right hemisphere lends itself most readily to the handling of the multidimensional and alogical stimuli that convey emotional tone, such as facial expressions (Adolphs, Damasio, and Tranel, 2000; Borod, Haywood, and Koff, 1997; Ivry and Lebby, 1998) and voice quality (Adolphs, Damasio, and Tranel, 2002; Joanette, Goulet, and Hannequin, 1990; Ley and Bryden, 1982). The analytic, bit-by-bit style of the left hemisphere is better suited for processing the words of emotion. A face distorted by fear and the exclamation “I’m scared to death” both convey affective meaning, but the meaning of each is normally processed well by only one hemisphere, the right and left, respectively. Thus, patients with right hemisphere damage tend to experience relative difficulty in discerning the emotional features of stimuli, whether visual or auditory, with corresponding diminution in their emotional responsivity (Adolphs and Tranel, 2004; Borod, Cicero, et al., 1998; Van Lancker and Sidtis, 1992). Impairments in emotional recognition may affect all or only some modalities. Defects in recognizing different kinds of emotional communication (e.g., facial expressions, gestures, prosody [the stresses and intonations that infuse speech with emotional meaning]) can occur independently of one another (Adolphs and Tranel, 2004; Bowers et al., 1993). Left hemisphere lesions typically do not impair processing of facial emotional expressions and emotional prosody. Self-recognition and self-awareness are associated with predominantly right hemisphere involvement (J.P. Keenan et al., 2000), although both hemispheres contribute to processing of self-relevant information (Northoff et al., 2006).
Prefrontal structures, most notably the medial prefrontal cortices regardless of side, play an important role in self-referential processing (Gusnard et al., 2001; Macrae et al., 2004) and in the capacity for introspection (S.M. Fleming et al., 2010). Differences in emotional expression can also distinguish patients with lateralized lesions (Borod, 1993; Etcoff, 1986). Right hemisphere-lesioned patients’ range and intensity of affective intonation are frequently inappropriate (Borod, Koff, Lorch, and Nicholas, 1985; Joanette, Goulet, and Hannequin, 1990; B.E. Shapiro and Danly, 1985). Some investigators have found that the facial behavior of right hemisphere damaged patients is less expressive than that of persons with left hemisphere damage or of normal comparison subjects (e.g., Brozgold et al., 1998; Montreys and Borod, 1998; see Pizzamiglio and Mammucari, 1989, for a different conclusion). The preponderance of research on normal subjects indicates heightened expressiveness on the left side of the face (Borod, Haywood, and Koff, 1997).
These findings are generally interpreted as indicating right hemisphere superiority for affective expression. There is disagreement as to whether right hemisphere impaired patients experience emotions any less than other people. Some studies have found reduced autonomic responses to emotional stimuli in right hemisphere damaged patients (Gainotti, Caltagirone, and Zoccolotti, 1993; Tranel and H. Damasio, 1994). However, given that such patients typically have impaired appreciation of emotionally charged stimuli, it is not entirely clear what is the fundamental deficit here; it could be that emotional experiences in such patients would not be impaired if the patients could apprehend emotional stimuli properly in the first place. Many clinicians have observed strong—but not necessarily appropriate—emotional reactions in patients with right-lateralized damage, leading to the hypothesis that their experience of emotional communications and their capacity to transmit the nuances and subtleties of their own feeling states differ from normal affective processing, leaving them out of joint with those around them (Lezak, 1994; Morrow, Vrtunski, et al., 1981; E.D. Ross and Rush, 1981). Other hemispheric differences have been reported for some of the emotional and personality changes that occur with lateralized brain injury (Adolphs and Tranel, 2004; Gainotti, 2003; Sackeim, Greenburg, et al., 1982). Some patients with left hemisphere lesions exhibit a catastrophic reaction (extreme and disruptive transient emotional disturbance) which may appear as acute—often disorganizing—anxiety, agitation, or tearfulness, disrupting the activity that provoked it. Typically, it occurs when patients are confronted with their limitations, as when taking a test (R.G. Robinson and Starkstein, 2002), and they tend to regain their composure as soon as the source of frustration is removed. 
Although it has been associated with aphasia (Jorge and Robinson, 2002), one study found that more nonaphasic than aphasic patients exhibited this problem (Starkstein, Federoff, et al., 1993). Anxiety is also a common feature of left hemisphere involvement (Gainotti, 1972; Galin, 1974). It may show up as undue cautiousness (Jones-Gotman and Milner, 1977) or oversensitivity to impairments and a tendency to exaggerate disabilities (Keppel and Crowe, 2000). Yet, despite tendencies to be overly sensitive to their disabilities, many patients with left hemisphere lesions ultimately compensate for them well enough to make a satisfactory adaptation to their disabilities and living situations (Tellier et al., 1990). Ben-Yishay and Diller (2011) point out that—regardless of injury site—a catastrophic reaction can occur when patients feel acutely threatened by failure or by a situation which, due to their disability, is perceived as dangerous. It
may be that diminished awareness of their limitations is what protects many patients with right hemisphere lesions from this acute emotional disturbance and why some authorities have associated it with left hemisphere damage. In contrast, patients whose injuries involve the right hemisphere are less likely to be dissatisfied with themselves or their performances than are those with left hemisphere lesions (Keppel and Crowe, 2000) and less likely to be aware of their mistakes (McGlynn and Schacter, 1989). They are more likely to be apathetic (Andersson et al., 1999), to be risk takers (L. Miller and Milner, 1985), and to have poorer social functioning (Brozgold et al., 1998). At least in the acute or early stages of their condition, they may display an indifference reaction, denying or making light of the extent of their disabilities (Darby and Walsh, 2005; Gainotti, 1972). In extreme cases, patients are unaware of such seemingly obvious defects as crippling left-sided paralysis or slurred and poorly articulated speech. In the long run these patients tend to have difficulty making satisfactory psychosocial adaptations (Cummings and Mega, 2003), with those whose lesions are anterior being most maladjusted in all areas of psychosocial functioning (Tellier et al., 1990). The Wada technique for identifying lateralization of function before surgical treatment of epilepsy provided an experimental model of these changes (Jones-Gotman, 1987; Wada and Rasmussen, 1960). The emotional reactions of patients undergoing Wada testing tend to differ depending on which side of the brain is inactivated (Ahern et al., 1994; R.J. Davidson and Henriques, 2000; G.P. Lee, Loring, et al., 1990). Patients whose left hemisphere has been inactivated are tearful and report feelings of depression more often than their right hemisphere counterparts who are more apt to laugh and appear euphoric. 
Since the emotional alterations seen with some stroke patients and in lateralized pharmacological inactivation have been interpreted as representing the tendencies of the disinhibited intact hemisphere, some investigators have hypothesized that each hemisphere is specialized for positive (the left) or negative (the right) emotions (e.g., Root et al., 2006). These positive/negative tendencies have suggested relationships between the lateralized affective phenomena and psychiatric disorders (e.g., Flor-Henry, 1986; G.P. Lee, Loring, et al., 1990). Gainotti, Caltagirone, and Zoccolotti (1993) hypothesized that the emotional processing tendencies of the two hemispheres are complementary: “The right hemisphere seems to be involved preferentially in functions of emotional arousal, intimately linked to the generation of the autonomic components of the emotional response, whereas the left hemisphere seems to play a more important role in functions of intentional control of the emotional
expressive apparatus” (pp. 86–87). They hypothesized further that language development tends to override the left hemisphere’s capacity for emotional immediacy while, in contrast, the more spontaneous and pronounced affective display characteristic of right hemisphere emotionality gives that hemisphere the appearance of superior emotional endowment. These ideas have held up reasonably well with the test of time. For example, a study using EEG and self-report of normal participants’ emotional responses to film clips supported this model of lateralized emotion processing (Hagemann et al., 2005). Thus, these basic characterizations of the emotional “styles” of the two cerebral hemispheres are mostly accurate in their essence. Although studies of depression in stroke patients seem to have produced inconsistent findings (A.J. Carson et al., 2000; Koenigs and Grafman, 2009a; Singh et al., 2000), when these patients are also studied long after the acute event, a pattern appears in which depression tends to evolve—and worsen—in right hemisphere stroke patients and diminishes in those with left-sided lesions. Shimoda and Robinson (1999) found that hospitalized stroke patients with the greatest incidence of depression were those with left anterior hemisphere lesions. At short-term follow-up (3–6 months), proximity of the lesion to the frontal pole and lesion volume correlated with depression in both right and left hemisphere stroke patients. At long-term follow-up (1–2 years), depression was significantly associated with right hemisphere lesion volume and proximity of the lesion to the occipital pole. Moreover, the incidence of depression in patients with left hemisphere disease dropped over the course of the first year (R.G. Robinson and Manes, 2000). Impaired social functioning was most evident in those patients who remained depressed. Women are more likely to be depressed in the acute stages of a left hemisphere stroke than men (Paradiso and Robinson, 1998).
The differences in presentation of depression in right and left hemisphere damaged patients are consistent with what is known about hemisphere processing differences. With left hemisphere damaged patients, depression seems to reflect awareness of deficit: the more severe the deficit and the keener the patient’s awareness of it, the more likely it is that the patient will be depressed. Yet over time, many patients with residual right-sided motor/sensory defects and speech/language deficits make a kind of peace with their disabilities. In right hemisphere damaged patients, awareness of deficit is often muted or even absent (K. Carpenter et al., 1995; Meador, Loring, Feinberg, et al., 2000; Pedersen et al., 1996). These patients tend to be spared the agony of severe depression, particularly early in the course of their condition. When the
lesion is on the right, the emotional disturbance does not seem to arise from awareness of defects so much as from the secondary effects of the patient’s diminished self-awareness and social insensitivity. Patients with right hemisphere lesions who do not appreciate the nature or extent of their disability tend to set unrealistic goals for themselves or to maintain previous goals without taking their new limitations into account. As a result, they frequently fail to realize their expectations. Their diminished capacity for self-awareness and for emotional spontaneity and sensitivity can make them unpleasant to live with and thus more likely to be rejected by family and friends than are patients with left hemisphere lesions. Depression in patients with right-sided damage may take longer to develop than it does in patients with left hemisphere involvement since it is less likely to be an emotional response to immediately perceived disabilities than a more slowly evolving reaction to the development of these secondary consequences. When depression does develop in patients with right-sided disease, however, it can be more chronic, more debilitating, and more resistant to intervention. These descriptions of differences in the emotional behavior of right and left hemisphere damaged patients reflect observed tendencies that are not necessary consequences of unilateral brain disease (Gainotti, 2003). Nor are the emotional reactions reported here associated only with unilateral brain lesions. Mourning reactions naturally follow the experience of personal loss of a capacity whether it be due to brain injury, a lesion lower down in the nervous system, or amputation of a body part. Inappropriate euphoria and self-satisfaction may accompany lesions involving brain areas other than the right hemisphere (McGlynn and Schacter, 1989).
Depression in patients with bilateral lesions may be predicated on small anatomical differences as the incidence of depression is higher with lesions in the dorsolateral prefrontal area, in contrast to a lower incidence of depression with bilateral ventromedial prefrontal lesions, and relative to lesions outside the frontal lobes (Koenigs, Huey, et al., 2008; also see Koenigs and Grafman, 2009b). Further, psychological stressors associated with stroke (Fang and Cheng, 2009) and/or premorbid personality (R.G. Robinson and Starkstein, 2005) can affect the quality of patients’ responses to their disabilities. Thus, the clinician should never be tempted to predict the site of damage from the patient’s mood alone. While knowledge of the asymmetrical, lateralized pattern of cerebral organization adds to the understanding of many cognitive and emotional phenomena associated with unilateral lesions or demonstrated in commissurotomized patients or laboratory studies of normal subjects, it is important not to generalize these findings to the behavior of persons whose
brains are intact. In normal persons, the functioning of the two hemispheres is tightly yoked by the corpus callosum so that neither can be engaged without significant activation of the other (Lezak, 1982b). As much as cognitive styles and personal tastes and habits might seem to reflect the processing characteristics of one or the other hemisphere, these qualities appear to be integral to both hemispheres (Arndt and Berger, 1978; Sperry et al., 1979). We cannot emphasize enough that, "In the normal intact state, the conscious activity is typically a unified and coherent bilateral process that spans both hemispheres through the commissures" (Sperry, 1976).

Advantages of hemisphere interaction. Simple tasks in which the processing capacity of one hemisphere is sufficient may be performed faster and more accurately than if both hemispheres are engaged (Belger and Banich, 1998; Ringo et al., 1994). However, the reality is that very few tasks rely exclusively on one cerebral hemisphere. Interaction between the hemispheres also has important mutually enhancing effects. Complex mental tasks such as reading, arithmetic, and word and object learning are performed best when both hemispheres can be actively engaged (Belger and Banich, 1998; Huettner et al., 1989; Weissman and Banich, 2000). Other mutually enhancing effects of bilateral processing show up in the superior memorizing and retrieval of both verbal and configurational material when simultaneously processed (encoded) by the verbal and configurational systems (B. Milner, 1978; Moscovitch, 1979; A. Rey, 1959; see also pp. 849–850 on use of double encoded stimuli for testing memory effort); in enhanced cognitive efficiency of normal subjects when hemispheric activation is bilateral rather than unilateral (J.-M.
Berger, Perret, and Zimmermann, 1987; Tamietto et al., 2007); and in better performances of visual tasks by commissurotomized patients when both hemispheres participate than when vision is restricted to either hemisphere (Sergent, 1991a, b; E. Zaidel, 1979). Moreover, functional imaging studies in healthy participants exhibit bilateral activation, no matter the task, making it abundantly clear that both hemispheres contribute to almost every task with any degree of cognitive complexity (Cabeza and Nyberg, 2000). The cerebral processing of music illuminates the differences in what each hemisphere contributes, the complexities of hemispheric interactions, and how experience can alter hemispheric roles (Peretz and Zatorre, 2003). The left hemisphere tends to predominate in the processing of sequential and discrete tonal components of music (M.I. Botez and Botez, 1996; Breitling et al., 1987; Gaede et al., 1978). Inability to use both hands to play a musical instrument (bimanual instrument apraxia) has been reported with left hemisphere lesions that spare motor functions (Benton, 1977a). The right hemisphere
predominates in melody recognition and in melodic singing (H.W. Gordon and Bogen, 1974; Samson and Zatorre, 1988; Yamadori et al., 1977). Its involvement with chord analysis is generally greatest for musically untrained persons (Gaede et al., 1978). Training can alter these hemispheric biases so that, for musicians, the left hemisphere predominates for melody recognition (Bever and Chiarello, 1974; Messerli, Pegna, and Sordet, 1995), tone discrimination (Mazziotta et al., 1982; Shanon, 1980), and musical judgments (Shanon, 1980, 1984). Moreover, intact, untrained persons tend not to show lateralized effects for tone discrimination or musical judgments (Shanon, 1980, 1984). Taken altogether, these findings suggest that while cerebral processing of different components of music is lateralized with each hemisphere predominating in certain aspects, both hemispheres are needed for musical appreciation and performance (Bauer and McDonald, 2003). This point was emphatically demonstrated in a longitudinal study which found that when it comes to "real music," as opposed to laboratory experiments, musical competence is highly individualized and appears to rely on widely distributed neuronal networks in both hemispheres (Altenmuller, 2003). Given these many studies, it is interesting to note that strong, reliable relationships between focal brain lesions and impaired music processing have been surprisingly elusive (E. Johnsen, Tranel, et al., 2009). The bilateral integration of cerebral function is also highlighted by creative artists, who typically have intact brains. Making music, for example, is nearly always a two-handed activity. For instruments such as guitars and the entire violin family, the right hand performs those aspects of the music that are mediated predominantly by the right hemisphere, such as expression and tonality, while the left hand interprets the linear sequence of notes best deciphered by the left hemisphere.
Right-handed artists do their drawing, painting, sculpting, and modeling with the right hand, with perhaps an occasional assist from the left. Thus, by its very nature, the artist’s performance involves the smoothly integrated activity of both hemispheres. The contributions of each hemisphere are indistinguishable and inseparable as are the artist’s two eyes and two ears guiding the two hands or the bisymmetrical speech and singing structures that together render the artistic production.
Longitudinal Organization
Although no two human brains are exactly alike in their structure, all normally developed brains tend to share the same major distinguishing features (see Fig. 3.20). The external surface of each half of the cerebral cortex is wrinkled into a complex of ridges or convolutions called gyri (sing., gyrus), which are separated by two deep fissures and many shallow clefts, the sulci (sing., sulcus). The two prominent fissures and certain of the major sulci divide each hemisphere into four lobes: occipital, parietal, temporal, and frontal. For detailed delineations of cortical features and landmarks, the reader is referred to basic neuroanatomy textbooks, such as Blumenfeld (2010) or Montemurro and Bruni (2009); Mendoza and Foundas (2008) relate detailed anatomic features to brain function. The central sulcus divides the cerebral hemispheres into anterior and posterior regions. Immediately in front of the central sulcus lies the precentral gyrus which contains much of the primary motor or motor projection area. The entire area forward of the central sulcus is known as the precentral or prerolandic area, while the entire area forward of the precentral gyrus is known as the prefrontal cortex. The bulk of the primary somesthetic or somatosensory projection area is located in the gyrus just behind the central sulcus, called the postcentral gyrus. The area behind the central sulcus is also known as the retrorolandic or postcentral area. Certain functional systems have primary or significant representation on the cerebral cortex with sufficient regularity that the identified lobes of the brain provide a useful anatomical frame of reference for functional localization, much as a continent provides a geographical frame of reference for a country. Nonetheless, the lobes were originally defined solely on the basis of their gross, macroscopic appearance, and thus many functionally definable areas overlap two or even three lobes. 
For example, the boundary between the parietal and occipital lobes is arbitrarily defined to be in the vicinity of a minor, fairly irregular sulcus, the parieto-occipital sulcus, lying in an overlap zone for visual, auditory, and somatosensory functions. The parieto-occipital sulcus is usually better seen on the mesial aspect of the hemisphere, where it more clearly provides a demarcation between the parietal and occipital lobes.
FIGURE 3.20 The lobe-based divisions of the human brain and their functional anatomy. (From Strange, 1992.)
A two-dimensional—longitudinal, in this case—organization of cortical functions lends itself to a schema that offers a framework for conceptualizing cortical organization. In general, the posterior regions of the brain, behind the central sulcus, are dedicated to input systems: sensation and perception. The primary sensory cortices for vision, audition, and somatosensory perception are located in the posterior sectors of the brain in occipital, temporal, and parietal regions, respectively. Thus, in general, apprehension of sensory data from the world outside is mediated by posteriorly situated brain structures. Note that the "world outside" is actually two distinct domains: (1) the world that is outside the body and brain; and (2) the world that is outside the brain but inside the body. The latter, the soma, includes the smooth muscle, the viscera, and other bodily structures innervated by the central nervous system. The anterior brain regions, in front of the central sulcus, generally function as output systems, specialized for the execution of behavior. Thus the primary motor cortices are located immediately anterior to the rolandic sulcus. The motor area for speech, known as Broca’s area, is located in the left frontal operculum (Latin: lid-like structure). The right hemisphere counterpart of Broca’s area, in the right frontal operculum, is important for maintenance of prosody. Perhaps most important, a variety of higher-order executive functions, such as judgment, decision making, and the capacity to construct and implement various plans of action are associated with structures in the anterior frontal lobes. Overall, this longitudinal framework can be helpful in
conceptualizing specialization of brain functions.

FUNCTIONAL ORGANIZATION OF THE POSTERIOR CORTEX

Three primary sensory areas—for vision, hearing, and touch—are located in the posterior cortex. The occipital lobes at the most posterior portion of the cerebral hemisphere constitute the site of the primary visual cortex (see Fig. 3.20, p. 69). The postcentral gyrus, at the most forward part of the parietal lobe, contains the primary sensory (somatosensory) projection area. The primary auditory cortex is located on the uppermost fold of the temporal lobe close to where it joins the parietal lobe. Kinesthetic and vestibular functions are mediated by areas low on the parietal lobe near the occipital and temporal lobe boundary regions. Sensory information undergoes extensive associative elaboration through reciprocal connections with other cortical and subcortical areas. Although the primary centers of the major functions served by the posterior cerebral regions are relatively distant from one another, secondary association areas gradually fade into tertiary overlap, or heteromodal, zones in which auditory, visual, and body-sensing components commingle. As a general rule, the character of the defects arising from lesions of the association areas of the posterior cortex varies according to the extent to which the lesion involves each of the sense modalities. Any disorder with a visual component, for example, may implicate some occipital lobe involvement. If a patient with visual agnosia also has difficulty estimating close distances or feels confused in familiar surroundings, then parietal lobe areas serving spatially related functions may also be affected. Knowledge of the sites of the primary sensory centers and of the behavioral correlates of lesions to these sites and to the intermediate association areas enables the clinician to infer the approximate location of a lesion from the patient’s behavioral symptoms (see E.
Goldberg, 1989, 1990, for a detailed elaboration of this functional schema). However, the clinician must always keep in mind that, in different brains, different cognitive functions may use the same or closely related circuits, and that similar functions may be organized by different circuits (Fuster, 2003).
The Occipital Lobes and Their Disorders The visual pathway travels from the retina through the lateral geniculate nucleus of the thalamus to the primary visual cortex. A lesion anywhere in the path between the lateral geniculate nucleus and primary visual cortex can
produce a homonymous hemianopia (see p. 58). Lesions of the primary visual cortex result in discrete blind spots in the corresponding parts of the visual fields, but typically do not alter the comprehension of visual stimuli or the ability to make a proper response to what is seen.

Blindness and associated problems
The nature of the blindness that accompanies total loss of function of the primary visual cortex varies with the extent of involvement of subcortical or associated cortical areas. Some visual discrimination may take place at the thalamic level, but the cortex is generally thought to be necessary for the conscious awareness of visual phenomena (Celesia and Brigell, 2005; Koch and Crick, 2000; Weiskrantz, 1986). When damage is restricted to the primary visual cortex bilaterally (a fairly rare condition), the patient appears to have lost the capacity to distinguish forms or patterns while remaining responsive to light and dark, a condition called cortical blindness (Barton and Caplan, 2001; Luria, 1966). Patients may exhibit blindsight, a form of visually responsive behavior without experiencing vision (Danckert and Rossetti, 2005; Stoerig and Cowey, 2007; Weiskrantz, 1996). This phenomenon suggests that limited information in the blind visual field may project through alternate pathways to visual association areas. Total blindness due to brain damage appears to require large bilateral occipital cortex lesions (Barton and Caplan, 2001). In some patients, blindness due to cerebral damage may result from destruction of thalamic areas as well as the visual cortex or the pathways leading to it. In denial of blindness due to brain damage, patients lack appreciation that they are blind and attempt to behave as if sighted, giving elaborate explanations and rationalizations for difficulties in getting around, handling objects, and other manifestly visually dependent behaviors (Celesia and Brigell, 2005; Feinberg, 2003). This denial of blindness, sometimes called Anton’s syndrome, may occur with several different lesion patterns, but typically the lesions are bilateral and involve the occipital lobe (Goldenberg, Mullbacher, and Nowak, 1995; McGlynn and Schacter, 1989; Prigatano and Wolf, 2010).
Such denial may be associated with disruption of corticothalamic connections and breakdown of sensory feedback loops; there are many theories about the etiology of this and other related conditions (Adair and Barrett, 2011).

Visual agnosia and related disorders
Lesions involving the visual association areas give rise to several types of
visual agnosia and other related disturbances of visual recognition and visual perception (Benson, 1989; A.R. Damasio, Tranel, and Rizzo, 2000; E. Goldberg, 1990). Such lesions are strategically situated so that basic vision is spared: the primary visual cortex is mostly or wholly intact, and the patient is not blind. The common sites of damage associated with visual agnosia include the ventral sector of the visual association cortices in the lower part of Brodmann areas 18/19 and extending into the occipitotemporal transition zone in Brodmann area 37, and include the fusiform gyrus (see Fig. 3.21). Damage to the upper sector of the visual association cortices, the dorsal part of Brodmann areas 18/19 and transitioning into the occipitoparietal region in Brodmann areas 7 and 39, produces visually related disturbances in spatial orientation and movement perception. Visual agnosia refers to a variety of relatively rare visual disturbances in which visual recognition is defective in persons who can see and who are normally knowledgeable about information coming through other perceptual channels (A.R. Damasio, Tranel, and H. Damasio, 1989; Farah, 1999; Lissauer, [1888] 1988). Most visual agnosias are associated with bilateral lesions to the occipital, occipitotemporal, or occipitoparietal regions (Tranel, Feinstein, and Manzel, 2008).
FIGURE 3.21 Brodmann’s cytoarchitectural map of the human brain, depicting different areas (marked by symbols and numbers) defined on the basis of small differences in cortical cell structure and organization. This figure shows lateral left hemisphere (upper) and mesial right hemisphere (lower) views. The Brodmann areas are comparable on the left and right sides of the brain, although specific areas can differ notably in size and configuration. (From Heilman and Valenstein, 2011).
Lissauer (1890) divided visual agnosia into two basic forms, apperceptive and associative. Associative agnosia refers to a failure of recognition due to defective retrieval of knowledge pertinent to a given stimulus. The problem is due to faulty sensory-specific memory: the patient is unable to recognize a stimulus (i.e., to know its meaning) despite being able to perceive the stimulus normally (e.g., to see shape, color, texture). Patients with associative visual agnosia can perceive the whole of a visual stimulus, such as a familiar object, but cannot recognize it although they may be able to identify it by touch, sound, or smell (A.R. Damasio, Tranel, and H. Damasio, 1989). Apperceptive agnosia refers to defective integration of otherwise normally perceived components of a stimulus. This problem is more a failure of perception: these patients fail to recognize a stimulus because they cannot integrate the perceptual elements of
the stimulus, even though individual elements are perceived normally (M. Grossman, Galetta, and D’Esposito, 1997; see Humphreys, 1999, for case examples). They may indicate awareness of discrete parts of a printed word or a phrase, or recognize elements of an object without organizing the discrete percepts into a perceptual whole. Drawings by these patients are fragmented: bits and pieces are recognizable but not joined. They cannot recognize an object presented in unconventional views, such as identifying a teapot usually seen from the side but now viewed from the top (Davidoff and Warrington, 1999; for test stimuli see Warrington, 1984; also see p. 44). The terms associative and apperceptive agnosia have remained useful even if the two conditions have some overlap. Clinically, it is usually possible to classify an agnosic patient as having primarily a disturbance of memory (associative agnosia) or primarily a disturbance of perception (apperceptive agnosia) (Riddoch and Humphreys, 2003). This classification has important implications for the management and rehabilitation of these patients (M.S. Burns, 2004; Groh-Bordin and Kerkhoff, 2010). It also maps onto different sites of neural dysfunction. For example, associative visual agnosia is strongly associated with bilateral damage to higher order association cortices in the ventral and mesial occipitotemporal regions, whereas apperceptive visual agnosia is associated with unilateral or bilateral damage to earlier, more primary visual cortices. To diagnose agnosia, it is also critical to establish that the patient’s defect is not one of naming. Naming and recognition are two different capacities, and they are separable both cognitively and neurally. 
Although recognition of an entity under normal circumstances is frequently indicated by naming, there is a basic difference between knowing and retrieving the meaning of a concept (its functions, features, characteristics, relationships to other concepts), and knowing and retrieving the name of that concept (what it is called). It is important to maintain the distinction between recognition, which can be indicated by responses signifying that the patient understands the meaning of a particular stimulus, and naming, which may not—and need not—accompany accurate recognition. The examiner can distinguish visual object agnosia from a naming impairment by asking the patient who cannot name the object to give any identifying information, such as how it is used (see also Kartsounis, 2010). Moreover, the discovery of deficits for specific categories (e.g., animals vs. plants; living things vs. nonliving things) has made apparent the highly detailed and discrete organization of that part of the cortex essential for semantic processing (Mahon and Caramazza, 2009; Warrington and Shallice, 1984; see visual object agnosia, below).
Simultaneous agnosia, or simultanagnosia, is a component of Balint’s syndrome. Simultanagnosia (also known as visual disorientation) appears as an inability to perceive more than one object or point in space at a time (Coslett and Lie, 2008; A.R. Damasio, Tranel, and Rizzo, 2000; Rafal, 1997a). This extreme perceptual limitation impairs these patients’ ability to move about: they get lost easily; even reaching for something in their field of vision becomes difficult (L.C. Robertson and Rafal, 2000). In addition to simultanagnosia, full-blown Balint’s syndrome includes defects in volitional eye movements (ocular apraxia, also known as psychic gaze paralysis) and impaired visually guided reaching (optic ataxia). These abnormalities in control of eye movements result in difficulty in shifting visual attention from one point in the visual field to another (Pierrot-Deseilligny, 2011; Striemer et al., 2007; Tranel and Damasio, 2000). This problem has also been characterized as reduced access to "spatial representations that normally guide attention from one object to another in a cluttered field" (L.C. Robertson and Rafal, 2000).
Left hemisphere lesions have been associated with a variety of visual agnosias. Color agnosia is loss of the ability to retrieve color knowledge that is not due to faulty perception or impaired naming. Patients with color agnosia cannot remember the characteristic colors of various entities, recall entities that appear in certain colors, choose the correct color for an entity, or retrieve basic knowledge about color (e.g., know that mixing red and yellow will make orange). As color agnosia is rare, only a few well-studied cases have been reported (see Tranel, 2003, for review). The neuroanatomical correlates of color agnosia include the occipitotemporal region, either unilaterally on the left or bilaterally. It is not entirely clear how this pattern differs from central achromatopsia (acquired color blindness; e.g., see Tranel, 2003), although color agnosia is probably associated with lesions that are somewhat anterior to those responsible for central achromatopsia. Functional imaging studies have shown activations in the left inferior temporal region, bilateral fusiform gyrus, and right lingual gyrus during a condition in which subjects were asked to retrieve previously acquired color knowledge (Chao and Martin, 1999; A. Martin, Haxby, et al., 1995). A. Martin and colleagues noted that these regions are not activated by color perception per se, and thus functional imaging supports the same conclusion hinted at by lesion studies: that the neural substrates for color perception and color knowledge are at least partially separable. Inability to comprehend pantomimes (pantomime agnosia), even when the ability to copy them remains intact, has been reported with lesions confined to the occipital lobes (Goodale, 2000; Rothi, Mack, and Heilman, 1986).
Another disorder of visual perception associated mainly with lesions to the left inferior occipital cortex and its subcortical connections is pure alexia, a reading problem that stems from defects of visual recognition, organization, and scanning rather than from defective comprehension of written material. The latter problem usually occurs only with parietal damage or in aphasia (Coslett, 2011; Kohler and Moscovitch, 1997). Pure alexia is frequently accompanied by
defects in color processing, especially color anomia (impaired color naming) (Benson, 1989; A.R. Damasio and H. Damasio, 1983). One form of acalculia (literally, "no counting"), a disorder that Grewel (1952) considered a primary type of impaired arithmetic ability in which the calculation process itself is affected, may result from visual disturbances of symbol perception associated with left occipital cortex lesions (Denburg and Tranel, 2011). Some visual agnosias are particularly associated with unilateral damage (see Chaves and Caplan, 2001). Associative visual agnosia usually occurs with lesions of the left occipitotemporal region (De Renzi, 2000). Visual object agnosia can develop for specific categories of stimuli at a basic semantic level which accounts for its predominance with left posterior lesions (Capitani et al., 2009). Because this condition usually affects the different stimulus categories selectively (Farah and McClelland, 1991; Forde and Humphreys, 1999; Warrington and Shallice, 1984), it has been termed category specific semantic impairment (see Mahon and Caramazza, 2009). Patients with this condition experience major defects in the recognition of categories of living things, especially animals, with relative or even complete sparing of categories of artifactual entities (e.g., tools and utensils). Less commonly, the profile is reversed, and the patient cannot recognize tools/utensils but performs normally for animals (Tranel, H. Damasio, and Damasio, 1997; Warrington and McCarthy, 1994). Lesions in the right mesial occipital/ventral temporal region, and in the left mesial occipital region, have been associated with defective recognition of animals; for lesions in the left occipital-temporal-parietal junction the association appears to be with defective recognition of tools/utensils (Tranel, H. Damasio, and Damasio, 1997).
Other visuoperceptual anomalies that can occur with occipital lesions include achromatopsia (loss of color vision in one or both visual half-fields, or in a quadrant of vision), astereopsis (loss of stereoscopic vision), metamorphopsias (visual distortions), monocular polyopsias (double, triple, or more vision in one eye), optic allesthesia (misplacement of percepts in space), and palinopsia (perseverated visual percept) (Barton and Caplan, 2001; Morland and Kennard, 2002; Zihl, 1989). These are very rare conditions but of theoretical interest as they may provide clues to cortical organization and function. Lesions associated with these conditions tend to involve the parietal cortex as well as the occipital cortex.

Prosopagnosia
Prosopagnosia (face agnosia), the inability to recognize familiar faces, is the most frequently identified and well-studied of the visual agnosias (A.R.
Damasio, Tranel, and H. Damasio, 1990). Undoubtedly this owes in large measure to the fact that faces are such an important and intriguing class of visual stimuli. Millions of faces are visually similar, yet many people learn to recognize thousands of distinct faces. Moreover, faces are recognizable under many different conditions, such as from obscure angles (e.g., from behind, from the side), adorned with various artifacts (e.g., hat, hockey helmet), and after aging has radically altered the physiognomy. Faces also convey important social and emotional information, providing clues about the affective state of a person or about potential courses of social behavior (e.g., approach or avoidance: Darwin, 1872/1955; Adolphs, Tranel, and Damasio, 1998). The remarkable cross-cultural and cross-species consistencies in face processing provide further proof of the fundamental importance of this class of stimuli (cf. Ekman, 1973; Fridlund, 1994). Patients with prosopagnosia typically can no longer recognize the faces of previously known individuals and are also unable to learn new faces—hence, the impairment covers both the retrograde and anterograde aspects of memory. These patients are unable to recognize the faces of family members, close friends, and—in the most severe cases—even their own face (e.g., in photographs or in a mirror). The impairment is modality-specific in that it is confined to vision; thus, for example, a prosopagnosic patient can readily identify familiar persons from hearing their voices. Even within vision, the disorder is highly specific, and may not affect recognition from gait or other movement cues. The classic neural correlate of prosopagnosia is bilateral occipitotemporal damage in the cortex and underlying white matter of the ventral occipital association regions and the transition zone between occipital lobe and temporal lobe (A.R. Damasio, H. Damasio, and Rizzo, 1982; A.R. Damasio, Tranel, and H. Damasio, 1990).
However, prosopagnosia has occasionally been reported with lesions restricted to the right hemisphere (De Renzi, Perani, Carlesimo, et al., 1994; Landis, Cummings, Christen, et al., 1986; Vuilleumier, 2001). Characteristic hemisphere processing differences show up in face recognition performances of patients with unilateral occipital lobe lesions (A.R. Damasio, Tranel, and Rizzo, 2000). Left occipital lesioned patients using right hemisphere processing strategies form their impressions quickly but may make semantic (i.e., naming) errors. With right occipital lesions, recognition proceeds slowly and laboriously in a piecemeal manner, but may ultimately be successful. Oliver Sacks richly described the extraordinary condition of prosopagnosia in his book The Man Who Mistook His Wife for a Hat (1987). His patient
suffered visual agnosia on a broader scale, with inability to recognize faces as just one of many recognition deficits. In patients with prosopagnosia the problem with faces is usually the most striking, but the recognition defect is often not confined to faces. Careful investigation may uncover impaired recognition of other visual entities at the normal level of specificity. The key factors that make other categories vulnerable to defective recognition are whether stimuli are relatively numerous and visually similar, and whether the demands of the situation call for specific identification. Thus, for example, prosopagnosic patients may not be able to identify a unique car or a unique house, even if they are able to recognize such entities generically; e.g., cars as cars and houses as houses. These findings demonstrate that the core defect in prosopagnosia is the inability to disambiguate individual visual stimuli. In fact, cases have been reported in which the most troubling problem for the patient was in classes of visual stimuli other than human faces—for example, a farmer who lost his ability to recognize his individual dairy cows, and a bird-watcher who became unable to tell apart various subtypes of birds (Assal et al., 1984; B. Bornstein et al., 1969). Another interesting dissociation is that most prosopagnosics can recognize facial expressions of emotion (e.g., happy, angry), and can make accurate determinations of gender and age based on face information (Humphreys et al., 1993; Tranel, Damasio, and H. Damasio, 1988). With regard to emotional expressions, the reverse dissociation can occur; for example, bilateral damage to the amygdala produces an impairment in recognizing facial expressions such as fear and surprise, but spares the ability to recognize facial identity (Adolphs, Tranel, and Damasio, 1995). An especially intriguing finding is "covert" or "non-conscious" face recognition in prosopagnosic patients.
Despite a profound inability to recognize familiar faces consciously, prosopagnosic patients often have accurate, above-chance discrimination of familiar faces when tested with covert or implicit measures. For example, when prosopagnosics were presented with either correct or incorrect face-name pairs, the patients produced larger amplitude skin conductance responses (SCRs) to the correct pairs (Bauer, 1984; Bauer and Verfaellie, 1988). Rizzo and coworkers (1987) reported that prosopagnosic patients produced different patterns of eye movement scanpaths for familiar faces, compared to unfamiliar ones. De Haan and his colleagues (1987a,b) used a reaction time paradigm in which prosopagnosic patients had to decide whether two photographs were of the same or different individuals. They found that reaction time was systematically faster for familiar faces compared to unfamiliar ones. In other studies, SCRs
were recorded while prosopagnosic patients viewed well-known sets of faces randomly mixed with new faces (Tranel and Damasio, 1985; Tranel, Damasio, and H. Damasio, 1988). The patients produced significantly larger SCRs to familiar faces compared to unfamiliar ones. Covert face recognition has also been reported in developmental (congenital) prosopagnosia (R.D. Jones and Tranel, 2001). Oliver Sacks (2010) estimated that up to 10% of normal persons have weak face recognition, often occurring on a familial basis. In this it is similar to established distributions of other biologically related cognitive skills. While patients with prosopagnosia can often recognize familiar persons upon seeing their distinctive gait, patients with lesions in more dorsal occipitoparietal regions, who typically have intact recognition of face identity, often have defective motion perception and impaired recognition of movement. These findings make evident the separable and distinctive functions of the “dorsal” and “ventral” visual systems (see below).

Two visuoperceptual systems
A basic anatomic dimension that differentiates visual functions has to do with a dorsal (top side of the cerebrum)-ventral (bottom) distinction (see Fig. 3.22). Within this dorsal-ventral distinction are two well-established functional pathways in the visual system (Goodale, 2000; Mesulam, 2000b; Ungerleider and Mishkin, 1982). One runs dorsally from the occipital to the parietal lobe. This occipital-parietal pathway is involved with spatial analysis and spatial orientation. It is specialized for visual “where” types of information, and hence is known as the dorsal “where” pathway. The occipital-temporal pathway, which takes a ventral route from the occipital lobe to the temporal lobe, conveys information about shapes and patterns. Its specialization is visual “what” types of information, and hence it is known as the ventral “what” pathway. This basic distinction between the “what” and “where” visual pathways provides a useful context for understanding the classic visual syndromes, such as prosopagnosia (what), achromatopsia (what), and Balint’s syndrome (where).
FIGURE 3.22 Lateral view of the left hemisphere, showing the ventral “what” and dorsal “where” visual pathways in the occipital-temporal and occipital-parietal regions, respectively. The pathways are roughly homologous in left and right hemispheres. Figure courtesy of: http://en.wikipedia.org/wiki/File:Ventraldorsal_streams.svg.
The Posterior Association Cortices and Their Disorders

Association areas in the parieto-temporo-occipital region are situated just in front of the visual association areas and behind the primary sensory strip (see Fig. 3.20, p. 69). These higher order association cortices include significant parts of the parietal and occipital lobes and some temporal association areas. Functionally, higher order association cortices (secondary, tertiary) are the site of cortical integration for all behavior involving vision, touch, body awareness and spatial orientation, verbal comprehension, localization in space, abstract and complex cognitive functions of mathematical reasoning, and the formulation of logical propositions that have their conceptual roots in basic visuospatial experiences such as “inside,” “bigger,” “and,” or “instead of.” As it is within these areas that intermodal sensory integration takes place, this region has been deemed “an association area of association areas” (Geschwind, 1965), “heteromodal association cortex” (Mesulam, 2000b), and “multimodal sensory convergence areas” (Heilman, 2002). A variety of apraxias (inability to perform previously learned purposeful movements) and agnosias have been associated with parieto-temporo-occipital
lesions. Most of them have to do with verbal or with nonverbal stimuli but not with both, and thus are asymmetrically localized. A few occur with lesions in either hemisphere. Constructional disorders are among the most common disabilities associated with lesions to the posterior association cortices in either hemisphere (Benton and Tranel, 1993; F.W. Black and Bernard, 1984; De Renzi, 1997b), reflecting the involvement of both hemispheres in the multifaceted demands of such tasks (see Chapter 14). They are impairments of the “capacity to draw or construct two- or three-dimensional figures or shapes from one- and two-dimensional units” (Strub and Black, 2000) and seem to be closely associated with perceptual defects (Sohlberg and Mateer, 2001). Constructional disorders take different forms depending on the hemispheric side of the lesion (Laeng, 2006). Left-sided lesions are apt to disrupt the programming or ordering of movements necessary for constructional activity (Darby and Walsh, 2005; Hecaen and Albert, 1978). Defects in design copies drawn by patients with left hemisphere lesions appear as simplification and difficulty in making angles. Visuospatial defects associated with impaired understanding of spatial relationships or defective spatial imagery tend to underlie right hemisphere constructional disorders (Pillon, 1979). Diagonality in a design or construction can be particularly disorienting to patients with right hemisphere lesions (B. Milner, 1971; Warrington, James, and Kinsbourne, 1966). The drawings of patients with right-sided involvement suffer from a tendency to a counterclockwise tilt (rotation), fragmented percepts, irrelevant overelaborativeness, and inattention to the left half of the page or the left half of elements on the page (Diller and Weinberg, 1965; Ducarne and Pillon, 1974; Warrington, James, and Kinsbourne, 1966; see Fig.
3.23a and b for freehand drawings produced by left and right hemisphere damaged patients showing typical hemispheric defects). Assembling puzzles in two- and three-dimensional space may be affected by both right and left hemisphere lesions (E. Kaplan, 1988). The relative frequency with which left versus right hemisphere damaged patients manifest constructional disorders has not been fully clarified. In general, such disorders are probably more common or at least more severe and long lasting with right hemisphere lesions (Y. Kim et al., 1984; Sunderland, Tinson, and Bradley, 1994; Warrington, James, and Maciejewski, 1986). One complicating factor in this literature is that some studies excluded patients with aphasia, and other studies included them (Arena and Gainotti, 1978). Task difficulty is another relevant factor contributing to conflicting reports about constructional disorders. For example, Benton (1984) gave his
patients a difficult three-dimensional construction task while Arena and Gainotti (1978) gave their patients relatively simple geometric designs to copy. Still, a lesion in the right posterior association cortices is probably more likely to produce visuoconstruction defects than its left-sided counterpart. The integration of sensory, motor, and attentional signals within the posterior parietal cortex enables the direction and shifting of attention and response which are prerequisites for effectively dealing with space and with tasks that make demands on spatial processing (Farah, Wong, et al., 1989; Mesulam, 1983; J.F. Stein, 1991). One identified function mediated in the parietal lobes is the ability to disengage attention in order to be able to reengage it rapidly and correctly: parietal lobe damage significantly slows the disengagement process (L.C. Robertson and Rafal, 2000), with the greatest slowing occurring when the lesion is on the right (Morrow and Ratcliff, 1988; Posner, Walker, et al., 1984).
FIGURE 3.23 (a) This bicycle was drawn by the 51-year-old retired salesman who constructed the block designs of Figure 3.17a. This drawing demonstrates that inattention to the left side of space is not due to carelessness, as the patient painstakingly provided details and was very pleased with his performance. (b) This bicycle was drawn by a 24-year-old college graduate almost a year after he received a severe injury to the left side of his head. He originally drew the bike without pedals, adding them when asked, “How do you make it go?”
Short-term memory disorders associated with lesions to the inferior parietal lobule (the lower part of the parietal lobe lying just above the temporal lobe) reflect typical hemispheric dominance patterns (Mayes, 2000b; Vallar and Papagno, 2002). Thus, with left-sided lesions in this area, a verbal short-term memory impairment reduces the number of digits, tones (W.P. Gordon, 1983), or words (Risse et al., 1984) that can be recalled immediately upon hearing them. In contrast, patients with comparable right-sided lesions
show reduced spatial short-term memory and defective short-term recall for geometric patterns. Direct cortical stimulation studies have also implicated this region as important for short-term memory (often referred to as “working memory” in this literature, especially in functional imaging studies) (Mayes, 1988; Ojemann, Cawthon, and Lettich, 1990). Functional neuroimaging has highlighted this inferior parietal region and, usually, dorsolateral prefrontal regions as well when investigating verbal (left side) or spatial (right side) cerebral activity during short-term memory tasks (Linden, 2007; E.E. Smith and Jonides, 1997; Wager and Smith, 2003). Hécaen (1969) associated difficulties in serial ordering with impairment of the parieto-temporo-occipital area of both the left and right hemispheres. Perception of the temporal order in which stimuli are presented is much more likely to be impaired by left than right hemisphere lesions involving the posterior association areas (Carmon and Nachson, 1971; von Steinbüchel, Wittman, et al., 1999). However, when the stimulus array includes complex spatial configurations, then patients with right hemisphere lesions do worse than those with left-sided lesions (Carmon, 1978). Moreover, right-sided lesions of the parieto-temporo-occipital area can interfere with the comprehension of order and sequence so that the patient has difficulty dealing with temporal relationships and making plans (Milberg, Cummings, et al., 1979). An exceptionally bright medical resident sustained a right temporal area injury in a skiing accident. He sought neuropsychological advice when he found he was unable to organize a research report he had begun preparing before the accident. On the WAIS (it was that long ago) he achieved scores in the superior and very superior ranges on all tests except for a low average Picture Arrangement.
Pursuing what seemed to be a sequencing problem, he was given the Shipley Institute of Living Scale, performing as well as expected on the vocabulary section, but making many errors on the items calling for deducing sequence patterns.
Similar types of laterality effects occur with auditory stimuli such that left-sided damage impairs temporal processing (duration of signals, intervals between sounds) and right-sided damage impairs spectral processing (pitch, harmonic structure) (Robin et al., 1990). Moreover, disruption of the sequential organization of speech associated with left hemisphere lesions may result in some of the language formulation defects of aphasia: the fundamental defect of conduction aphasia—impaired verbatim repetition—is strongly associated with damage in the vicinity of the inferior parietal region (H. Damasio and Damasio, 1980). Lesions in either hemisphere involving the somatosensory association areas just posterior to the postcentral gyrus can produce tactile agnosia or
astereognosis (inability to identify an object by touch) on the contralateral body side (Caselli, 1991). Some patients with right-sided lesions may experience bilateral astereognosis (Vuilleumier, 2001). Sensitivity to the size, weight, and texture of hand-held objects is also diminished contralaterally by these lesions (A.R. Damasio, 1988). The left-sided inattention that often accompanies right posterior damage appears to exacerbate the problem such that, with severely reduced left hand sensitivity, tactile agnosia may be bilateral (Caselli, 1991). Semmes’ (1968) findings that right hemisphere lesions may be associated with impaired shape perception in both hands have received support (e.g., Boll, 1974), but the incidence of bilateral sensory defects among patients with unilateral lesions of either hemisphere is high (B. Milner, 1975). Parietal lesions in either hemisphere may disrupt the guidance of movements insofar as they depend on somatosensory contributions (Jason, 1990); parieto-occipital lesions can lead to the disordered visually guided reaching behavior (optic ataxia) found in Balint’s syndrome (see pp. 72, 257). A note on commonly lateralized defects. Many quite specific neuropsychological abnormalities arising from unilateral hemispheric damage are typically associated with their most usual lateralization. It should be noted, however, that these conditions can appear with lesions on the unexpected side in right-handed patients. These are not frequent events, but they happen often enough to remind the clinician to avoid setting any brain-behavior relationships in stone. There is simply too much complexity, too much variability, and too much that is not understood, to overlook exceptions.

Defects arising from left posterior hemisphere lesions
On the left, the posterior language areas are situated at the juncture of the temporal and parietal lobes, especially the supramarginal (Brodmann area 40) and angular (Brodmann area 39) gyri. Fluent aphasia and related symbol-processing disabilities are generally the most prominent symptoms of lesions in this region. The fluent aphasias that arise from damage here are usually characterized by impaired comprehension, fluent speech that is susceptible to paraphasias (misspoken words), sometimes jargon speech, or echolalia (parroted speech). Especially acutely, affected patients can manifest a striking lack of awareness of their communication disability. The critical brain area has been noted to be where “the great afferent systems” of audition, vision, and body sensation overlap (M.P. Alexander, 2003; Benson, 1988; A.R. Damasio and H. Damasio, 2000). W.R. Russell (1963) pointed out that even very small cortical lesions in this area can have widespread and devastating consequences for verbal behavior—a not uncommon phenomenon.
Communication disorders arising from lesions in the left parieto-temporo-occipital region may include impaired or absent recognition or comprehension of the semantic and logical features of language (E. Goldberg, 1990; Howard, 1997). Lesions overlapping both the parietal and occipital cortex may give rise to reading defects (Hanley and Kay, 2010); occipital/temporal lobe overlap has also been implicated in alexia (Kleinschmidt and Cohen, 2006; Mendoza and Foundas, 2008). Although writing ability can be disrupted by lesions in a number of cortical sites (Hinkin and Cummings, 1996; Luria, 1966), the most common scenario for agraphia involves lesions on the left, often in the posterior association cortex (Roeltgen, 2011). The nature of the writing defect depends on the site and extent of the lesion. In many cases, defects of written language reflect the defects of a concomitant aphasia or apraxia (Bub and Chertkow, 1988; Luria, 1970), although this is by no means necessary (Kemmerer et al., 2005). Apraxias characterized by disturbances of nonverbal symbolization, such as gestural defects or inability to demonstrate an activity in pantomime or to comprehend pantomimed activity, are usually associated with lesions involving language comprehension areas and the overlap zone for kinesthetic and visual areas of the left hemisphere (Heilman and Rothi, 2011; Kareken, Unverzagt, et al., 1998; Meador, Loring, Lee, et al., 1999). Defective ability to comprehend gestures has been specifically associated with impaired reading comprehension in some aphasic patients, and with constructional disorders in others (Ferro, Santos, et al., 1980). Impairments in sequential hand movements are strongly associated with left parietal lesions (Haaland and Yeo, 1989). Apraxias often occur with aphasia and may be obscured by or confused with manifestations of the language disorder.
De Renzi, Motti, and Nichelli (1980) observed that while 50% of patients with left-sided lesions were apraxic, so too were 20% of those damaged on the right, although right-lesioned patients had milder deficits. That apraxia and aphasia can occur separately implicates different but anatomically close or overlapping neural networks (Heilman and Rothi, 2011; Kertesz, Ferro, and Shewan, 1984). Arithmetic abilities are complex and depend on a number of different brain regions (Rosselli and Ardila, 1989; Rickard et al., 2000; Spiers, 1987). Thus, it is no surprise that acquired disturbances of mathematical ability (acalculia) can appear in many different forms, in the setting of many different types of neurological disease, and in connection with many different lesion sites. However, left-sided lesions in the parietal region, especially the inferior parietal lobule, have been most consistently associated with acalculia (Denburg and Tranel, 2011). It has been suggested that the left parietal region constitutes
the “mathematical brain” in humans (Butterworth, 1999) and may even serve analogously in monkeys, further supporting the centrality of this area in arithmetic activity (Dehaene, Molko, et al., 2004). In general, acalculia is most common and most severe with lesions of the left posterior cortex. Pure agraphia may also result from lesions in this area (Schomer, Pegna, et al., 1998). Acalculia often accompanies disturbances of language processing, but not inevitably; some patients develop acalculia without any aphasic symptoms. Moreover, that this dissociation can occur in reverse, that is, impaired processing of linguistic information with preserved processing of numbers and mathematical calculations, further supports the neuroanatomical separability of mathematical operations and language (S.W. Anderson, Damasio, and H. Damasio, 1990). Data from fMRI studies have suggested that while “exact” types of mathematical knowledge (e.g., number facts, mathematics tables) may depend on language and may require intact inferior prefrontal structures that are also involved in word association tasks, “approximate” arithmetic (e.g., quantity manipulation, estimation, and approximation of magnitudes) may be language-independent and rely on bilateral areas of the parietal lobes that are also involved in visuospatial processing (Dehaene, Spelke, et al., 1999). Acalculia and agraphia typically appear in association with other communication disabilities, although this association is not necessary. When acalculia and agraphia occur together with left-right spatial disorientation and finger agnosia (an inability to identify one’s own fingers, to orient oneself to one’s own fingers, to recognize or to name them), this fourfold symptom cluster is known as Gerstmann’s syndrome (Gerstmann, 1940, 1957). The classic lesion site for Gerstmann’s syndrome is the left parieto-occipital region.
Acalculia associated with finger agnosia typically disrupts such relatively simple arithmetic operations as counting or ordering numbers. The frequency with which these individual symptoms occur together reflects an underlying cortical organization in which components involved in the different impairments are in close anatomical proximity. Whether the Gerstmann syndrome is a true syndrome (i.e., a symptom set that consistently occurs together), or a cluster of symptoms frequently found in association with one another due to their anatomic propinquity, has been repeatedly questioned (e.g., Benton, 1977b, 1992; Geschwind and Strub, 1975). A recent hypothesis suggests that the “pure” form of this symptom complex may be a true syndrome with the four classical symptoms arising from a single subcortical lesion disconnecting “co-localized fibre tracts” (Rusconi et al., 2010). In clinical practice the Gerstmann syndrome is useful as a cluster of symptoms
which may provide valuable localizing information. Agnosias arising from left hemisphere lesions just anterior to the visual association area may appear as disorientation of either extrapersonal or personal space and are likely to disrupt either symbolic meanings or left-right direction sense (Benton, 1973 [1985]; E. Goldberg, 1990). Not only may disorders of extrapersonal or personal space occur separately, but different kinds of personal space deficits and disorientations can be distinguished (Buxbaum and Coslett, 2001; Lishman, 1997; Newcombe and Ratcliff, 1989). However, visuospatial perception tends to be spared in these conditions (Belleza et al., 1979). Other deficits—especially aphasia—are also frequently associated with one or more of these symptoms (Benton, 1977b; Denburg and Tranel, 2011). Moreover, but rarely, both finger agnosia and right-left disorientation can be present when cortical damage is on the right (Benton, 1977b [1985]; Denburg and Tranel, 2011). Disabilities arising from left hemisphere lesions tend to be more severe when the patient is also aphasic. Although all of the disturbances discussed here can occur in the absence of aphasia, it is rare for any of them to appear as the sole defect.

Defects arising from right posterior hemisphere lesions
One of the most prominent disorders arising from lesions of the right posterior association cortex is the phenomenon of inattention, which refers to impaired attention to and awareness of stimuli presented to half of personal and extrapersonal space, almost always the left half (Chatterjee and Coslett, 2003; S. Clarke, 2001; Heilman, Watson, and Valenstein, 2011; see also pp. 428–444). The defect is not due to sensory impairments yet it can be so severe that patients fail entirely to acknowledge or attend to events occurring in the left half of space (contralateral to the lesion), including manipulations of their own limbs, visual stimuli, and auditory events. Vallar and Perani (1986, 1987) identified the parietal lobe as the most common lesion site for left-sided inattention. However, Kertesz and Dobrowolski (1981) observed left-sided inattention occurring more prominently among patients whose lesions involved the area around the central sulcus in the right hemisphere (including posterior frontal and some temporal lobe tissue) than among patients whose lesions were confined to the parietal lobe and, in literature reports, the right temporoparietal cortex is most usually associated with chronic left-sided inattention. In general, the severity of the deficit increases with increased lesion size. A few left hemisphere damaged patients experience a parallel phenomenon: right-sided inattention following left hemisphere lesions (Kohler
and Moscovitch, 1997), most commonly during the acute stage of their illness (Colombo et al., 1976), but severe hemispatial inattention is very much a “right hemisphere phenomenon” just as aphasia is a “left hemisphere phenomenon.” The precise nature of left-sided inattention has been debated for a long time as there are different views on the basis of the problem, and even what it should be called. Some investigators prefer the term “neglect,” but this term implies deliberateness and even some kind of moral laxity—connotations that are simply not accurate. (Historically, and unfortunately, the term “neglect” has persisted in most textbooks despite its obvious false implications; readers can expect to find the term in many contemporary writings and research papers.) In this book, “inattention” refers to most aspects of unilaterally depressed awareness. Inattention may become evident in a number of ways, some quite nuanced. For example, it may occur as a relatively discrete and subtle disorder apparent only to the examiner. When stimulated bilaterally with a light touch to both cheeks, or fingers wiggled in the outside periphery of each visual field simultaneously (double simultaneous stimulation), inattentive patients tend to ignore the stimulus on the left although they have no apparent difficulty noticing the stimuli when presented one at a time. This form of inattention has been variously called sensory inattention, sensory extinction, sensory suppression, or perceptual rivalry (Darby and Walsh, 2005). Visual extinction is frequently associated with other manifestations of inattention in patients with right-sided lesions, but these phenomena can occur separately (Barbieri and De Renzi, 1989; S. Clarke, 2001). They are often accompanied by similar deficits in the auditory or tactile modalities, and by left nostril extinction for odors (Bellas et al., 1988). In fact, inattention can occur in any perceptual modality but rarely involves all of them (S.
Clarke, 2001; Umilta, 1995). Although technically differentiable and bearing different names, extinction and inattention are probably two aspects of the same pathological process (Bisiach, 1991; Mesulam, 2000; Rafal, 2000). Inattention for personal and extrapersonal space usually presents as one syndrome but they do not always occur together (Bisiach, Perani, et al., 1986). Mild inattention to one’s own body may appear as simple negligence: patients with right-sided damage may rarely use their left hand spontaneously, they may bump into objects on the left, or may not use left-side pockets. In its more severe forms, inattention for personal space can amount to complete unawareness of the half of space or the half body opposite the side of the lesion (hemisomatognosia). Some patients with extreme loss of left-side awareness (usually associated with left hemiplegia) may even deny left-side
disabilities or be unable to recognize that their paralyzed limbs belong to them (anosognosia) (Feinberg, 2003; Orfei et al., 2007; Tranel, 1995). Most cases of anosognosia involve the inferior parietal cortex, but it can occur with purely subcortical lesions or with frontal damage (Starkstein, Jorge, and Robinson, 2010). S.W. Anderson and Tranel (1989) found that all of their patients with impaired awareness of physical disabilities also lacked awareness of their cognitive defects. Anosognosia creates a serious obstacle to rehabilitation as these patients typically see no need to exert the effort or submit to the discomforts required for effective rehabilitation. Other obstacles to rehabilitation of these patients are reduced alertness, difficulty maintaining focus, and conceptual disorganization. In left visuospatial inattention, not only may patients not attend to stimuli in the left half of space, but they may also fail to draw or copy all of the left side of a figure or design and tend to flatten or otherwise diminish the left side of complete figures (see Fig. 3.24, p. 80). When copying written material, the patient with unilateral inattention may omit words or numbers on the left side of the model, even though the copy makes less than good sense (Fig. 3.24c). Increasing the complexity of the drawing task increases the likelihood of eliciting the inattention phenomenon (Pillon, 1981a). In reading, words on the left side of the page may be omitted although such omissions alter or abolish the meaning of the text (see Fig. 10.8, p. 438) (B. Caplan, 1987; Mesulam, 2000b). This form of visual imperception typically occurs only when right parietal damage extends to occipital association areas. Left visual inattention is frequently, but not necessarily, accompanied by left visual field defects, most usually a left homonymous hemianopia. 
Some patients with obvious left-sided inattention, particularly those with visual inattention, display a gaze defect such that they do not spontaneously scan the left side of space, even when spoken to from the left. These are the patients who begin reading somewhere in the middle of a line of print when asked to read and who seem unaware that the reading makes no sense without the words from the left half of the line. Most such right hemisphere damaged patients stop reading of their own accord, explaining that they have “lost interest,” although they can still read with understanding when their gaze is guided. Even in their mental imagery, some of these patients may omit left-sided features (Bisiach and Luzzatti, 1978; Meador, Loring, Bowers, and Heilman, 1987).
FIGURE 3.24 a Flower drawing, illustrating left-sided inattention; drawn by a 48-year-old college professor with history of right hemisphere AVM rupture resulting in a fronto-temporo-parietal lesion.
FIGURE 3.24 c Writing to copy, illustrating inattention to the left side of the to-be-copied sentences; written by a 69 year-old man with a right temporo-parieto-occipital lesion.
FIGURE 3.24 b Copy of the Taylor Complex Figure (see p. 575), illustrating inattention to the left side of the stimulus; drawn by a 61-year-old college-educated man with history of right occipital-parietal stroke.
FIGURE 3.24 d Example of inattention to the left visual field by a 57-year-old college graduate with a right parieto-occipital lesion. A 45-year-old pediatrician sustained a large area of right parietal damage in a motor vehicle accident. A year later he requested that his medical license be reinstated so he could resume practice. He acknowledged a visual deficit which he attributed to loss of sight in his right eye and the left visual field of his left eye and for which he wore a little telescopic monocle with a very narrow range of focus. He claimed that this device enabled him to read. He had been divorced and was living independently at the time of the accident, but has since stayed with his mother. He denied physical and cognitive problems other than a restricted range of vision which he believed would not interfere with his ability to return to his profession. On examination he achieved scores in the superior to very superior range on tests of old verbal knowledge although he performed at only average to high average levels on conceptual verbal tasks. Verbal fluency (the rapidity with which he could generate words) was just low average, well below expectations for his education and verbal skills. On written tests he made a number of small errors, such as copying the word bicycle as “bicyclicle,” Harry as “Larry,” and mistrust as “distrust” (on a list immediately below the word displease, which he copied correctly). Despite a very superior oral arithmetic performance, he made errors on four of 20 written calculation problems, of which two involved left spatial inattention (see Fig. 3.16, p. 63). Verbal memory functions were well within normal limits. On visuoperceptual and constructional tasks, his scores were generally average except for slowing on a visual reasoning test which dropped his score to low average. In his copy of the Bender-Gestalt designs (see Fig. 14.1, p.
570), left visuospatial inattention errors were prominent as he omitted the left dot of a dotted arrowhead figure and the left side of a three-sided square.
Although he recalled eight of the nine figures on both immediate and delayed recall trials, he continued to omit the dot and forgot the incomplete figure altogether. On Line Bisection, 13 of 19 “midlines” were pushed to the right. On the Indented Paragraph Reading Test (see Fig. 10.8, p. 438), in addition to misreading an occasional word he omitted several words or phrases on the left side of the page. His performances were essentially the same whether he read with or without his monocle. In a follow-up interview he reported having had both inattention and left-sided hemiparesis immediately after the accident. In ascribing his visuoperceptual problems to compromised vision, this physician demonstrated that he had been unaware of their nature. Moreover, despite painstaking efforts at checking and rechecking his answers—as was evident on the calculation page and other paper-and-pencil tasks—he did not self-monitor effectively, another aspect of not being aware of his deficits. The extent of his anosognosia and associated judgment impairments became apparent when he persisted in his ambition to return to medical practice after being informed of his limitations.
Visuospatial disturbances associated with lesions of the parieto-occipital cortex include impairment of topographical or spatial thought and memory (De Renzi, 1997b; Landis, Cummings, Benson, and Palmer, 1986; Tranel, Vianna, et al., 2009). Some workers identify temporo-occipital sites as the critical areas for object recognition (Dolan et al., 1997; Habib and Sirigu, 1987). Another problem for patients with lesions in this area is perceptual fragmentation (Denny-Brown, 1962). A severely left hemiparetic political historian, when shown photographs of famous people he had known, named bits and pieces correctly: “This is a mouth … this is an eye,” but was unable to organize the discrete features into recognizable faces [mdl]. Warrington and Taylor (1973) also related difficulties in perceptual classification—specifically, the inability to recognize an object from an unfamiliar perspective—to right parietal lesions (see also McCarthy and Warrington, 1990). Appreciation and recognition of facial expressions, too, may be impaired (Adolphs, H. Damasio, Tranel, et al., 2000). A commonly seen disorder associated with right parietal lesions is impaired constructional ability (Benton, 1967 [1985]; Benton and Tranel, 1993; Farah and Epstein, 2011). Oculomotor disorders, defective spatial orientation, or impaired visual scanning contribute to the constructional disability. A right hemisphere dyscalculia shows up on written calculations as an inability to manipulate numbers in spatial relationships, such as using decimal places or “carrying,” although the patient retains mathematical concepts and the ability to do problems mentally (Denburg and Tranel, 2011; see Fig. 3.16, p. 63). Spatial (or visuospatial) dyscalculia is frequently associated with constructional deficits (Rosselli and Ardila, 1989) and seems to follow from more general impairments of spatial orientation or organization.
Apraxia for dressing, in which patients have difficulty relating to and organizing parts of the body to parts of their clothing, may accompany right-sided parietal lesions (A.R. Damasio, Tranel, and Rizzo, 2000; Hier, Mondlock, and Caplan, 1983a,b). It is not a true apraxia but rather symptomatic of spatial disorientation coupled, in many instances, with left visuospatial inattention (Poeck, 1986). Other performance disabilities of patients with right parietal lobe involvement are also products of a perceptual disorder, such as impaired ability to localize objects in left hemispace (Mesulam, 2000b). For example, the chief complaint of a middle-aged rancher with a right parieto-occipital lesion was difficulty in eating because his hand frequently missed when he put it out to reach the cup or his fork overshot his plate.
The Temporal Lobes and Their Disorders

Temporal cortex functions: information processing and lesion-associated defects
The primary auditory cortex is located on the upper posterior transverse folds of the temporal cortex (Heschl’s gyrus), for the most part tucked within the Sylvian fissure (see Figs. 3.2, p. 45; and 3.20, p. 69). This part of the superior temporal gyrus receives input from the medial geniculate nucleus of the thalamus. Much of the temporal lobe cortex is concerned with hearing and related functions, such as auditory memory storage and complex auditory perceptual organization. In most persons, left-right asymmetry follows the verbal-nonverbal pattern of the posterior cortex: left hemisphere specialization for verbal material and right hemisphere specialization for nonverbalizable material. The superior temporal cortex and adjacent areas are critical for central auditory processing (Mendoza and Foundas, 2008; Mesulam, 2000b). The auditory pathways transmit information about sound in all parts of space to both hemispheres through major contralateral and minor ipsilateral projections. Cortical deafness occurs with bilateral destruction of the primary auditory cortices, but most cases with severe hearing loss also have subcortical lesions (Bauer and McDonald, 2003). Patients whose lesions are limited to the cortex are typically not deaf, but have impaired recognition of auditory stimuli. “Cortical deafness” in these latter instances is a misnomer, as these patients retain some (often near normal) hearing capacity (Coslett, Brashear, and Heilman, 1984; Hecaen and Albert, 1978); the patients are better described as having auditory agnosia (see below). Unilateral damage to posterior superior temporal cortex can produce an impairment in attending to and processing multiple auditory stimuli simultaneously. Thus, for example, when presented with two words simultaneously to the left and right ears in a dichotic listening paradigm, the patient may only report words from the ear on the same side as the lesion.
This can occur even when basic hearing is normal and the patient can accurately report stimuli
from either ear when stimuli are presented only to one side at a time. A related phenomenon that often develops with slowed processing resulting from a brain insult (e.g., see p. 409), or becomes apparent when hearing aids raise a low hearing level, is the “cocktail party” effect—the inability to discriminate and focus on one sound in the midst of many. Polster and Rose (1998) described disorders of auditory processing that parallel those of visual processing. Pure word deafness, which occurs mostly with left temporal lesions, is an inability to comprehend spoken words despite intact hearing, speech production, reading ability, and recognition of nonlinguistic sounds. Auditory agnosia may refer to an inability to recognize auditorily presented environmental sounds independent of any deficit in processing spoken language. When confined to nonspeech sounds, auditory agnosia is most frequently associated with right-sided posterior temporal lesions. Bilateral lesions to the posterior part of the superior temporal gyrus lead to a more full-blown syndrome of auditory agnosia, in which the patient is unable to recognize both speech and nonspeech sounds (Bauer, 2011; Tranel and Damasio, 1996). This condition, almost always caused by stroke, involves the sudden and complete inability to identify the meaning of verbal and nonverbal auditory signals, including spoken words and familiar environmental sounds such as a telephone ringing or a knock on the door. A very specific manifestation of auditory agnosia is phonagnosia, the inability to recognize familiar voices. Lesions to the right parietal cortices can cause this sort of defect, even though auditory acuity is fundamentally unaltered (Van Lancker, Cummings, et al., 1988; Van Lancker and Kreiman, 1988).
Lesions confined to the inferior temporal cortices tend to disrupt perception of auditory spectral information (aspects of auditory signals such as pitch and harmonic structure) (Robin et al., 1990) but may not disrupt voice recognition (Van Lancker, Kreiman, and Cummings, 1989). Anatomically distinct “what” and “where” systems, also analogous to the visual processing system, have been described (S. Clarke, Bellmann, et al., 2000; Rauschecker and Tian, 2000). Perhaps the most crippling of the communication disorders associated with left temporal lobe damage is Wernicke’s aphasia (also called sensory, fluent, or jargon aphasia) since these patients can understand little of what they hear, although motor production of speech remains intact (Benson, 1993; D. Caplan, 2011; A.R. Damasio and Geschwind, 1984; Table 2.1, p. 34). Such patients may prattle grammatically and syntactically correct speech that is complete nonsense. These patients’ auditory incomprehension does not extend to nonverbal sounds, for they can respond appropriately to sirens, squealing
brakes, and the like. Acutely, many of these patients have anosognosia, appreciating neither their deficits nor their errors, and thus unable to self-monitor, self-correct, or benefit readily from therapy (J. Marshall, 2010; Rubens and Garrett, 1991). In time this tends to abate with some spontaneous improvement. Many Wernicke’s aphasics make fewer errors as they improve, owing to better monitoring of errors and probably a certain amount of associated trepidation and apprehension about their mistakes. Lesions in the left temporal lobe may interfere with retrieval of words, which can disrupt fluent speech (dysnomia; anomia [literally no words], when the condition is severe) (A.R. Damasio and H. Damasio, 2000; Indefrey and Levelt, 2000). When this defect occurs in relative isolation, as a severe impairment of naming unaccompanied by other speech or language impairments, it is called “anomic aphasia.” Anomic aphasia is associated with lesions in left inferotemporal or anterior temporal regions, mostly outside the classic language areas of the left hemisphere (Tranel and Anderson, 1999). Different profiles of naming impairment have been associated with different patterns of brain lesions. For example, specific parts of the temporal lobe are relatively specialized for different categories of nouns: retrieval of proper nouns is associated with the left temporal polar region (Tranel, 2009), whereas common noun retrieval is associated with more posterior parts of the temporal lobe including the inferotemporal region in Brodmann areas 20/21 and the anterior part of area 37 (H. Damasio et al., 1996, 2004).
There are even relative cortical specializations for different categories of common nouns; for example, retrieval of animal names has been associated with the anterior part of the inferotemporal region, while names for tools have been localized to the more posterior part of the inferotemporal region in the vicinity of the occipital-temporal-parietal junction (H. Damasio, Grabowski, et al., 1996; H. Damasio, Tranel, et al., 2004; A. Martin, Wiggs, et al., 1996). Furthermore, areas subserving retrieval of nouns and verbs are distinguishable: noun retrieval appears to be a left temporal lobe function, whereas verb retrieval is associated with the left premotor/prefrontal region (A.R. Damasio and Tranel, 1993; Hillis and Caramazza, 1995). Many patients with a naming disorder have difficulty remembering or comprehending long lists, sentences, or complex verbal material, and their ability for new verbal learning is greatly diminished or even abolished. After left temporal lobectomy, patients tend to perform complex verbal tasks somewhat less well than prior to surgery, verbal memory tends to worsen (Ivnik, Sharbrough, and Laws, 1988), and they do poorly on tests that simulate everyday memory skills (Ivnik, Malec, Sharbrough, et al., 1993). It can be
difficult to disentangle name retrieval impairment from verbal memory impairment in such patients. Common sense and an understanding of these naming disorders are needed when an examiner considers giving standard list learning tasks to a patient who may be incapable of producing a valid performance. Lesions to the right temporal lobe in patients with left language laterality are unlikely to result in language disabilities. Rather, such patients may develop defects in spatial, nonverbal, and abstract reasoning, including difficulty organizing complex data or formulating multifaceted plans (Fiore and Schooler, 1998). Impairments in sequencing operations (Canavan et al., 1989; Milberg, Cummings, et al., 1979) have been associated with right temporal lobe lesions. Right temporal lobe damage may result in amusia (literally, no music), particularly involving receptive aspects of musicianship such as the abilities to distinguish tones, tonal patterns, beats, or timbre, often but not necessarily with resulting inability to enjoy music or to sing or hum a tune or rhythmical pattern (Benton, 1977a; Peretz and Zatorre, 2003; Robin et al., 1990). Right temporal lesions have been associated with impaired naming (Rapcsak, Kazniak, and Rubens, 1989) and recognition (Meletti et al., 2009) of facial expressions (e.g., happiness, fear). Damage to structures in the right anterolateral temporal region can impair recognition of unique entities (e.g., familiar persons and landmarks). For example, lesions in the right temporal pole have been associated with defective retrieval of conceptual knowledge for familiar persons (Gainotti, Barbier, and Marra, 2003; Tranel, H. Damasio, and Damasio, 1997). More posterior right temporal lesions can impair retrieval of knowledge for non-unique entities such as animals (H. Damasio, Tranel, et al., 2004). 
Together with interconnected right prefrontal cortices, the right anterolateral temporal region appears to be important for the retrieval of unique factual memories (Tranel, Damasio, and H. Damasio, 2000). Since the temporal lobes also contain some components of the visual system, including the crossed optic radiations from the upper quadrants of the visual fields, temporal lobe damage can result in a visual field defect (Barton and Caplan, 2001). Damage in ventral posterior portions of the temporal cortex can produce a variety of visuoperceptual abnormalities, such as deficits in visual discrimination and in visual word and pattern recognition that occur without defects on visuospatial tasks (Fedio, Martin, and Brouwers, 1984; B. Milner, 1958). This pattern of impaired object recognition with intact spatial localization appeared following temporal lobectomies that involved the anterior portion of the occipitotemporal object recognition system (Hermann,
Seidenberg, et al., 1993). Cortices important for olfaction are located in the medial temporal lobe near the tip (part of Brodmann area 38, see p. 71), and involve the uncus. These cortices receive input from the olfactory bulb at the base of the frontal lobe. Odor perception may require intact temporal lobes (Eskenazi et al., 1986; Jones-Gotman and Zatorre, 1988) and is particularly vulnerable to right temporal lesions (Abraham and Mathai, 1983; Martinez et al., 1993).

Memory and the temporal lobes
A primary function of the temporal lobes is memory; many of their regions are critical for normal learning and retention. Left temporal lobe lesions tend to disrupt verbal memory, whereas right temporal lobe lesions tend to interfere with memory for many different kinds of nonverbalizable material (Tranel and Damasio, 2002; Jones-Gotman, Zatorre, Olivier, et al., 1997; Markowitsch, 2000). Lobectomy lesions of the temporal neocortex impair learning and retention when the hippocampus is disconnected from cortical input (Jones-Gotman et al., 1997). Within the temporal lobes, the medial sector is of particular importance for memory, and especially for the acquisition of new information (learning). The medial temporal lobe contains several specific structures that are critical for memory, including the hippocampus, the entorhinal and perirhinal cortices, and the portion of the parahippocampal gyrus not occupied by the entorhinal cortex. These structures are collectively referred to as the hippocampal complex. Its various components are intensively interconnected by means of recurrent neuroanatomical circuits (Insausti et al., 1987; Suzuki and Amaral, 1994; Van Hoesen and Pandya, 1975). In addition, the higher order association cortices of the temporal lobe receive both input from the association cortices of all sensory modalities and feedback projections from the hippocampus. Thus, structures in the hippocampal complex have access to and influence over signals from virtually the entire brain. Hence the hippocampus is strategically situated to create memory traces that bind together the various sensations and thoughts comprising an episode (N.J. Cohen and Eichenbaum, 1993; Eichenbaum and Cohen, 2001). The importance of the hippocampal complex for the acquisition of new factual knowledge was initially documented in the famous case of H.M. (Scoville and Milner, 1957) (Fig. 3.25). Following bilateral resection of the medial temporal lobe, H.M.
developed a profound inability to learn new information (which did not extend to skill learning), the form of knowledge called declarative memory (Corkin, 1984; Milner, 1972). Subsequent studies
have expanded upon the lessons learned from H.M., and have firmly established that the hippocampus and adjacent areas of the temporal lobe are critical for acquiring information (Gilboa et al., 2004; Squire, Clark, and Bayley, 2009).
FIGURE 3.25(a, b) Ventral view of H.M.’s brain ex situ using 3-D MRI reconstruction depicting the extent of the bilateral medial temporal lobe damage shown in the black mesh. Reproduced with permission from Jacopo Annese, Ph.D. and The Brain Observatory, University of California, San Diego.
However, exactly how learning occurs remains a much-debated topic in cognitive neuroscience (Kesner, 2009). One view is that the hippocampus processes new memories by assigning each experience an index corresponding to the areas of the neocortex which, when activated, reproduce the experience or memory (Alvarez and Squire, 1994; Schacter, Norman, and Koutstaal, 1998; Tranel, H. Damasio, and Damasio, 2000). The hippocampal index typically includes information about events and their context, such as when and where they occurred as well as emotions and thoughts associated with them. The index corresponding to a particular memory, such as a
conversation or other activity, is crucial for maintaining activation of the memory until the neocortex consolidates the memory by linking all the features of the experience to one another. After consolidation, direct neocortical links are sufficient for storing the memory (Schacter et al., 1998). Consolidation is crucial for the longevity of memory (Nader and Hardt, 2009). As shown initially by the case of H.M., bilateral damage to the hippocampus can produce severe anterograde amnesia (Rempel-Clower et al., 1996; Tulving and Markowitsch, 1998). The cortical regions adjacent to the hippocampus—the entorhinal cortex, parahippocampus, and other perirhinal cortices—provide major input to the hippocampus. When hippocampal lesions extend into these regions, the severity of the memory impairment worsens and the likelihood of extensive retrograde amnesia increases (K.S. Graham and Hodges, 1997; J.M. Reed and Squire, 1998). Damage to the hippocampus and adjacent areas of the temporal lobe is responsible for the memory impairment that emerges in early Alzheimer’s disease (Cotman and Anderson, 1995; Jack et al., 1999; Kaye, Swihart, and Howieson, et al., 1997). Emotional disturbances are associated with lesions involving the hippocampus as well as the amygdala and uncus (see pp. 86–87). The hippocampus is one neural site where adult neurogenesis is known to occur; the integration of new neurons from this site is thought to play a role in new learning and plasticity (Deng et al., 2010). Different structures within the medial temporal lobe memory system make distinct contributions to declarative memory (Aggleton and Brown, 1999; N.J. Cohen and Eichenbaum, 1993; Eichenbaum and Cohen, 2001). Cortical regions adjacent to the hippocampus appear to be sufficient for normal recognition of single stimuli (Hannula et al., 2006; Konkel et al., 2008).
Many patients with focal hippocampal damage can recognize single faces, words, or objects as well as do cognitively intact persons (Barense et al., 2007; A.C.H. Lee et al., 2005; Shrager et al., 2008). Functional neuroimaging has associated selective activation in the perirhinal cortex (area around the primary olfactory cortex) with recognition memory for single items (Davachi, Mitchell, and Wagner, 2003; Davachi and Wagner, 2002; Hannula and Ranganath, 2008). Single neuron recordings demonstrate that some hippocampal cells are highly selective in their responses; others change firing patterns for processing changing information (Viskontas, 2008). In contrast, memory for relations between single stimuli requires the hippocampus (J.D. Ryan, Althoff, et al., 2000). This division of labor explains the severity of the memory disorder resulting from hippocampal lesions. Even when amnesic patients are capable of learning new pieces of information, those items lack superordinate,
organizing context. Old memories do not appear to be stored in the hippocampus; rather, storage is probably distributed throughout the cortex (Fuster, 1995; Rempel-Clower et al., 1996; E.T. Rolls and Treves, 1998). However, an intact hippocampus likely participates in some fashion in recollection of new as well as old memories (Moscovitch, 2008), although extensive damage to this system does not prevent patients from retrieving old, remote memories of many types. The hippocampal system appears to have only a temporary role in the formation and maintenance of at least some aspects of declarative memory (Alvarez and Squire, 1994; Squire, 1992; Zola-Morgan and Squire, 1993). Consistent with this, patients with bilateral hippocampal damage exhibit a temporally graded defect in retrograde memory (N. Butters and Cermak, 1986; Rempel-Clower et al., 1996; Victor and Agamanolis, 1990), such that memories acquired close in time to the onset of the brain injury are shattered or lost, but the farther back one goes in the autobiography of the patient, the more intact memory becomes. Neuroimaging has demonstrated patterns of activation paralleling these clinical observations as bilateral activation of the hippocampus increases in response to recognition of new information, while older information elicits decreased hippocampal activation (C.N. Smith and Squire, 2009). The principle of laterality with hemispheric asymmetry applies to the medial temporal lobe memory system: viz., the left-sided system mediates memory for verbal material, and the right-sided system mediates memory for nonverbalizable material (Milner, 1971).
Thus, damage to the left hippocampal complex tends to produce disproportionate impairments in learning verbally coded material such as names and verbal facts, whereas damage to the right hippocampal complex may result in relatively greater deficits in learning information for which it is specialized, such as new faces, geographical routes, melodies, and spatial information (Barrash, Tranel, and Anderson, 2000; Milner, 1971; Tranel, 1991). Functional imaging studies give further evidence of these patterns of material-specific memory relationships (J.B. Brewer et al., 1998; A.D. Wagner et al., 1998). For example, London taxi drivers recalling familiar routes showed right hippocampal activation on PET scans (Maguire, Frackowiak, and Frith, 1997). However, rote verbal learning may be more vulnerable to left hippocampal dysfunction than learning meaningful material (e.g., a story) (Saling et al., 1993), probably because meaning aids learning for most people. Thus, not surprisingly, learning unrelated as opposed to related word pairs is disproportionately impaired with left hippocampal disease (A.G. Wood et al., 2000).
Although the hippocampal complex is crucial for acquiring declarative information that can be brought into the “mind’s eye,” it is not involved in learning nondeclarative information, e.g., motor skills, habits, and certain forms of conditioned responses and priming effects. This independence of motor skill learning from the hippocampal system was first reported by Brenda Milner (1962) in patient H.M.; it has been replicated in other patients with medial temporal damage and severe amnesia for declarative information (e.g., N.J. Cohen and Squire, 1980; Gabrieli, Corkin, et al., 1993; Tranel, Damasio, H. Damasio, and Brandt, 1994) as well as in functional neuroimaging studies (Gabrieli, Brewer, and Poldrack, 1998). Intriguingly, the hippocampal system and systems that support nondeclarative memory appear to interact or even compete when a new representation is being formed (e.g., Poldrack et al., 2001). Thus, hippocampal representations that store information about unique episodes may be less useful or even counterproductive when learning certain kinds of nondeclarative information, such as probabilistic outcomes. A number of investigators have manipulated aspects of declarative memory by asking subjects to remember (or reconstruct) the past and think about (or construct) the future (Addis and Schacter, 2008; Hassabis et al., 2007; Szpunar et al., 2007). Functional imaging has shown activation in the hippocampus during future and past episodic construction tasks (Addis, Wong, and Schacter, 2007; Okuda et al., 2003). The construction of an episodic event may depend on the ability of the hippocampus to integrate and bind the individual elements, such as objects, actions, etc., of an event or scene into a mental representation that contains the relations between the objects, actions, and so on (N.J. Cohen and Eichenbaum, 1993; Eichenbaum and Cohen, 2001). 
The hippocampus can also be activated when processing an out-of-order version of a previously studied sequence is called for (Kumaran and Maguire, 2006), suggesting that the structure of a memory becomes part of a network necessary for predicting the outcomes of ongoing events. Previous work has elucidated the role of the hippocampus in indexing, reactivating, and reintegrating the various elements that make up the memory trace it bound together during the initial encoding phase of an event (Moscovitch, 1992). The ability to manipulate and integrate mental representations for goal-directed cognition, whether of the past, present, or future, relies critically on the hippocampus and declarative memory acting in concert with frontal lobe structures (e.g., ventromedial prefrontal cortex) (Buckner, 2010; M.C. Duff et al., 2007; Kumaran, Summerfield, et al., 2009). Medial temporal lobe structures were long thought to be necessary only for enduring memories. The medial temporal lobe memory system did not seem to
be crucial for immediate or working memory as patients with complete bilateral medial temporal lobe damage (including H.M.) appeared to maintain information in immediate or working memory so long as they were allowed continuous rehearsal (Sidman et al., 1968; Tranel, Damasio, and H. Damasio, 2000). However, subsequent studies of lesion patients and investigations using functional neuroimaging techniques indicate that the medial temporal lobes may be important for maintenance or processing of information over very short intervals (Dickerson and Eichenbaum, 2010; K.S. Graham, Barense, and Lee, 2010). Lesion patients are impaired in recognizing spatial relational information after intervals of only seconds (Hannula, Tranel, et al., 2006; T. Hartley et al., 2007; J.D. Ryan and Cohen, 2004; Shrager et al., 2008). Similarly, recognition of simpler materials including faces and colors also dissipates quickly after damage to the medial temporal lobes (E.A. Nichols et al., 2006; Shrager et al., 2008; I.R. Olson et al., 2006). Functional neuroimaging has also shown the timing and interconnectivity of medial temporal lobe regions over these short delays. Hippocampal activation has been reported while representations (e.g., sets of faces) are mentally maintained—activations that have been dissociated from subsequent memory performance (Ranganath and D’Esposito, 2001). On-line comparison processes have also been reported to engage the medial temporal lobes (C.E. Stern et al., 2001; J. Voss et al., 2011; D. Warren et al., 2010). In contrast, it is cortical regions that are organized for long-term storage of memories (Fuster, 1999). However, converging evidence from a variety of methods has shown the importance of the medial temporal lobes interacting with many neocortical brain regions for the maintenance and recall of remote memories (e.g., Woodard, Seidenberg, et al., 2007).
For example, recall of autobiographical events depends on a network of structures involving the medial temporal lobe and regions of the neocortex (Bayley, Gold, et al., 2005). Awake patients undergoing brain surgery report vivid auditory and visual recall of previously experienced scenes and episodes upon electrical stimulation of the exposed temporal lobe cortex (Gloor et al., 1982; Penfield, 1958). Nauta (1964) speculated that these memories involve widespread neural mechanisms and that the temporal cortex and, to a lesser extent, the occipital cortex play roles in organizing the discrete components of memory for orderly and complete recall. Information involving each modality appears to be stored in the association cortex adjacent to its primary sensory cortex (A.R. Damasio, H. Damasio, and Tranel, 1990; Killackey, 1990; A. Martin, Haxby, et al., 1995). Thus, retrieval of visual information is impaired by lesions of the visual association cortex of
the occipital lobe, deficient retrieval of auditory information follows lesions of the auditory association cortex of the temporal lobe, and so on.

Emotion and the temporal lobes
The amygdala, situated in the anterior medial temporal lobe, is critical for emotion. The amygdala participates in a diverse array of emotional and social behaviors (Adolphs and Tranel, 2004; Bechara, H. Damasio, et al., 1999; Buchanan et al., 2009). Lesion studies and functional neuroimaging have provided compelling evidence that the amygdala is involved in processing emotional stimuli from all major sensory modalities—visual, auditory, somatosensory, olfactory, and gustatory—although vision probably predominates, especially in humans. This small structure appears to be necessary for processing facial expressions of fear as well as facial emotion in social contexts (Adolphs, 2010). Fear conditioning in both animals and humans engages the amygdala (Bechara, Tranel, et al., 1995; LeDoux, 1996). The amygdala has been shown to be critical for the induction and experience of fear; when it is bilaterally damaged patients may lose their capacity for experiencing fear entirely, even when confronted with highly fear-inducing stimuli and situations such as interacting with live spiders and snakes or going through a haunted house (J.S. Feinstein, Adolphs, et al., 2010). Moreover, some psychiatric conditions have been linked to amygdala pathology, including posttraumatic stress disorder, phobias, anxiety disorders, and autism (Baron-Cohen, Ring, et al., 2000; Lombardo et al., 2009). It is interesting to note that many of the fear-related disorders appear to involve over-activity of the amygdala, which is the opposite of what happens when the amygdala is bilaterally damaged and fear is abolished. Given what is known about the amygdala, it is not surprising that a variety of emotional disorders commonly occur with temporal lobe lesions—especially when the amygdala is damaged—including anxiety, delusions, and mood disorders (Drevets, 2000; Heilman, Blonder, et al., 2011; Trimble, Mendez, et al., 1997).
Abnormal electrical activity of the brain associated with temporal lobe epilepsy (TLE) typically originates within the temporal lobe (see p. 212). Specific problems associated with temporal lobe epilepsy include alterations of mood, obsessional thinking, changes in consciousness, hallucinations, perceptual distortions in all sensory modalities, including pain, and stereotyped, often repetitive and meaningless motor behavior that may comprise quite complex activities (Filley, 1995; Schomer, O’Connor, et al., 2000; G.J. Tucker, 2002). Other names for these disturbances are psychomotor epilepsy and psychomotor seizures or complex partial seizures (Pincus and
Tucker, 2003). Seizure activity and experimental stimulation of the amygdala provoke visceral responses associated with fright and mouth movements involved in feeding (Bertram, 2009). The amygdala provides an emotional “tag” to memory traces with its direct as well as indirect connections with the hippocampus (Adolphs, 2009). Also, with its connections to the orbitofrontal and temporal cortices (Heimer, 2003; Heimer and Van Hoesen, 2006), this small cluster of nuclei appears to be necessary for learning the reward and emotional valence of sensory stimuli (Buchanan et al., 2006; Hikosaka et al., 2008; E.A. Murray, 2007). The amygdala is necessary for hippocampal processing of information with reward and emotional features (Chavez et al., 2009; McGaugh, 2004). The amygdala may play an important role in memory consolidation by influencing neuroplasticity in other brain regions (McGaugh, 2000), although this line of thinking remains speculative. In humans, bilateral destruction restricted to just the amygdala does not produce a prominent amnesic disorder (G.P. Lee, Meador, Smith, et al., 1988; Markowitsch, Calabrese, Wurker, et al., 1994; I.F. Small et al., 1977), but it may alter emotional learning (Tranel, Gullickson, et al., 2006) and the perception and experience of fear (J.S. Feinstein, Adolphs, et al., 2010). However, lesions in the amygdala and nearby temporal cortex contribute to the severity of memory deficits associated with hippocampal damage (J.S. Feinstein, Rudrauf, et al., 2009; Jernigan, Ostergaard, and Fennema-Notestine, 2001). Amygdalectomized patients are slow to acquire a mind set, but once it is established it becomes hard to dislodge; yet performance on standard measures of mental abilities (e.g., Wechsler Intelligence Scale tests) remains essentially unchanged (R. Andersen, 1978; J.S. Feinstein, Rudrauf, et al., 2009).
The Kluver-Bucy syndrome emerges with bilateral destruction of the amygdala and uncus (the small hooked front end of the inner temporal lobe fold) (Hayman et al., 1998). This rare condition can occur with disease (e.g., herpes encephalitis) or trauma. These placid patients lose the capacity to learn and to make perceptual distinctions; they eat excessively and may become indiscriminately hypersexual (Cummings and Mega, 2003; Lishman, 1997).
FUNCTIONAL ORGANIZATION OF THE ANTERIOR CORTEX
In the course of the brain’s evolution, the frontal lobes developed most recently to become its largest structures. It was only natural for early students of brain function to conclude that the frontal lobes must therefore be the seat of the highest cognitive functions. Thus, when Hebb reported in 1939 that a small
series of patients who had undergone surgical removal of frontal lobe tissue showed no loss in IQ score on an intelligence test, he provoked a controversy that has continued, in various shapes and forms, to the present day (A.R. Damasio, Anderson, and Tranel, 2011). It is now unquestioned that important cognitive, emotional, and social functions can be disrupted by frontal lobe damage. However, many patients with frontal lobe damage show few if any frank neurological signs as their neurological examination is often entirely normal, and they may also sail through most or all portions of the neuropsychological examination without mishap. Two main reasons make evaluation of the consequences of frontal lobe damage one of clinical neuropsychologists’ most challenging tasks: (1) In the not-real-life setting of a laboratory or examination room, manifestations of frontal lobe damage are often subtle; and (2) The nature of neuropsychological assessment, with its emphasis on highly structured tasks administered under conditions determined and controlled by the examiner, tends to reduce access to the most important defects associated with frontal lobe damage (Lezak, 1982a). Thus, highly standardized evaluations may reveal few unequivocal defects, even in patients who are blatantly abnormal in their real life behavior. The frontal lobes are organized into three basic subdivisions: precentral, premotor, and prefrontal (Fig. 3.26). The prefrontal subdivision contains structures critical for higher-order functions such as planning, judgment, reasoning, decision making, emotional regulation, and social conduct, and hence this subdivision receives the greatest emphasis in the following discussion. The three major subdivisions of the frontal lobes differ functionally, although each is involved more or less directly with behavior output (E. Goldberg, 1990; Stuss, 2011; Stuss and Benson, 1986; Stuss, Eskes, and Foster, 1994; see H.
Damasio, 1991, for a detailed delineation of the anatomy of the frontal lobes and Pandya and Yeterian, 1998, for diagrams of interconnections within the frontal lobes and with other regions of the brain).
FIGURE 3.26 The major subdivisions of the human frontal lobes identified on surface 3-D MRI reconstructions of the brain (upper views) and at the mid-sagittal level (bottom view). Adapted from Stuss and Levine (2002).
Precentral Division
Within the frontal lobes, the precentral division is the most posterior portion, occupying the gyrus just in front of the central (Rolandic) sulcus. This is the primary motor cortex, which mediates movement (not isolated muscles) on the opposite side of the body, and has important connections with the cerebellum, basal ganglia, and motor divisions of the thalamus. The cortex is arranged somatotopically such that different parts of the cortex represent different parts of the body, albeit with disproportionate sizes (see Fig. 3.14, p. 58). Lesions here result in weakness (paresis) or paralysis of the corresponding body parts. Inside the fold of the frontal and temporal lobes formed by the Sylvian fissure
is the primary taste cortex.
Premotor Division
Situated just anterior to the precentral area, the premotor and supplementary motor areas have been identified as the site in which the integration of motor skills and learned action sequences takes place (A.R. Damasio, Anderson, and Tranel, 2011; Mendoza and Foundas, 2008; Nilsson et al., 2000). Premotor areas participate in afferent/efferent loops with the basal ganglia and thalamus; the looped interconnections are targeted to specific sites in both cortical and subcortical structures (Middleton and Strick, 2000a,b; Passingham, 1997). Lesions here do not result in loss of the ability to move, but rather disrupt the integration of the motor components of complex acts, producing discontinuous or uncoordinated movements and impaired motor skills, and may also affect limb strength (Jason, 1990; Mesulam, 2000b). Related manifestations include motor inattention, hypokinesia (sluggish movement activation), motor impersistence (reduced ability to maintain a motor act; e.g., eye closure, tongue protrusion), and perseveration (Heilman and Watson, 1991). These disorders affect patients with right-sided lesions to the premotor region much more frequently than patients with comparable lesions on the left (50% vs. 10%) (Seo et al., 2009). The supplementary motor area (SMA) mediates preparatory arousal to action at a preconscious stage in the generation of movement with critical contributions to the execution of complex motor response patterns already in the behavioral repertoire (Mendoza and Foundas, 2008). Thus, lesions in this area may disrupt the volitional aspects of movement, leading to the rather bizarre syndrome of akinetic mutism in which patients do not move or talk, despite the preserved basic ability to do both (J.W. Brown, 1987; A.R. Damasio and Van Hoesen, 1983). Patients with akinetic mutism produce no speech even when spoken to, and facial expressions are few.
Purposeful, goal-directed movements are also lacking except for some “automatic” and internally prompted behaviors such as going to the bathroom. These patients act as though they have lost the drive, motivation, or “will” to interact with their environment. Akinetic mutism tends to be more severe and long lasting when the damage to the supplementary motor area is bilateral, whereas unilateral lesions produce a more transient form of the condition. Human neuroimaging studies and electrophysiological studies in monkeys have also suggested that the anterior premotor regions provide a key substrate for planning and
organizing complex motor behaviors (Abe and Hanakawa, 2009). In the left hemisphere, lesions in the portion of the motor association area that mediates the motor organization and patterning of speech may result in speech disturbances with—as their common feature—disruption of speech production but intact comprehension. These deficits may range in severity from mild slowing and reduced spontaneity of speech production (Stuss and Benson, 1990) to total suppression of speech (D. Caplan, 2011). Other alterations in speech production may include stuttering, poor or monotonous tonal quality, or diminished control of the rate of speech production. Apraxia of speech (oral apraxia) can occur with lesions in this area (Luria, 1966; Ogar et al., 2005). Patients with this condition display disturbances in organizing the muscles of the speech apparatus to form sounds or in patterning groups of sounds into words. This may leave them incapable of fluent speech production although their ability to comprehend language is usually unimpaired and they are not aphasic in the classic sense. Closely associated with the supplemental motor area mediating speech mechanisms are those involved in the initiation and programming of fine hand movements (Jonas, 1987; Vuilleumier, 2001), so it is not surprising that severe agraphia can follow lesions here (Roeltgen, 2011). Damage to the premotor cortex has been associated with ideomotor apraxia (slowing in organizing or breakdown in organization of directed limb movements) (Leiguarda, 2002; Liepmann, 1988). Defects on other visuomotor tasks that make significant demands for generation or organization of motor behavior are also common with premotor lesions (Benton, 1968; Jones-Gotman and Milner, 1977). 
The left frontal operculum (the area lower on the lateral slope of the left prefrontal cortex and close to the premotor division, numbered by Brodmann as areas 44 and 45) contains the classic motor speech area, or Broca’s area (for a broad-based review, see Grodzinsky and Amunts, 2006). This region serves as “the final common path for the generation of speech impulses” (Luria, 1970, p. 197). Lesions to this area give rise to Broca’s (or efferent, motor) aphasia which involves defective symbol formulation as well as a breakdown in the orderly production of speech (see Table 2.1, p. 34). Patients with larger lesions, and/or when damage extends into subcortical structures and the anterior insular cortex, usually have a more severe Broca’s aphasia, with limited improvement. Lesions in corresponding areas on the right may contribute to fragmented or piecemeal thinking reflected most clearly in impairments of perceptual organization and planning. Expressive amusia or avocalia (inability to sing) can occur with lesions of either frontal lobe but may be associated with aphasia
when lesions are on the left. Other activities disturbed by lesions involving the right premotor area include diminished grip strength; motor impersistence may also appear with lesions in this area (Seo et al., 2009). Lesions to the right hemisphere area homologous with Broca’s area on the left have been linked to defects in paralinguistic communication, especially aprosodia (defective melodic contour in speech expression) (E.D. Ross, 2000). These patients may lose the ability for normal patterns of prosody and gesturing. Their communication is characterized by flat, monotonic speech, loss of spontaneous gesturing, and impaired ability to impart affective contours to their speech (i.e., to implement emotional tones in speech, such as happiness, sadness, etc.), but without deficits in the formal aspects of propositional speech that are typical of the aphasias.
Prefrontal Division
The cortex and underlying white matter of the frontal lobes is the site of interconnections and feedback loops between the major sensory and motor systems, linking and integrating all components of behavior at the highest level (Fuster, 1995; Pandya and Yeterian, 1990). Pathways carrying information about the external environment from the posterior cortex—of which about 60% comes from the heteromodal association cortex and about 25% from secondary association areas (Strub and Black, 1988)—and information about internal states from the limbic system converge in the anterior portions of the frontal lobes, the prefrontal cortex. Thus, the prefrontal lobes are where already correlated incoming information from all sources—external and internal, conscious and unconscious, memory storage and visceral arousal centers—is integrated and enters ongoing activity (Fuster, 2003). “The human prefrontal cortex attends, integrates, formulates, executes, monitors, modifies, and judges all nervous system activities” (Stuss and Benson, 1987). The prefrontal cortex has been assigned the loftiest of rubrics, including “the seat of consciousness” (Perecman, 1987), the “organ of civilization” (G.A. Miller, Galanter, and Pribram, 1960), and “the brain’s CEO” (E. Goldberg, 2009). These terms are not without merit, as the prefrontal lobes subserve what are arguably the highest level, the most sophisticated, and the most quintessentially human of behaviors (A.R. Damasio, Anderson, and Tranel, 2011; Van Snellenberg and Wager, 2009). Even though the prefrontal lobes provide the anatomical platform for the most complex behaviors, lesions here tend not to disrupt basic and more
elementary cognitive functions as obviously as do postcentral lesions. In fact, a classic and still accurate tenet is that, since prefrontal lesions often leave patients with no obvious cognitive impairments (e.g., see Hebb, 1939), their performances on neuropsychological assessment can be remarkably defect-free. Rather, prefrontal lobe damage may be conceptualized as disrupting reciprocal relationships between the major functional systems—the sensory systems of the posterior cortex and the limbic-memory system with its interconnections to subcortical regions involved in arousal, affective, and motivational states—and effector mechanisms of the motor system. Nauta (1971) characterized prefrontal lobe disorders as “derangement of behavioral programming.” Fuster (1994) drew attention to a breakdown in the temporal organization of behavior with prefrontal lobe lesions which manifested both in deficient integration of immediate past experience (situational context) with ongoing activity and in defective planning. “The prefrontal cortex plays the central role in forming goals and objectives and then in devising plans of action required to attain these goals. It selects the cognitive skills required to implement the plans, coordinates these skills, and applies them in a correct order. Finally, the prefrontal cortex is responsible for evaluating our actions as success or failure relative to our intentions. The prefrontal cortex is also critical for forming abstract representations of the environment as well as of complex behaviors” (E. Goldberg, 2009, pp. 22–23).
Prefrontal lobe disorders have more to do with “how” a patient responds than with the “what”—the content—of the response. Prefrontal lobe patients’ failures on test items are more likely to result from an inappropriate approach to problems than from lack of knowledge or from perceptual or language incapacities per se. For example, some patients with frontal lobe damage (almost always involving the right frontal lobe) call item 1 on the Hooper Visual Organization Test “a duck” (see Fig. 10.19, p. 452) and then demonstrate that they understand the instructions (to figure out what the cut-up drawings would represent if put together) by answering items 2 and 3 correctly. In such cases, the completed “flying duck” shape of the top piece in item 1 appears to be a stronger stimulus than the directions to combine the pieces. These patients demonstrate accurate perception and adequate facility and accuracy in naming or writing but get derailed in carrying out all of an intentional performance—in this case by one strong feature of a complex stimulus.
Prefrontal subdivisions
The prefrontal portion of the frontal lobes can be further subdivided according to relatively different sets of behavioral disorders that tend to occur with
relatively separable lesion sites (Fuster, 2010; Van Snellenberg and Wager, 2009). The three major subdivisions are the ventromedial prefrontal cortex, the dorsolateral prefrontal cortex, and the superior medial prefrontal cortex. Each of these regions has connections to different thalamic nuclei (Brodal, 1981; Mayes, 1988), as well as interconnections with other cortical and subcortical structures. Most of these are two-way connections with neural pathways projecting both to and from the prefrontal cortex (E. Goldberg, 2009).
Ventromedial prefrontal cortex (vmPFC). This area plays a key role in impulse control and in regulation and maintenance of set and of ongoing behavior. It encompasses the medial part of the orbital region and the lower part of the medial prefrontal cortex, including Brodmann areas 11, 12, 25, and 32 and the mesial aspect of 10 and 9. Damage here can result in disinhibition and impulsivity, with such associated behavior problems as aggressive outbursts and sexual promiscuity (S.W. Anderson, Bechara, et al., 1999; Eslinger, 1999a; Grafman, Schwab, et al., 1996). These patients’ ability to be guided and influenced by future consequences of their actions may be disrupted, a problem that can be assessed with a test such as the Iowa Gambling Task (Bechara, A.R. Damasio, et al., 1994, pp. 681–683). Many patients with vmPFC damage develop problems with social conduct, as well as defects in planning, judgment, and decision making (A.R. Damasio, Anderson, and Tranel, 2011). The array of impairments that follows vmPFC damage has been likened to “sociopathy” (Barrash, Tranel, and Anderson, 2000; A.R. Damasio, Tranel, and H. Damasio, 1990). This allusion helps convey the remarkable lack of foresight and poor judgment of many vmPFC patients although such patients, unlike the classic “psychopath,” tend not to harm others either aggressively or deliberately.
Provided that damage does not include the basal forebrain, such patients do not generally develop memory disturbances, and they are remarkably free of cognitive defects (A.R. Damasio, Anderson, and Tranel, 2011; Stuss and Benson, 1986). Dramatic development of abnormal social behavior can occur with prefrontal brain injury, often due to trauma (TBI, see pp. 215–216), especially damage to the vmPFC (S.W. Anderson, Barrash, et al., 2006). These patients have a number of features in common, including an inability to organize future activity and hold gainful employment, diminished capacity to respond to punishment, a tendency to present an unrealistically favorable view of themselves, and a tendency to display inappropriate emotional reactions. Blumer and Benson (1975) described a personality type, which they termed pseudopsychopathic, that characterized patients with orbital damage; the salient
features were childishness, a jocular attitude, sexually disinhibited humor, inappropriate and nearly total self-indulgence, and utter lack of concern for others. Stuss and Benson (1984, 1986) emphasized that such patients demonstrate a virtually complete lack of empathy and awareness of others’ needs and feelings. In this respect they can be much like a two-year-old child. Other notable features include impulsivity, facetiousness, diminished anxiety, and little thought for the future. Not surprisingly, such disturbances tend to have repercussions throughout the behavioral repertoire, even when basic cognitive functions are not degraded. These behavior characteristics were observed in Phineas Gage, the first person with a clearly identified prefrontal injury (an iron rod was blown through the front part of his head in a dynamiting accident with subsequent profound personality alterations) whose behavioral alterations were well-described (see Macmillan, 2000 for a collection of stories, reports, and observations of this laborer who, following the accident, never worked again).
Dorsolateral prefrontal cortex (dlPFC). A vast expanse of cortex occupying Brodmann areas 8, 9, 46, and 10 is included in the dlPFC. Functional neuroimaging studies, more so than lesion studies, have linked the dlPFC to working memory as one of its major functions: one early review cited more than 60 such studies (Cabeza and Nyberg, 2000). Goldman-Rakic (1998) asserted that working memory is more or less the exclusive memory function of the entire prefrontal cortex, with different prefrontal regions being connected with different domains of operations. She posited further that the dlPFC has a generic function: “on-line” processing of information or working memory in the service of a wide range of cognitive functions. However, lesion studies in humans have not yielded many compelling examples supportive of the link between the dlPFC and working memory: patients with damage in the dlPFC generally achieve scores within normal limits on standard measures of working memory (A.R. Damasio, Anderson, and Tranel, 2011). The main contribution of the frontal lobes to working memory may be in executive control over mnemonic processing, rather than working memory per se (Postle et al., 1999; Robbins, 1996). Consistent with this hypothesis, Working Memory Index (WAIS-III) scores of lesioned patients map onto the left posterior frontal and parietal cortex, not the prefrontal cortices (Glascher et al., 2009). The dlPFC appears to be involved in higher order control, regulation, and integration of cognitive activities. As Goldman-Rakic inferred, processing in the dlPFC does occur through multiple neural circuits to and from relevant sensory, motor, and limbic areas that integrate attention, memory, motor, and
affective dimensions of behavior. Damage to this sector has been linked to intellectual deficits (Stuss and Benson, 1986). Specifically, a fairly consistent run of studies, especially from functional imaging research, supports a role for the dlPFC in “fluid” (i.e., problem-solving) intelligence, as well as the more general construct of “g,” or what has traditionally been defined as “general intelligence.” Activation in dlPFC has been reported in “high g” tasks that appear to require problem solving, especially on unfamiliar and novel tasks such as the Raven Progressive Matrices (pp. 629–631) and similar reasoning tests (see Glascher, Rudrauf, et al., 2010). These findings suggest that a specific sector of prefrontal cortex—the polar aspect of left Brodmann area 10—may play a unique role in performance on traditional mental ability tests. Interestingly, this region has been associated with increased activity in fMRI studies during a variety of higher order cognitive processing (Christoff, Prabhakaran, et al., 2001; Koechlin, Basso, et al., 1999; Ramnani and Owen, 2004). Thus, the left anterior dorsolateral prefrontal region may be of especial importance for overall “general intelligence” as defined by traditional test scores or grades on academic subjects. The dlPFC has been linked to the verbal regulation of behavior (Luria and Homskaya, 1964). For example, verbal fluency, as measured by the ability to generate words under certain stimulus constraints (e.g., letter, category, see pp. 693–697), is notably impaired in many patients with dorsolateral lesions, especially when lesions are bilateral or on the left (Benton, 1968; Stuss, Alexander, Hamer, et al., 1998). Unilateral right dorsolateral lesions may impair fluency in the nonverbal domain (Jones-Gotman and Milner, 1977), a capacity that can be measured with “design fluency” tasks (pp. 697–698) that putatively provide a nonverbal analog for verbal fluency tests.
Superior medial prefrontal lobes (medial prefrontal cortex: mPFC). This region is formed by the medial walls of the hemispheres above the vmPFC sector, including the anterior cingulate cortex. Lesions here or subcortical lesions that involve pathways connecting the cortex between and just under the hemispheres with the drive and affective integration centers in the diencephalon are most apt to affect social and emotional behavior by dampening or nullifying altogether capacities for emotional experience and for drive and motivation (A.R. Damasio, Anderson, and Tranel, 2011; A.R. Damasio and Van Hoesen, 1983). The degree to which emotions and drive are compromised tends to be highly correlated, suggesting that affect and drive are two sides of the same coin: Frontally damaged patients with loss of affective capacity will have low drive states, even for such basic needs as food or drink. With only mildly muted emotionality, life-sustaining drives will remain intact
but initiation and maintenance of social or vocational activities as well as sexual interest may be reduced. Patients with severe damage can become apathetic. Overlap between the prefrontal and premotor divisions of the medial prefrontal lobes can be seen, as lesions in this region frequently involve parts of both areas. In Ken Kesey’s book, One flew over the cuckoo’s nest (1962; movie, 1975), the anti-hero, Randle McMurphy, finds himself in the Oregon State Hospital for bucking authority in a prison camp for short-term offenders. He continues to buck authority in this psychiatric hospital until he is punished for his unremitting recalcitrance with a surgical undercut to his frontal lobes. The consequences are as expected: this once lively, lusty, and fiercely independent man becomes an apathetic dullard—a condition his best friend finds intolerable …
The mPFC is also closely involved in the so-called default mode network (DMN) of the brain that functional imaging research suggests is more active when the brain is at “rest”; i.e., when the individual has been instructed to “do nothing at all” (Raichle, 2009; Raichle and Snyder, 2007). In contrast, the DMN becomes less active as soon as any task is engaged. Recent investigations into the functional significance of the DMN have focused on the primary role of the mPFC in subjective, self-focused cognitive processes (Buckner, Andrews-Hanna, and Schacter, 2008; Gusnard et al., 2001; Northoff et al., 2006). As a hub of the DMN, the mPFC is not only highly active at rest, but is also engaged during a variety of self-referential processing tasks. For example, mPFC activity has been consistently found in tasks assessing self-knowledge of personality traits and affective valence (W.M. Kelley et al., 2002; Moran et al., 2006), autobiographical memory retrieval (Andreasen, O’Leary, et al., 1995; Craik, Moroz, et al., 1999; Macrae et al., 2004), self-face recognition (J. Keenan et al., 2000; Kircher et al., 2001), first-person perspective taking (D’Argembeau et al., 2009; Vogeley et al., 2003), mind wandering (Christoff, Gordon, et al., 2009; Mason et al., 2007), and mental simulation and future thinking (Buckner and Carroll, 2007; Szpunar et al., 2007). In a more general sense, the mPFC may serve to direct attention to ongoing internal states (physiological, mental, and affective) and metacognitive processes critical for the representation of the self and self-awareness (Buckner and Carroll, 2007; Gusnard et al., 2001; Wicker et al., 2003).
Anterior cingulate cortex (ACC). Functional imaging studies have implicated this part of the mPFC in various cognitive, executive, and attentional abilities, supporting clinical observations (R.A. Cohen et al., 1999; Danckert, Maruff, et al., 2000).
Botvinick, Braver, and colleagues (2001) proposed a unified theory for the role of the ACC in monitoring errors and conflict resolution, suggesting that error monitoring may lead to adaptive changes in top-down attentional processes that enhance task performance. For example,
activity in the ACC increases during error commission in a go/no-go task when subjects fail to withhold a prepotent response to a target stimulus (Braver, Barch, et al., 2001). Furthermore, ACC activity following error commission is thought to signal response conflict in order to facilitate adjustments in cognitive control processes by engaging dorsolateral prefrontal cortices (Gratton et al., 1992; Koski and Paus, 2000). Other theories propose that the ACC is necessary for appropriate response selection when making comparative evaluations of outcomes based on past experience (Rushworth et al., 2004). More generally, the ACC may play a role in monitoring and evaluating outcomes by initiating top-down control mechanisms to resolve conflict by enhancing attentional processing and task performance (Botvinick, Cohen, and Carter, 2004; Gehring and Knight, 2000). The posterior cingulate receives most projections from the hippocampus and, as such, is part of the neural pathway for memory (Mesulam, 2000b).
Orbitofrontal region. Structures involved in the primary processing of olfactory stimuli are situated in the base of the frontal lobes; hence, odor discrimination is frequently affected by lesions here. Another mechanism that can lead to impaired odor discrimination (anosmia, loss of sense of smell) is shearing or tearing injuries to the olfactory nerves running along the base of the mesial orbital prefrontal lobes. This is fairly common in severe head injuries incurred in motor vehicle accidents, for example, when major forces (e.g., from sudden acceleration/deceleration) cause the brain to move disruptively across inner bony protrusions of the orbital surface of the skull (Costanzo and Miwa, 2006; P. Green, Rohling, et al., 2003; Wu and Davidson, 2008). Thus, anosmia frequently accompanies the behavioral disorders associated with orbitofrontal damage.
Some investigators have found that the presence and degree of anosmia is a useful predictor of, or proxy for, the severity of damage in this region, and even of behavioral outcome (Callahan and Hinkebein, 1999; Dileo et al., 2008; but see Greiffenstein, Baker, and Gola, 2002, for a different conclusion). Diminished odor discrimination may also occur with lesions in the limbic system nuclei lying within the temporal lobes and with damage to temporal lobe pathways connecting these nuclei to the orbitofrontal olfactory centers (p. 83). This effect typically appears with right but not left temporal pathway lesions (Martinez et al., 1993).
Temporal lobe connections to the orbitobasal forebrain are further implicated in cognitive functioning. Patients with lesions here are similar to patients with focal temporal lobe damage in displaying prominent modality-specific learning problems along with some diminution in reasoning abilities (Barr and Nakhutina, 2009; Salazar, Grafman, Schlesselman, et al., 1986).
Lateralization of prefrontal functions
Many of the basic distinctions between left and right hemisphere functions (e.g., summarized in Table 3.1, p. 61) obtain for the prefrontal lobes as well. Although the degree of lateralization of function may not be as marked in prefrontal regions as it is in the posterior cortex, it is useful as a starting point to think of prefrontal functions in a “left-verbal,” “right-nonverbalizable” dichotomy. For example, as noted above, decreased verbal fluency and impoverishment of spontaneous speech tend to be associated with left frontal lobe lesions. Other verbal problems associated with left frontal damage (especially in general proximity to Broca’s area) involve the organization of language and include disrupted and confused narrative sequences, simplified syntax, incomplete sentences and clauses, descriptions reduced to single words and distorted by misnaming and perseveration, and a
general impoverishment of language (M.P. Alexander, Benson, and Stuss, 1989). Conversely, the ability to invent unique designs (measured by design fluency tasks) is depressed with right anterior lesions (Jones-Gotman, 1991; Jones-Gotman and Milner, 1977). Expressive language problems—albeit outside the formal domain of “aphasia”—can also affect patients with right frontal damage (Kaczmarek, 1984, 1987). Their narrative speech may show a breakdown in internal structure due to poor overall organization of the material. Stereotyped expressions are relatively common. However, Stuss and Benson (1990) emphasize that prefrontal language problems arise from self-regulatory and organizing deficits that are “neither language nor cognitive problems” (p. 43) but rather, are the product of impaired executive functions. Working memory also tends to follow basic left-right laterality principles. Functional imaging studies show preferential activation in the left dorsolateral prefrontal sector by verbal working memory tasks, and in the right dorsolateral prefrontal sector by spatial working memory tasks (Buckner and Tulving, 1995; D’Esposito, 2000b; E.E. Smith and Jonides, 1997, 1998). This pattern was demonstrated in a prototypical neuroimaging study in which participants saw a continuous stream of single letters appearing at random locations around a central cross (E.E. Smith, Jonides, and Koeppe, 1996). In the verbal memory condition, participants were asked to decide whether or not each new letter matched the letter presented three stimuli previously (i.e., “3-back”), regardless of location. In the spatial memory condition, participants were asked to decide whether or not the position of each new letter matched the position of the letter presented three stimuli previously, again “3-back,” regardless of letter identity.
Prefrontal asymmetry has also been connected to distinctions between episodic and semantic memory, and between the processes of encoding and retrieval (Tulving, Kapur, et al., 1994). Tulving and his colleagues suggested that left prefrontal structures are specialized for the retrieval of general knowledge (semantic memory) and for encoding novel aspects of incoming information into episodic memory (specific unique events), whereas right prefrontal structures are specialized for episodic memory retrieval, and in particular, for retrieval “attempts” that occur in episodic mode (as when one attempts to remember a specific, unique event—e.g., “Where were you when you heard about the nine-eleven attacks?”) (Nyberg, Cabeza, and Tulving, 1996; Tulving, Markowitsch, et al., 1996). A number of studies have supported this theory in showing that the left prefrontal cortex is primarily involved in encoding and the right is preferentially activated during retrieval (Haxby, Ungerleider, et al., 1996; Ragland, Gur, et al., 2000). The validity of this
dichotomy has been challenged, however, as it is likely that differences in the roles of the left and right hemispheres depend on the particular memory demands (e.g., episodic, semantic) as well as the type of stimulus to be learned (Iidaka et al., 2000; A. Martin, Wiggs, and Weisberg, 1997). In other words, simple left-right, input-output, or episodic-semantic divisions of labor cannot explain these much more complex, interdependent, and interactive processing activities. Milner and Petrides (1984) suggested that the left prefrontal cortex is important for control of self-generated plans and strategies and the right is important for monitoring externally ordered events. Using different cognitive tasks, E. Goldberg, Podell, and Lovell (1994) found a similar distinction. In particular, they suggest that the left prefrontal system is responsible for guiding cognitive selection by working memory-mediated internal contingencies, while the right prefrontal system makes selections based on external environmental contingencies. While their data supported this lateralization in men, women did not show a lateralized effect. Other studies have found intriguing evidence of sex-related differences in aspects of lateralized prefrontal functions. For example, Tranel, H. Damasio, Denburg, and Bechara (2005) discovered a functional asymmetry in the vmPFC that was modulated by sex of participant. Men showed impairments in social conduct, emotional regulation, and personality with unilateral damage to the right vmPFC, but not when damage was confined to the left side. The reverse pattern was seen in women—women with left-sided damage to the vmPFC showed impairments in social conduct, emotional regulation, and personality, but women with right-sided unilateral damage to the vmPFC did not. These asymmetric patterns have been interpreted to suggest that the two sexes may rely on different strategies in the domains of social conduct and emotional processing/personality.
This, in turn, could reflect differing social strategies and divergent social goals. For example, the left-sided dominance observed in women may reflect a need for expertise in interpersonal relationships (and this could be related to factors such as the need to bear and rear children and to maintain in-group cohesion), whereas the right-sided dominance observed in men could reflect a need for expertise in inter-group relations (e.g., warfare, out-group relations, leverage of critical resources) (Koscik et al., 2010). Complicating understanding of these findings are data indicating that for some frontal lobe functions and some neurotransmitter pathways, women do not show this distinctive pattern of lateralization (E. Goldberg, 2009; E. Goldberg, Podell, and Lovell, 1994; Oddo et al., 2010). Prosody may be muted or lost in patients with right prefrontal damage (e.g.,
Frisk and Milner, 1990; E.D. Ross, 1981). Picture descriptions may be faulty, mostly due to misinterpretations of elements but also of the picture as a whole. Perhaps most important is a compromised capacity to adapt to their disabilities due to a tendency for unrealistic evaluations of their condition (Jehkonen et al., 2006; Kaczmarek, 1987; Murrey, Hale, and Williams, 2005). For some of these patients, their personal and social awareness seems frozen in the time prior to the onset of brain damage. Other kinds of impaired evaluations have also been noted in these patients, such as inaccurate estimations of prices (M.L. Smith and Milner, 1984) and of event frequency (M.L. Smith and Milner, 1988). Stuss and colleagues have stressed the importance of the right frontal lobe in emotional expression, modulation, and appreciation (Shammi and Stuss, 1999; Stuss and Alexander, 1999). In addition, the right prefrontal cortex may be a necessary component in self-recognition and self-evaluation (J.P. Keenan et al., 2000). Autobiographical memory, too, may engage networks within the right frontotemporal region (G.R. Fink et al., 1996; J.P. Keenan et al., 2000), although lateralization was not found for young women (Oddo et al., 2010).
Prefrontal cortex and attention
The prefrontal cortex is among the many structures involved in attention. Significant frontal activation takes place during selective attention activities in intact subjects (Mesulam, 2000b; Swick and Knight, 1998). Prefrontal cortex mediates the capacity to make and control shifts in attention (Mirsky, 1989). Luria (1973a) observed that the prefrontal cortex “participates decisively in the higher forms of attention,” for example, in “raising the level of vigilance,” in selectivity, and in maintaining a set (see also Marklund et al., 2007). The prefrontal cortex and anterior cingulate cortex are engaged when subjects must concentrate on solving new problems but not when attention is no longer required because the task has become automatic (Luria, 1973a; Shallice, Stuss, et al., 2008; see pp. 36–37). Vendrell and his colleagues (1995) implicated the right prefrontal cortex as important for sustained attention. Also, working memory tasks that call for temporary storage and manipulation of information involve the frontal lobes (Braver, Cohen, et al., 1997; Dubois, Levy, et al., 1995; Fuster, 1999). Prefrontal areas are involved in inhibiting distraction effects (Dolcos et al., 2007); thus it is not surprising that problems with working memory in patients with prefrontal damage appear to be due at least in part to their poor ability to withstand interference from what they may be attempting to keep in mind, whether from the environment or from their own associations (Fuster, 1985; R.T. Knight and Grabowecky, 2000; Müller and
Knight, 2006). Moreover, these patients may be sluggish in reacting to stimuli, unable to maintain an attentional focus, or highly susceptible to distractions (Stuss, 1993). A specific attentional function associated with the prefrontal cortex is “divided attention.” Patients with frontal lesions frequently have difficulty when divided attention is required, such as performing two tasks at once (Baddeley, Della Sala, et al., 1996). Difficulties on Part B of the Trail Making Test (a timed task requiring number-letter sequencing while switching focus, pp. 422–423) occur when this capacity is impaired. Functional neuroimaging studies also support prefrontal cortex involvement in dual task performance but not when either task is performed separately (D’Esposito et al., 1995). Left visuospatial inattention can occur with right anterior lesions (Mesulam, 2000b), but is much less common with frontal than with parietal injuries (Bisiach and Vallar, 1988). Heilman, Watson, and Valenstein (2011) suggest that frontal inattention may be associated with arousal and intentional deficits. Others have interpreted this problem as reflecting involvement with one of the multiple sites in the visuoperceptual network (Mesulam, 2000b; Rizzolatti and Gallese, 1988). Some patients with frontal lesions seem almost stuporous unless actively stimulated. Others can be so distractible as to appear hyperactive. Still other patients with frontal damage may show little or no evidence of attentional disturbances, leaving open to conjecture the contributions of subcortical and other structures in the attention-impaired patients.
Prefrontal cortex and memory
Disorders of memory are common in patients with prefrontal lesions. However, when carefully examined, these patients frequently turn out not to have a disorder of memory functions per se, but rather, disorders of one or more functions that facilitate memory, such as learning strategies, retrieval strategies, organizational approaches to learning and retrieval, and the many other cognitive capacities that facilitate efficient and effective acquisition, consolidation, retention, and retrieval of information. The phenomenon of “frontal amnesia” demonstrates how inertia and executive disorders in particular can interfere with cognitive processes important for memory (Darby and Walsh, 2005; Kopelman, 2002a; Stuss and Benson, 1984). Patients with frontal amnesia, when read a story or a list of words, may seem able to recall only a little—if any—of what they heard and steadfastly assert they cannot remember. Yet, when prompted or given specific questions (e.g., “Where did the story take place?” rather than “Begin at the
beginning and tell me everything you can remember”), they may produce some responses, even quite full ones, once they get going. The same patients may be unable to give their age although they know the date, their year of birth, and how to solve formally presented subtraction problems. What they cannot do, in each of these examples, is spontaneously undertake the activity that will provide the answer—in the first case, selecting the requested information from memory and, in the second case, identifying a solution set for the question and acting on it. Not being able to “remember to remember,” a capacity that has been referred to as “prospective memory,” is an aspect of frontal amnesia involving time awareness and monitoring (Kliegel et al., 2007; C.P. McFarland and Glisky, 2009). It creates serious practical problems for these patients who may forget to go to work, to keep appointments, even to bathe or change clothes as needed (Cockburn, 1996a; Kliegel et al., 2007). Frontal amnesia problems constitute a serious obstacle to the remediation of the behavioral problems associated with frontal lobe damage since, if it does not occur to trainees to remember what they were taught or supposed to do (or not do), then whatever was learned cannot be put to use. A 35-year-old mechanic sustained compound depressed fractures of the “left frontal bone” with cortical lacerations when a machine exploded in his face. Following intensive rehabilitation he was able to return home where he assumed household chores and the daytime care of his three-year-old son. He reported that he can carry out his duties if his wife “leaves me a note in the morning of some of the things she wants done, and if she didn’t put that down it wouldn’t get done because I wouldn’t think about it. So I try to get what she’s got on her list done.
And then there’re lists that I make up, and if I don’t look at the list, I don’t do anything on it.” Two years after the accident and shortly before this interview, this man’s verbal performances on the Wechsler tests were mostly within the average range excepting a borderline defective score on Similarities (which calls on verbal concepts); on the predominantly visual tests his scores were at average and high average levels. All scores on formal memory testing (Wechsler Memory Scale-Revised) were at or above the mean for his age, and 4 of the 13 listed on the Record Form were more than one standard deviation above the mean.
In providing structure and organization to stimulus encoding, the frontal lobes facilitate memory in a variety of ways (P.C. Fletcher, Shallice, and Dolan, 1998). Thus, some of these patients’ memory problems may be related to diminished capacity to integrate temporally separated events (Fuster, 1985) or to keep learning circuits open (Leon-Carrion et al., 2010). Another manifestation of such a “temporal integration” defect is difficulty in making recency judgments (e.g., “When was the last time you spoke to your mother on the phone?”) (Milner, 1971; Petrides, 1989). Poor recall of contextual information associated with what they may remember—impaired source memory—is also common in patients with frontal damage (Janowsky,
Shimamura, and Squire, 1989). The patients may recall an event or a person but be unable to situate the memory in its appropriate context for time and place. Patients with frontal lesions tend not to order or organize spontaneously what they learn, although appropriate cueing may elicit adequate recall (Jetter et al., 1986; Zanini, 2008). This may account for their proportionately better performances on recognition than on recall formats where retrieval strategies are less important (Janowsky, Shimamura, Kritchevsky, and Squire, 1989). The frontal lobes are necessary for criterion setting and monitoring during retrieval of memories, particularly on difficult tasks (P.C. Fletcher, Shallice, and Frith, 1998; Incisa della Rocchetta and Milner, 1993). Failure in these functions can lead to poor recall or false memories (Schacter, Norman, and Koutstaal, 1998). Stuss and Benson (1987) showed how diminished control can affect the behavior of patients with prefrontal damage: they may be fully aware of what should be done, but in not doing it at the appropriate time, they appear to have forgotten the task (impaired prospective memory; see also Glisky, 1996). Patients with lesions in the medial basal region of the frontal lobes or with subcortical lesions in adjacent white matter may suffer a true amnestic condition that is pronounced and often accompanied by spontaneous and florid confabulation (fabrication of false and often improbable information to compensate for amnesia) (M.P. Alexander and Freedman, 1984; P. Malloy, Bihrle, et al., 1993). A 60-year-old retired teacher who had had a stroke involving the medial basal region of her left frontal lobe complained of back pain due to lifting a cow onto a barn roof. Five days later she reported having piloted a 200-passenger plane the previous day.
An intriguing aspect of time-related memory linked to the basal forebrain region immediately posterior to the orbital frontal cortices concerns the ability to situate autobiographical memories accurately in the time-line of one’s own life. Tranel and Jones (2006) studied this issue by requiring patients with basal forebrain damage to place autobiographical events on a time-line of their life; for example, patients had to indicate at what age in their life they had certain friends, pets, teachers, and the like. These patients were very impaired on this task as, on average, they misplaced information by more than five years (a much less accurate performance than that produced by patients with medial temporal lobe amnesia). Interestingly, the patients could recall the contents of autobiographical memory adequately. These findings implicate the basal forebrain in a system that provides strategic retrieval processes for proper dating of memories.
Prefrontal cortex and cognitive functions
Cognitive impairment associated with destruction or disconnection of frontal lobe tissue usually does not appear as a loss of specific skills, information, or even reasoning or problem-solving ability (Teuber, 1964). Many patients with frontal lobe lesions do not do poorly on ability tests in which another person directs the examination, sets the pace, starts and stops the activity, and makes all the discretionary decisions as is the procedure in many typical neuropsychological examinations (Brazzelli et al., 1994; Lezak, 1982a; Stuss, Benson, Kaplan, et al., 1983). The closed-ended questions of common fact and familiar situations and the well-structured puzzles with concrete solutions that make up standard tests of cognitive abilities are not likely to present special problems for many patients with frontal lobe injuries (A.R. Damasio, Anderson, and Tranel, 2011). Perseveration or carelessness may depress a patient’s scores somewhat but usually not enough to lower them into the range of formal “impairment.” The real world behavior of frontal lobe patients, however, is an entirely different story. Cognitive defects associated with frontal lobe damage tend to show up most clearly in the course of daily living and are more often observed by relatives and coworkers than by a medical or psychological examiner in a structured interview. Common complaints about such patients concern apathy, carelessness, poor or unreliable judgment, poor adaptability to new situations, and blunted social sensibility (Eslinger, Grattan, and Geder, 1995; Lezak, 1989). However, these are not really cognitive deficits per se, but rather, defects in processing one or more aspects of behavioral integration and expression. So-called frontal lobe “syndromes” include many behavioral disorders (Mendoza and Foundas, 2008; Sohlberg and Mateer, 2001; Stuss and Benson, 1986).
These are differentiable both in their appearance and in their occurrence (Cappa and Cipolotti, 2008; Van Snellenberg and Wager, 2009). Patients with prefrontal damage show an information processing deficit that reduces their sensitivity to novel stimuli and may help explain the stimulus-bound phenomenon common in many of these patients (Daffner et al., 2000; R.T. Knight, 1984; see below). Difficulty with working memory and impulsivity may interfere with learning or with performing tasks requiring delayed responses (Milner, 1971; R.J.J. Roberts and Pennington, 1996). Defective abstract thinking and sluggish response shifts can result in impaired mental efficiency (Janowsky, Shimamura, Kritchevsky, and Squire, 1989; Stuss and Benson, 1984). Diminished capacity for behavioral or mental flexibility can greatly limit imaginative or creative thinking (Eslinger and Grattan, 1993).
It can also constrain volition and adaptive decision making (E. Goldberg, 2009; E. Goldberg and Podell, 2000). Some of these defects may be aspects of stimulus boundedness which, in its milder forms, appears as slowing in shifting attention from one element in the environment to another, particularly from a strong stimulus source to a weak or subtle or complex one, or from a well-defined external stimulus to an internal or psychological event. Patients who are severely stimulus-bound may have difficulty directing their gaze or manipulating objects; when the condition is extreme, they may handle or look at whatever their attention has fixed upon as if their hands or eyes were stuck to it, literally pulling themselves away with difficulty. Others, on seeing usable objects (an apple, a fork), may irresistibly respond to them: e.g., eat the apple, go through eating motions with a fork, regardless of the appropriateness of the behavior for the situation—what Lhermitte (1983) termed “utilization behavior.” In describing these kinds of behavior defects as an “environmental dependency syndrome” and a pathological kind of “imitation behavior,” Lhermitte and colleagues (1986) called attention to the almost mandatory way in which these patients are driven by environmental stimuli (see also S. Archibald et al., 2001). Perseveration, in which patients repeat a movement, a response, or an act or activity long past the point where that movement or response has stopped being appropriate and adaptive, is a related phenomenon, but the stimulus to which the patients seem bound is one that they themselves generated (E. Goldberg, 2009; Hauser, 1999; Sandson and Albert, 1987). Such repetitive behaviors can seem almost involuntary and unwitting on the part of the patient. These patients often ignore environmental cues so that their actions are out of context with situational demands and incidental learning is reduced (Vilkki, 1988).
They may be unable to profit from experience, perhaps due to insufficient reactivation of autonomic states that accompanied emotionally charged (pleasurable, painful) situations (A.R. Damasio, Tranel, and H. Damasio, 1990), and thus can only make poor, if any, use of feedback or reality testing (Le Gall, Joseph, and Truelle, 1987; E.T. Rolls, 1998; Sohlberg and Mateer, 2001). Another curious problem that can emerge in patients with prefrontal damage is abnormal collecting and hoarding behavior (S.W. Anderson, H. Damasio, and Damasio, 2005). Patients with damage in mesial prefrontal cortex (including the right polar sector and anterior cingulate) may engage in massive pathological collecting and hoarding of useless objects—broken televisions, newspapers, tools, appliances, facial tissue, food items, and so on. This behavior can persist despite interventions and obvious negative consequences
for the patient. A right-handed man with 12 years of education underwent clipping of a ruptured anterior communicating artery aneurysm at age 27, and subsequently became, in his wife’s terms, “a packrat.” He began collecting assorted tools and materials such as scrap metal and wire, much of which he salvaged from neighbors’ garbage. He filled his basement and a garage with items that he did not use. Despite financial difficulties, he engaged in frequent impulsive buying of unneeded (and often expensive) items that attracted his attention while shopping for something entirely different. He accumulated multiple identical or near identical versions of many tools. Once purchased, he lost interest in the objects, often not even bothering to take them out of the shopping bags. Some items sat in their garage essentially untouched for over two decades, but he refused to consider discarding or selling any of his possessions. He was no longer able to find his tools or other needed items because of the volume and disarray of collected items. His collecting behavior remained consistent over 35 years following the neurologic event (Subject 2 in S.W. Anderson, H. Damasio, and Damasio, 2005).
Fragmentation or disorganization of premorbidly intact behavioral sequences and activity patterns appears to be an underlying problem for many patients with prefrontal damage (M.F. Schwartz et al., 1993; Truelle, Le Gall, et al., 1995; see also Grafman, Sirigu, et al., 1993). In some cases, patients with prefrontal damage may exhibit a dissociation between language behaviors and ongoing activity: they are less apt to use verbal cues (such as subvocalization) to direct, guide, or organize their ongoing behavior, with resultant perseveration, fragmentation, or premature termination of a response (K. Goldstein, 1948; Luria and Homskaya, 1964; Shallice, 1982). Activities requiring abilities to make and use sequences or otherwise organize activity are particularly prone to compromise by prefrontal lesions (Canavan et al., 1989; Zalla et al., 2001; Zanini, 2008), possibly due to reduced ability to refocus attention to alternative response strategies (Della Malva et al., 1993; Godefroy and Rousseaux, 1997; B. Levine, Stuss, Milberg, et al., 1998). For example, copying hand position sequences, especially when rapid production is required, is affected by frontal lobe lesions (Jason, 1986; Truelle, Le Gall, et al., 1995). Thus planning—which Goel and Grafman (2000) refer to as “anticipatory sequencing”—and problem solving, which require intact sequencing and organizing abilities, are frequently impaired in these patients (Shallice and Burgess, 1991; Vilkki, 1988). Defective self-monitoring and self-correcting are common problems with prefrontal lesions (Stuss and Benson, 1984). Even when simple reaction time is intact, responses to complex tasks may be slowed (Le Gall, Joseph, and Truelle, 1987). The frontal lobes have also been implicated in defects of time sense including recency judgments and timespan estimations and, in patients with bilateral frontal lobe damage, orientation in time (Benton, 1968; M.A. Butters, Kasniak, et al., 1994; Milner, Corsi, and
Leonard, 1991). These patients may make erroneous and sometimes bizarre estimates of size and number (Shallice and Evans, 1978). With all of these impediments to cognitive competency, it follows that patients with frontal lobe lesions often show little of the imagination or innovative thinking essential to creativity (Drago et al., 2011; Zangwill, 1966).
Behavior problems associated with prefrontal damage
Practical and social judgment is frequently impaired in patients with prefrontal damage (S.W. Anderson et al., 2006; Dimitrov et al., 1996). For many of these patients, social disability is often the most debilitating feature (Eslinger, Grattan, and Geder, 1995; Lezak, 1989; Lezak and O’Brien, 1988, 1990; Macmillan, 2000). Behavior disorders associated with prefrontal damage tend to be supramodal. Similar problems may occur with lesions involving other areas of the brain, but in these instances they are apt to be associated with specific cognitive, sensory, or motor disabilities. The behavioral disturbances associated with frontal lobe damage can be roughly classified into five general groups. 1. Problems of starting appear as decreased spontaneity, decreased productivity, decreased rate at which behavior is emitted, or decreased or lost initiative. In its milder forms, patients lack initiative and ambition but may be able to carry through normal activities quite adequately, particularly if these activities are familiar, well-structured, or guided. A college-educated, 56-year-old woman with no prior neurological difficulties had a successful career as a technical writer, but uncharacteristically had not attempted to find work after relocating to a new town. Her children observed other gradual but substantial changes in her behavior over a period of two years. Previously active in her community, she gradually curtailed her activities until she rarely left the house. Other changes in her behavior included poor personal hygiene, neglect of her home, and diminished emotional responsiveness. She lived off her savings but failed to pay her bills, resulting in the electricity and telephone service being cut off on many occasions. She previously had doted on her grandchildren but now showed no concern when told that she no longer could baby-sit them because of her careless oversight and the increasingly filthy state of her home.
Her children suspected she was depressed, but the patient generally denied that anything was wrong or different about her mood or behavior. She reluctantly agreed with her physician’s recommendation of an antidepressant medication, but this had no noticeable effect on her behavior. She refused to seek further care, but her family persisted until an appropriate diagnosis of a large bilateral meningioma growing from the orbital prefrontal region was made. The meningioma was resected in its entirety, and there was great improvement in her behavior. Five years post-surgery, executive dysfunction had become relatively subtle and stable.
More severely affected patients are apt to do little beyond routine self-care and home activities. To a casual or naïve observer, and often to their family and close associates, these patients appear to be lazy. Many can “talk a good
game” about plans and projects but are actually unable to transform their words into deeds. An extreme dissociation between words and deeds has been called pathological inertia, which can be seen when a frontal lobe patient describes the correct response to a task but never acts it out. Severe problems of starting appear as apathy, unresponsiveness, or mutism, and often are associated with superior medial damage (Eslinger, Grattan, and Geder, 1995; Sohlberg and Mateer, 2001). A railway crossing accident severely injured a 25-year-old schoolteacher who became totally socially dependent. She ate only when food was set before her so she could see it. The only activities she initiated were going to the bathroom and going to bed to sleep, both prompted by body needs. Only with questioning did she make up plans for Christmas and for a party for her aunt.
2. Difficulties in making mental or behavioral shifts, whether they are shifts in attention, changes in movement, or flexibility in attitude, appear as perseveration or cognitive rigidity. Some forms of perseveration can be described as stereotypy of behavior. Perseveration may also occur with lesions of other parts of the brain, but then it typically appears only in conjunction with the patient’s specific cognitive deficits (E. Goldberg and Tucker, 1979; Gotts and Plaut, 2004). In frontal lobe patients, perseveration tends to be supramodal—to occur in a variety of situations and on a variety of tasks. Perseveration may sometimes be seen as difficulty in suppressing ongoing activities or attention to prior stimulation. On familiar tasks it may be expressed in repetitive and uncritical perpetuation of a response that was once correct but becomes an uncorrected error under changed circumstances or in continuation of a response beyond its proper end point. Perseveration may occur as a result of lesions throughout the frontal lobes but particularly with dorsolateral lesions (Eslinger, Grattan, and Geder, 1995; Darby and Walsh, 2005). Patients with frontal lobe damage tend to perseverate in simple probabilistic reversal learning tasks in which participants have to shift their responses away from an initially rewarding stimulus to a previously irrelevant stimulus following subsequent failures (Fellows and Farah, 2003; Hornak et al., 2004; E.T. Rolls et al., 1994). Cicerone, Lazar, and Shapiro (1983) found that frontal lobe patients’ perseverations in reversal learning were not simply deficits in motor output but reflected an inability to suppress inappropriate hypotheses acquired over the initial course of learning. Patients with frontal lobe tumors were particularly defective in the ability to eliminate an irrelevant hypothesis despite being informed that it was incorrect; however, they were able to maintain a positively reinforced hypothesis throughout the task.
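The structure of a probabilistic reversal learning task can be sketched schematically (an illustrative sketch only, not the procedure of the cited studies; the function names, reward probability, and reversal point are hypothetical):

```python
import random

def run_task(choose, n_trials=100, p_reward=0.8, reversal_at=50, seed=0):
    """Simulate a two-choice probabilistic reversal learning task.
    `choose` is a strategy that maps the trial history to "A" or "B".
    Stimulus "A" is rewarded on 80% of trials at first; at the reversal
    point the contingency silently switches to stimulus "B"."""
    rng = random.Random(seed)
    rewarded = "A"
    history = []
    for t in range(n_trials):
        if t == reversal_at:
            rewarded = "B"   # silent reversal of the reward contingency
        choice = choose(history)
        correct = choice == rewarded
        # probabilistic feedback: even correct choices occasionally go unrewarded
        reward = rng.random() < (p_reward if correct else 1 - p_reward)
        history.append((choice, reward))
    return history

# A perseverative strategy that never updates its hypothesis, analogous to
# patients who cannot disengage from the initially reinforced stimulus:
persev = run_task(lambda history: "A")
```

After the reversal, this perseverative responder keeps choosing "A" and so is rewarded on only about 20% of the remaining trials; an adaptive strategy (e.g., shifting after a run of unrewarded choices) would recover the high reward rate.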
In a broader perspective, this result suggests that frontal lobe patients have a specific deficit in the ability to disengage from previously learned hypotheses, beliefs, or rules. It follows that patients with frontal lobe damage may also exhibit rigidity in their thinking without explicit behavioral perseveration. Asp and Tranel (2009) found that frontal lobe patients had stronger religious beliefs following their medical event, and were more inclined to religious fundamentalism, compared to nonneurologic medical patients. It was hypothesized that frontal lobe damage had disrupted the mechanism that falsifies beliefs, so that when frontal lobe patients are exposed to more extreme religious propositions, they have a bias to accept the propositions unquestioningly, resulting in increased religious beliefs. Collateral data from close friends or family supported this conclusion. A patient who had bilateral ventromedial prefrontal cortex damage following a tumor resection is a practicing Lutheran. She ranked as the most fundamentalist subject on Asp and Tranel’s (2009) fundamentalist scale. Her changes in religious beliefs are illustrated by observations from her husband of 51 years. He claimed that her belief in God was much stronger following her brain injury; she was a “new” person who is now a “strong believer in God and Heaven” and “feels overwhelmed that God did so many miracles.”
Further work examining patients holding other rigid beliefs may help determine whether and how prefrontal dysfunction may predispose to dogmatism. However, since even fairly extreme behavioral and attitudinal patterns of rigidity characterize some neurologically intact people, rigidity alone should be used cautiously as a sign of frontal lobe damage. 3. Problems in stopping—in braking or modulating ongoing behavior—show up in impulsivity, over-reactivity, disinhibition, and difficulties in holding back a wrong or unwanted response, particularly when it may either have a strong association value or be part of an already ongoing response chain. Affected patients have difficulty delaying gratification or rewards. These problems frequently come under the heading of “loss of control,” and these patients are often described as having “control problems.” Impulsivity and lack of anticipation of the future consequences of behavior are especially associated with lesions in the ventromedial prefrontal sector (Bechara, H. Damasio, and Damasio, 2000; Eslinger, Grattan, and Geder, 1995). A 49-year-old man sustained a severe closed head injury in a motor vehicle accident; his injuries included prefrontal hemorrhage. In the years following the accident, he experienced a generally good cognitive recovery, with scores gradually returning to within normal limits on a broad battery of neuropsychological tests. As the father of school-age children who were involved in basketball, volleyball, and other sports, he frequently attended school sporting events. Prior to the injury, he had been an enthusiastic and entirely appropriate supporter of his children’s athletic teams. Following the injury, he became unable to modulate his behavior during the excitement of his children’s sporting events. He was repeatedly expelled and forcibly removed from school sporting events due to his vociferous and vulgar berating of coaches, referees, and even student athletes.
He would acknowledge after such events that his behavior had been inappropriate and
embarrassing to his children and their team, and would vow to sit quietly at the next sporting event, but his poor self-control persisted and he was banned from all school events.
4. Deficient self-awareness results in an inability to perceive performance errors, to appreciate the impact one makes on others, to size up a social situation appropriately, and to have empathy for others (Eslinger, Grattan, and Geder, 1995; Prigatano, 1991b). When frontal damage occurs in childhood, the social deficits can be profound and may include impairments in acquiring social conventions and moral reasoning (S.W. Anderson, H. Damasio, Tranel, and Damasio, 2000; Max, 2005). Defective self-criticism is associated with tendencies of some frontal lobe patients to be euphoric and self-satisfied, to experience little or no anxiety, and to be impulsive and unconcerned about social conventions. The very sense of self—which everyday experience suggests is intrinsic to human nature—turns out to be highly vulnerable to frontal lobe damage (Stuss, 1991; Stuss and Alexander, 2000). Failure to respond normally to emotional and social reinforcers may be a fundamental deficit leading to inappropriate behavior (E.T. Rolls, Hornak, et al., 1994). Impaired self-awareness and social behavior often occur with lesions of the orbital cortex and related limbic areas (Sarazin et al., 1998). A 38-year-old former truck driver and athlete sustained a frontal injury in a motor vehicle accident. Although his cognitive test scores (on Wechsler ability and memory tests) eventually improved to the average range, he was unable to keep a job. Repeated placements failed because he constantly talked to coworkers, disrupting their ability to work. Eventually he was hired for a warehouse job that would take advantage of his good strength and physical abilities and put limited demands on cognitive skills and social competence. However, he wanted to show his coworkers that he was the best by loading trucks faster than anyone else. His speed was at the expense of safety. When he could not be persuaded to use caution, he was fired.
5. A concrete attitude, or what Goldstein (1944, 1948) called loss of the abstract attitude, is also common among patients with frontal lobe damage. This often appears in an inability to dissociate oneself from one's immediate surround and see the "big picture," resulting in a literal attitude in which objects, experiences, and behavior are all taken at their most obvious face value. The patient becomes incapable of planning and foresight or of sustaining goal-directed behavior. However, this defect is not the same as impaired ability to form or use abstract concepts. Although many patients with frontal lobe lesions do have difficulty handling abstract concepts and spontaneously generate only concrete ones, others retain high-level conceptual abilities despite a day-to-day literal-mindedness and loss of perspective.

CLINICAL LIMITATIONS OF FUNCTIONAL LOCALIZATION
Symptoms must be viewed as expressions of disturbances in a system, not as direct expressions of focal loss of neuronal tissue. A. L. Benton, 1981
A well-grounded understanding of functional localization strengthens the clinician’s diagnostic capabilities so long as the limitations of its applicability in the individual case are taken into account. Common patterns of behavioral impairment associated with well-understood neurological conditions, such as certain kinds of strokes, tend to involve the same anatomical structures with predictable regularity. For example, stroke patients with right arm paralysis due to a lesion involving the left motor projection area of the frontal cortex will generally have an associated Broca’s (motor or expressive) aphasia. Yet, the clinician will sometimes find behavioral disparities between patients with cortical lesions of apparently similar location and size: some ambulatory stroke victims whose right arms are paralyzed are practically mute; others have successfully returned to highly verbal occupations. On the other hand, aphasics may present with similar symptoms, but their lesions vary in site or size (De Bleser, 1988; Basso, Capitani, Laiacona, and Zanobio, 1985). In line with clinical observations, functional imaging studies show that many different areas of the brain may be engaged during a cognitive task (Cabeza and Nyberg, 2000; D’Esposito, 2000a; Frackowiak, Friston, et al., 1997) or in emotional response (Tamietto et al., 2007). For example: for even the relatively simple task of telling whether words represent a pleasant or unpleasant concept, the following areas of the brain showed increased activation: left superior frontal cortex, medial frontal cortex, left superior temporal cortex, posterior cingulate, left parahippocampal gyrus, and left inferior frontal gyrus (K.B. McDermott, Ojemann, et al., 1999). Other apparent discontinuities between a patient’s behavior and neurological status may occur when a pattern of behavioral impairment develops spontaneously and without physical evidence of neurological disease. 
In such cases, "hard" neurological findings (e.g., such positive physical changes on neurological examination as primitive reflexes, unilateral weakness, or spasticity) or abnormal laboratory results (e.g., protein in the spinal fluid, brain wave abnormalities, or radiologic anomalies) may appear in time as a tumor grows or as arteriosclerotic changes block more blood vessels. Occasionally a suspected brain abnormality may be demonstrated only on postmortem examination and, even then, correlative tissue changes may not always be found (A. Smith, 1962a). Moreover, well-defined brain lesions have shown up on neuroimaging (Chodosh et al., 1988) or at autopsy of persons with no symptoms of brain disease (Crystal, Dickson, et al., 1988; Phadke and
Best, 1983). The uncertain relation between brain activity and human behavior obligates the clinician to exercise care in observation and caution in prediction, and to take nothing for granted when applying the principles of functional localization to diagnostic problems. However, this uncertain relation does not negate the dominant tendencies to regularity in the functional organization of brain tissue. Knowledge of the regularity with which brain-behavior correlations occur enables the clinician to determine whether a patient’s behavioral symptoms make anatomical sense, to know what subtle or unobtrusive changes may accompany the more obvious ones, and to guide recommendations for further diagnostic procedures.
4 The Rationale of Deficit Measurement

One distinguishing characteristic of neuropsychological assessment is its emphasis on the identification and measurement of psychological—cognitive and behavioral—deficits, for it is in deficiencies and dysfunctional alterations of cognition, emotionality, and self-direction and management (i.e., executive functions) that brain disorders are manifested behaviorally. Neuropsychological assessment is also concerned with the documentation and description of preserved functions—the patient's behavioral competencies and strengths. In assessments focused on delineating neuropsychological dysfunction—whether for the purpose of making a diagnostic discrimination, evaluating legal competency or establishing a legal claim, identifying rehabilitation needs, or attempting to understand a patient's aberrant behavior—the examiner still has an obligation to patients and caregivers to identify and report preserved abilities and behavioral potentials. Yet brain damage always implies behavioral impairment. Even when psychological changes after a brain injury or concomitant with brain disease are viewed as improvement rather than impairment, as when there is a welcome increase in sociability or relief from neurotic anxiety, a careful assessment will probably reveal an underlying loss. A 47-year-old postal clerk with a bachelor's degree in education boasted of having recently become an "extrovert" after having been painfully shy most of his life. His wife brought him to the neurologist with complaints of deteriorating judgment, childishness, untidiness, and negligent personal hygiene. The patient reported no notable behavioral changes other than his newfound ability to approach and talk with people.
On examination, although many cognitive functions tested at a superior level, in accord with his academic history and his wife’s reports of his prior functioning, the patient performed poorly on tests involving immediate memory, new learning, and attention and concentration. The discrepancy between his best and poorest performances suggested that this patient had already sustained cognitive losses. A precociously developing Alzheimer-type dementia was suspected.
In some patients the loss, or deficit, may be subtle, becoming apparent only on complex judgmental tasks or under emotionally charged conditions. In others, behavioral evidence of impairment may be so slight or ill-defined as to be unobservable under ordinary conditions; only patient reports of vague, unaccustomed frustrations or uneasiness suggest the possibility of an underlying brain disorder. A 55-year-old dermatologist received a blow to the head when another skier swerved into him,
knocking him to the ground so hard that his helmet was smashed on the left side. Shortly thereafter he sought a neuropsychological consultation to help him decide about continuing to practice as he fatigued easily, had minor memory lapses, and noticed concentration problems. This highly educated man gave lower than expected performances on tests of verbal abstraction (Similarities), visual judgment (Picture Completion), and verbal recall (story and list learning), and performances were significantly poorer than expected when structuring a drawing (R-O Complex Figure) and on visual recall. Additionally, subtle deficits appeared in word search hesitations, several instances of loss of instructional set, tracking slips when concentrating on another task, and incidental learning problems which also suggested some slowed processing as delayed recall was considerably better than immediate recall (the rebound phenomenon, see p. 467). These lower than expected scores and occasionally bungled responses appeared to reflect mild acquired impairments which together were experienced as memory problems and mental inefficiency. A year later, he requested a reexamination to confirm his impression that cognitive functioning had improved. He reported an active winter of skiing which validated his feeling that balance and reflexes were normal. However, he had noticed that he missed seeing some close-at-hand objects which—when pointed out—were in plain view and usually on his left side; but he reported no difficulty driving nor did he bump into things. He wondered whether he might have a visual inattention problem. On testing, reasoning about visually presented material (Picture Completion) was now in the superior range although he had long response times, and verbal learning had improved to almost normal levels. Visual recall remained defective, but delayed visual recognition was within normal limits. 
On a visual scanning task (Woodcock-Johnson III Cognitive [WJ-III Cog], Pair Cancellation), he made eight omission errors on the left side of the page and three on the right (see Fig. 10.1, p. 428). When last year's eight operation errors on printed calculation problems (Fig. 4.1) were reviewed, it became apparent that left visuospatial inattention had obscured his awareness of the operation sign on the left of these problems, and that he continued to have a mild form of this problem. It was suspected that he had sustained a mild contre coup in the accident: mild because his acute self-awareness distinguished him from patients with large and/or deep right parietal lesions, contre coup because left visuospatial inattention implicates a right hemisphere lesion in a right-handed man.
Although the effects of brain disorders are rarely confined to a single behavioral dimension or functional system, the assessment of psychological deficit has focused on cognitive impairment for a number of reasons. First, some degree of cognitive impairment accompanies almost all brain dysfunction and is a diagnostically significant feature of many neurological disorders. Moreover, many of the common cognitive defects—aphasias, failures of judgment, lapses of memory, etc.—are likely to be noticed by casual observers and to interfere most obviously with the patient’s capacity to function independently. In addition, psychologists are better able to measure cognitive activity than any other kind of behavior, except perhaps simple psychophysical reactions and sensorimotor responses. Certainly, cognitive behavior— typically as mental abilities, skills, or knowledge—has been systematically scrutinized more times in more permutations and combinations and with more replications and controls than has any other class of behavior. Out of all these data have evolved numerous mostly reliable and well-standardized techniques for
identifying, defining, grading, measuring, and comparing the spectrum of cognitive functioning. Intelligence testing and educational testing provide the neuropsychologist with a ready-made set of operations and a well-defined frame of reference that can be fruitfully applied to deficit measurement. The deficit measurement paradigm can be used with other behavioral impairments such as personality change, reduced mental efficiency, or defective executive functioning. However, personality measurement, particularly of brain impaired individuals, has not yet achieved the community of agreement nor the levels of reliability or predictability that are now taken for granted when measuring cognitive functions. Furthermore, in clinical settings impairments in efficiency and executive functions are usually evaluated on the basis of their effect on specific cognitive activities or personality characteristics rather than studied in their own right. In the following discussion, "test" will refer only to individual tests, not batteries (such as the Wechsler Intelligence Scales [WIS]) or even those test sets, such as Digits Forward and Digits Backward, that custom has led some to think of as a single test. This consideration of individual tests comes from demonstrations of the significant intertest variability in patient performances, the strong association of different patterns of test performance with different kinds of brain pathology, the demographic and other factors which contribute to the normal range of intraindividual test score variations, and the specificity of the brain-behavior relationships underlying many cognitive functions (e.g., I. Grant and Adams, 2009, passim; Naugle, Cullum, and Bigler, 1998; G.E. Smith, Ivnik, and Lucas, 2008). Knowledge of intraindividual variations in test performances does not support the popular concept of "intelligence" as a global—or near-global—phenomenon which can be summed up in a single score (Ardila, 1999a; see p.
713), nor does it support summing scores on any two or more tests that measure different functions. Those knowledgeable about the constituent components of complex tests appreciate how combined scores can obscure the underlying data; those experienced in test performance analysis do not need combined scores.
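How a combined score can obscure the underlying data is easy to demonstrate with a small sketch. The subtest names come from the surrounding discussion, but the scaled scores below are entirely hypothetical and chosen only to make the point:

```python
# Hypothetical scaled scores (mean 10, SD 3) for two examinees.
# Patient A's profile is flat; Patient B shows the kind of marked
# scatter that can accompany focal impairment.
patient_a = {"Block Design": 10, "Picture Completion": 10, "Matrix Reasoning": 10}
patient_b = {"Block Design": 4, "Picture Completion": 13, "Matrix Reasoning": 13}

def composite(scores):
    """Average of subtest scaled scores, as a summary index would compute it."""
    return sum(scores.values()) / len(scores)

print(composite(patient_a))  # 10.0
print(composite(patient_b))  # 10.0 -- the 2-SD Block Design weakness disappears
```

Both composites are identical, yet only a look at the individual scores reveals Patient B's two-standard-deviation discrepancy, which is exactly the information a performance analysis would want.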
FIGURE 4.1 Calculations test errors (circled) made by a 55-year-old dermatologist with a contre coup from striking his head on the left. Note inattention to operation signs on subtraction and multiplication problems.

For example, WAIS-III authors (Wechsler, 1997) recommended computing a Perceptual Organization Index by combining the unweighted scores of the Block Design test (which involves abstract visual analysis, visuospatial conceptualization, and a visuomotor response, plus points for response speed) and the WIS-A Picture Completion test—which not only has no visuospatial component and requires no manipulation by the subject but has a considerable verbal loading, calls on the ability to draw upon acculturated experience, and has a rather generous time cut-off—together with a third quite different untimed test, Matrix Reasoning, of which "Correlational analyses … suggest a strong verbal mediation element" (Dugbartey et al., 1999). The most recent edition of this battery (WAIS-IV, PsychCorp, 2008) recommends combining the scores of Block Design (with response speed credits) and Matrix Reasoning (still untimed) with a rather generously timed test of visuospatial analysis to determine a composite Perceptual Reasoning scaled score.
Summary scores that are created by averaging individual test scores in a battery may be within some average range, but deviations between tests can be substantial, even within the typically developing, healthy population (L.M. Binder, Iverson, and Brooks, 2009; B.L. Brooks, Strauss, et al., 2009; Schretlen, Testa, et al., 2008). Accordingly, if one only relies on examining test scores and their deviations without taking into consideration all of the relevant clinical, historical, and observational data in evaluating a patient, misclassification can become a considerable problem (B.L. Brooks, Iverson, and White, 2007; G.E. Smith, Ivnik, and Lucas, 2008). One last caveat: Twenty-first century neuropsychologists have many tests and assessment techniques at their disposal. Commercially available tests are
often updated and renormed, making it impossible for authors of a book such as this to review all of the most recently updated published tests. Fortunately, in most cases earlier versions of the test are very similar—if not identical—to the latest version so that a review and comments on earlier versions have direct relevance for the most current one. Unfortunately, some new test revisions may carry the same name but with significant item, scoring, or norming differences; and newly published batteries may include some tests quite different from those in previous editions while omitting others (Loring and Bauer, 2010). These changes—sometimes subtle, sometimes not—make it incumbent upon test users to compare and recognize when test data may be interchangeable and when they are not.

COMPARISON STANDARDS FOR DEFICIT MEASUREMENT

The concept of behavioral deficit presupposes some ideal, normal, or prior level of functioning against which the patient's performance may be measured. This level, the comparison standard, may be normative (derived from an appropriate population) or individual (derived from the patient's history or present characteristics), depending on the patient, the behavior being evaluated, and the assessment's purpose(s). Neuropsychological assessment uses both normative and individual comparison standards for measuring deficit, as appropriate for the function or activity being examined and the purpose of the examination. Examiners need to be aware of judgmental biases when estimating premorbid abilities (Kareken, 1997).
Normative Comparison Standards

The population average
The normative comparison standard may be an average or middle (median) score. For adults, the normative standard, or “norm,” for many measurable psychological functions and characteristics is a score representing the average or median performance of some more or less well-defined population, such as white women or college graduates over 40. For many cognitive functions, variables of age and education or vocational achievement may significantly affect test performance. With test developers’ growing sophistication, these variables are increasingly taken into account in establishing test norms for adults. The measurement of children’s behavior is concerned with abilities and traits that change with age, so the normative standard may be the average age or grade at which a given trait or function appears or reaches some criterion
level of performance (e.g., Binet and Simon, 1908). Because of the differential rate of development for boys and girls, children’s norms are best given separately for each sex. Since so many tests have been constructed for children in education and training programs, normative standards based on either average performance level or average age when performance competence first appears are available for a broad range of cognitive behaviors: from simple visuomotor reaction time or verbal mimicry to the most complex activities involving higher mathematics, visuospatial conceptualization, or sophisticated social judgments (Urbina, 2004; see, e.g., normative tables in Woodcock-Johnson III [Woodcock, McGrew, and Mather, 2001c]). Norms based on averages or median scores have also been derived for social behaviors, such as frequency of church attendance or age for participation in team play; for vocational interests, such as medicine or truck driving; or for personality traits, such as assertiveness or hypochondria. In neuropsychological assessment, population norms are most useful in evaluating basic cognitive functions that develop throughout childhood. They can be distinguished from complex mental abilities or academic skills when examined as relatively pure functions. Many tests of memory, perception, and attention and those involving motor skills fall into this category (e.g., see Dodrill, 1999; J.M. Williams, 1997). Typically, performances of these capacities do not distribute normally; i.e., the proportions and score ranges of persons receiving scores above and below the mean are not statistically similar as they are in normal distributions (e.g., Benton, Hamsher, and Sivan, 1994; B. Johnstone, Slaughter, Schopp, et al., 1997; Stuss, Stethem, and Pelchat, 1988). 
Moreover, the overall distribution of scores for these capacities tends to be skewed in the substandard direction as a few persons in any randomly selected sample can be expected to perform poorly, while nature has set an upper limit on such aspects of mental activity as processing speed and short-term storage capacity. Functions most suited to evaluation by population norms also tend to be age-dependent, particularly from the middle adult years onward, necessitating the use of age-graded norms (Baltes and Graf, 1996; Lezak, 1987a). Education also contributes to performance on these tests and needs to be taken into consideration statistically, clinically, or both (e.g., Heaton, Ryan, and Grant, 2009; Mitrushina, Boone, and D’Elia, 1999, passim). Population norms may be applicable to tests that are relatively pure (and simple) measures of the function of interest (e.g., see Hannay, 1986): As the number of different kinds of variables contributing to a measure increases, the more likely will that measure’s distribution approach normality (Siegel, 1956).
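The statistical point here—that a measure built from many contributing variables tends toward a normal distribution even when its components are skewed—can be shown with a short simulation. The exponential components and sample sizes below are arbitrary illustrative choices, not a model of any particular test:

```python
import random

random.seed(1)  # fixed seed so the simulation is reproducible

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

# A single skewed "component" score: most values near one end,
# a long tail in the other direction, as the text describes.
single = [random.expovariate(1.0) for _ in range(5000)]

# A "complex" measure: the sum of ten such independent components.
summed = [sum(random.expovariate(1.0) for _ in range(10)) for _ in range(5000)]

print(round(skewness(single), 2))  # strongly skewed
print(round(skewness(summed), 2))  # much closer to zero, i.e., to normality
```

The summed measure's skewness shrinks roughly by the square root of the number of components, which is the same mechanism that pushes multi-determined test scores toward the normal curve.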
The distributions of the WIS-A summed IQ scores (for the Verbal Scale [VSIQ], the Performance Scale [PSIQ], and both scales together, i.e., the Full Scale [FSIQ]) or scores on tests involving a complex of cognitive functions (e.g., Raven's Progressive Matrices) demonstrate this statistical phenomenon.

Species-wide performance expectations
The norms for some psychological functions and traits are actually specieswide performance expectations for adults, although for infants or children they may be age or grade averages. This is the case for all cognitive functions and skills that follow a common course of development, that are usually fully developed long before adulthood, and that are taken for granted as part and parcel of the normal adult behavioral repertory. Speech is a good example. The average two-year-old child speaks in two- and three-word phrases. The ability to communicate verbally most needs and thoughts is expected of four- and five-year-olds. Seventh- and eighth-grade children can utter and comprehend word groupings in all the basic grammatical forms and their elaborations. Subsequent speech development mainly involves more variety, elegance, abstractness, or complexity of verbal expression. Thus, the adult norm for speech is the intact ability to communicate effectively by speech, which all but a few adults can do. Some other skills that almost all neurologically intact adults can perform are counting change, drawing a recognizable person, and using simple construction tools or cooking utensils. Each of these skills is learned, improves with practice, has a common developmental history for most adults, and is sufficiently easy that its mastery or potential mastery is taken for granted. Anything less than an acceptable performance in an adult raises the suspicion of impairment. Many species-wide capacities, although not apparent at birth, are manifested relatively early and similarly in all intact persons. Their development appears to be essentially maturational and relatively independent of social learning, although training may enhance their expression and aging may dull it.
These include capacities for motor and visuomotor control and coordination; basic perceptual discriminations—e.g., of color, pattern, and form; of pitch, tone, and loudness; and of orientation to personal and extrapersonal space. Everyday life rarely calls upon the pure expression of these capacities. Rather, they are integral to the complex behaviors that make up the normal activities of children and adults alike. Thus, in themselves these capacities are usually observed only by deliberate examination. Other species-wide normative standards involve components of behavior so rudimentary that they are not generally thought of as psychological functions
or abilities. Binaural hearing, or the ability to localize a touch on the skin, or to discriminate between noxious and pleasant stimuli are capacities that are an expected part of the endowment of each human organism, present at birth or shortly thereafter. These capacities are not learned in the usual sense, nor, except when impaired by accident or disease, do they change over time and with experience. Some of these species-wide functions, such as fine tactile discrimination, are typically tested in the neurological examination (e.g., Ropper and Samuels, 2009; Simon, Greenberg, and Aminoff, 2009; Strub and Black, 2000). Neuropsychological assessment procedures that test these basic functions possessed by all intact adults usually focus on discrete acts or responses and thus may identify the defective components of impaired cognitive behavior (e.g., A.-L. Christensen, 1979; Luria, 1999). However, examinations limited to discrete components of complex functions and functional systems provide little information about how well the patient can perform the complex behaviors involving component defects. Moreover, when the behavioral concomitants of brain damage are mild or subtle, particularly when associated with widespread or diffuse rather than well-defined lesions, few if any of these rudimentary components of cognitive behavior will be demonstrably impaired on the basis of species-wide norms.

Customary standards
A number of assumed normative standards have been arbitrarily set, usually by custom. Probably the most familiar of these is the visual acuity standard: 20/20 vision does not represent an average but an arbitrary ideal, which is met or surpassed by different proportions of the population, depending on age. Among the few customary standards of interest in neuropsychological assessment is verbal response latency—the amount of time a person takes to answer a simple question—which has normative values of one or two seconds for informal conversation in most Western cultures.

Applications and limitations of normative standards
Normative comparison standards are useful for most psychological purposes, including the description of cognitive status for both children and adults, for educational and vocational planning, and for personality assessment. In the assessment of persons with known or suspected adult-onset brain pathology, however, normative standards are appropriate only when the function or skill or capacity that is being measured is well within the capability of all intact
adults and does not vary greatly with age, sex, education, or general mental ability. Thus, the capacity for meaningful verbal communication will be evaluated on the basis of population norms. In contrast, vocabulary level, which correlates highly with both social class and education (Heaton, Ryan, and Grant, 2009; Rabbitt, Mogapi, et al., 2007; Sattler, 2001), needs an individual comparison standard. When it is known or suspected that a patient has suffered a decline in cognitive abilities that are normally distributed in the adult population, a description of that patient's functioning in terms of population norms (i.e., by standard test scores) will, in itself, shed no light on the extent of impairment unless there was documentation of premorbid cognitive levels (in school achievement tests or army placement examinations, for example). For premorbidly dull patients, low average scores would not indicate a significant drop in the level of examined functions. In contrast, an average score would represent a deficit for a person whose premorbid ability level had been generally superior (see p. 136 for a statistical interpretation of ability categories). Moreover, comparisons with population averages do not add to the information implied in standardized test scores, for standardized test scores are themselves numerical comparisons with population norms. Thus, when examining patients for adult-onset deficits, only by comparing present with prior functioning can the examiner identify real losses. The first step in measuring cognitive deficit in an adult is to establish—or estimate, when direct information is not available—the patient's premorbid performance level for all of the functions and abilities being assessed. For those functions with species-wide norms, this task is easy. Adults who can no longer name objects or copy a simple design or who appear unaware of one side of their body have an obvious deficit.
For normally distributed functions and abilities for which the normative standard is an average, however, only an individual comparison provides a meaningful basis for assessing deficit. A population average is not an appropriate comparison standard since it will not necessarily apply to the individual patient. By definition, one-half of the population will achieve a score within the average range on any well-constructed psychological test which generates a normal distribution of scores; the remainder perform at many different levels both above and below the average range. Although an average score may be, statistically, the most likely score a person will receive, statistical likelihood is a far cry from the individual case.
Individual Comparison Standards

As a rule, individual comparison standards are called for whenever a psychological trait or function that is normally distributed in the intact adult population is evaluated for change. This rule applies to both deficit measurement and the measurement of behavioral change generally. When dealing with functions for which there are species-wide or customary norms—such as finger-tapping rate or accuracy of auditory discrimination—normative standards are appropriate for deficit measurement. Yet even these kinds of abilities change with age and, at some performance levels, differ for men and women, thus requiring demographic norming. Moreover, there will always be exceptional persons for whom normative standards are not appropriate, as when evaluating finger tapping speed of a professional pianist after a mild stroke. The use of individual comparison standards is probably most clearly exemplified in rate of change studies, which depend solely on intraindividual comparisons. Here the same set of tests is administered three times (three data points are needed to establish a trajectory) or more at spaced intervals, and the differences between chronologically sequential pairs of test scores are compared. In child psychology the measurement of rate of change is necessary for examining the rate of development. Rate of change procedures also have broad applications in neuropsychology (Attix et al., 2009). Knowledge of the rate at which the patient's performance is deteriorating can contribute to the accuracy of predictions of the course of a degenerative disease (e.g., see M. Albert et al., 2007; Mickes et al., 2007).
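The trajectory logic—three or more data points, with differences compared across sequential examinations—amounts to estimating a rate of change over time. A minimal sketch, using hypothetical scores from three annual examinations (the numbers are invented for illustration only):

```python
# Hypothetical memory-test scores from three annual examinations.
# Three data points are the minimum needed to establish a trajectory.
years = [0, 1, 2]
scores = [52, 47, 41]

# Ordinary least-squares slope: the average points lost (or gained) per year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(scores) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, scores)) / \
        sum((x - mean_x) ** 2 for x in years)

print(f"Estimated rate of change: {slope:.1f} points per year")
```

With more than three occasions the same least-squares formula applies unchanged; a clinician would of course interpret such a slope only alongside practice effects, measurement error, and the qualitative data the chapter emphasizes.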
For purposes of rehabilitation, the rate at which cognitive functions improve following cerebral insult may not only aid in predicting the patient's ultimate performance levels but also provide information about the effectiveness of rehabilitative efforts (Babikian and Asarnow, 2009; Leclercq and Sturm, 2002; van Balen et al., 2002). Further, rate of change studies contribute to understanding the long-range effects of brain injury on mental abilities (see Attix et al., 2009).

THE MEASUREMENT OF DEFICIT

For most abilities and skills that distribute normally in the population at large, determination of deficits rests on the comparison between what can be assumed to be the patient's characteristic premorbid level of cognitive functioning, as determined from historical data (including old test scores when available), and the obtained test scores and qualitative features of the test performance, evaluated in the context of presenting problems, recent history, patient behavior, and knowledge of patterns of neuropsychological impairment (see pp. 175–177). Thus, much of clinical neuropsychological assessment involves intraindividual comparisons of the abilities, skills, and relevant behaviors under consideration.
Direct Measurement of Deficit

Deficit can be assessed directly when the behavior in question can be compared against normative standards. The extent of the discrepancy between the level of performance expected for an adult and the level of the patient's performance (which may be given in terms of the age at which the average child performs in a comparable manner) provides one measure of the amount of deficit the patient has sustained. For example, the average six-year-old will answer 22 to 26 items correctly on the Verbal Comprehension test of the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-III Cog). The test performance of an adult who completed high school but can do no better could be reported as being "at the level of a six-year-old" on word knowledge. Determination of whether such a low score represents a neurologically based deficit or occurred on some other basis will depend on the overall pattern of test scores and how they fit in with known history and clinical observations. Direct deficit measurement using individual comparison standards can be a simple, straightforward operation: the examiner compares premorbid and current examples of the behavior in question and evaluates the discrepancies. Hoofien, Vakil, and Gilboa's (2000) study of cognitive impairment following brain injuries (mostly due to trauma) illustrates this procedure. They compared the scores that army veterans made on tests taken at the time of their induction into service with scores obtained on the Wechsler Adult Intelligence Scale-Revised (WAIS-R) postinjury, approximately 13 years later. The findings of this direct comparison provided unequivocal evidence of cognitive impairment. Baade and Schoenberg (2004) recommend using standardized group test data that often can be found in school records.
Because circumstances in children’s lives (e.g., parental discord, a new foster home) and short-lived events (e.g., a cold on test day) can significantly affect children’s performances, I use the cluster of highest scores on academic subjects to aid in estimating premorbid ability (mdl). The direct method using individual comparison standards requires the availability of premorbid test scores, school grades, or other relevant
observational data. In many cases, these will be nonexistent or difficult to obtain. Therefore, more often than not, the examiner must use indirect methods of deficit assessment from which individual comparison standards can be inferred.
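The direct comparison just described reduces to a discrepancy computation: given a documented premorbid score and a current score on the same (or a co-normed) metric, the deficit estimate is their difference, usefully expressed in standard deviation units. The scores below and the mean-100/SD-15 metric are hypothetical placeholders.

```python
def deficit_discrepancy(premorbid_score, current_score, sd=15.0):
    """Discrepancy between a documented premorbid score and the current
    score, in raw standard-score points and in SD units. Assumes both
    scores share one metric (mean 100, SD 15 here)."""
    points = premorbid_score - current_score
    return points, points / sd

# Hypothetical case modeled on the induction-to-postinjury comparison:
# premorbid score 112, current score 91.
points, sds = deficit_discrepancy(112, 91)
print(points)          # 21
print(round(sds, 2))   # 1.4 -> current performance ~1.4 SD below premorbid level
```

The arithmetic is trivial; the clinical work lies in establishing that the two scores are genuinely comparable, which is exactly why documented premorbid test data are so valuable.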
Indirect Measurement of Deficit

In indirect measurement, the examiner compares the present performance with an estimate of the patient's original ability level. This estimate may be drawn from a variety of sources. It is the examiner's task to find meaningful and defensible estimates of the pretraumatic or premorbid ability levels to serve as comparison standards for each patient. Different methods of inferring the comparison standard have been applied with varying degrees of success (Axelrod, Vanderploeg, and Schinka, 1999; M.R. Basso, Bornstein, Roper, and McCoy, 2000; Hoofien, Vakil, and Gilboa, 2000; B. Johnstone, Slaughter, et al., 1997; R.T. Lange and Chelune, 2007; McFarlane et al., 2006). Historical and observational data are obvious sources of information from which estimates of premorbid ability may be drawn directly. Estimates based on these sources will be more or less satisfactory depending on how much is known of the patient's past, and on whether what is known or can be observed is sufficiently characteristic to distinguish this patient from other people. For example, if all that an examiner knows about a brain injured, cognitively impaired patient is that he was a logger with a ninth-grade education, and his observed vocabulary and interests seem appropriate to his occupation and education, then the examiner can only estimate a barely average ability level as the comparison standard. If the patient had been brighter than most, could reason exceptionally well, could tell stories cleverly, or had been due for a promotion to supervisor, this information would probably not be available to the examiner, who would then have no way of knowing from history and observations alone just how bright this particular logger had been. Premorbid ability estimates inferred from historical and observational data alone can thus be spuriously low.
Moreover, some patient self-reports may be inflated (Greiffenstein, Baker, and Johnson-Greene, 2002), invoking what has been referred to as the "Good Old Days" bias (Iverson, Lange, et al., 2010). Yet the need for accurate estimates has become increasingly apparent, especially in evaluating complaints of mental deterioration in older persons (Almkvist and Tallberg, 2009; Starr and Lonie, 2008; Yuspeh, Vanderploeg, and Kershaw, 1998). In response to this need, neuropsychologists have devised a number of distinctive methods for making these estimates. Most techniques for indirect assessment of premorbid ability rely on cognitive test scores, on extrapolation from current reading ability, on demographic variables, or on some combination of these. In reviewing these methods it is important to appreciate that, without exception, the comparison standard for evaluating them has been the three WIS-A IQ scores or just the FSIQ. That the FSIQ as a criterion has its own problems becomes apparent when subjects' cognitive functioning is not impaired, yet they have a significant neurobehavioral disorder (e.g., P.W. Burgess, Alderman, Volle, et al., 2009). In these cases, when the estimate is derived only from the several highest Wechsler test scores, the average of all test scores (i.e., the FSIQ) will of necessity be lower than the derived estimate (excepting, of course, when the test score range covers no more than two points). Moreover, the FSIQ will necessarily underrepresent the premorbid level of functioning when patients have cognitive compromise in areas tested by the WIS-A.

Mental ability test scores for estimating premorbid ability
A common feature of estimation techniques based on test scores is that the premorbid ability level is estimated from the scores themselves. For many years a popular method for estimating premorbid ability level from test performance used a vocabulary score as the single best indicator of original intellectual endowment (Yates, 1954). This method was based on observations that many cognitively deteriorating patients retained old, well-established verbal skills long after recent memory, reasoning, arithmetic ability, and other cognitive functions were severely compromised. Moreover, of all the Wechsler tests, Vocabulary correlates most highly with education, which also can be a good indicator of premorbid functioning (Heaton, Ryan, and Grant, 2009; B. Johnstone, Slaughter, et al., 1997; Tremont et al., 1998). An example of this method uses the Shipley Institute of Living Scale (SILS), which contains a multiple-choice (testing recognition rather than recall) vocabulary section and verbal reasoning items (Shipley and Burlingame, 1941). The SILS authors expected that mentally deteriorated persons would show marked discrepancies between their vocabulary and reasoning scores (see p. 735). A large-scale study of 889 persons 60–94 years old provides reference data on cumulative percentile ranks, normalized T scores, and WASI-R (see p. 734) equivalent FSIQ scores for SILS Vocabulary test scores from 19 or less to a maximum score of 40. The authors concluded that the SILS Vocabulary scores provide a reasonable estimate of premorbid ability in evaluations with elderly individuals, including those with suspected mild or moderate dementia. David Wechsler and others used the same principle to devise "deterioration ratios," which mostly compared scores on vocabulary and other verbally weighted tests with performance on tests sensitive to attentional deficits and visuomotor slowing (see p. 423). On the assumption that certain cognitive skills will hold up for most brain damaged persons, McFie (1975)—and later, Krull and colleagues (1995)—proposed that the sturdiest tests in Wechsler's scales are Vocabulary and Picture Completion, both involving verbal skills. The average of the two scores, or the higher score should one of them be markedly depressed, becomes the estimated premorbid IQ score when evaluated with demographic data (Krull et al., 1995, see p. 95; also see Axelrod, Vanderploeg, and Schinka, 1999). Vanderploeg and Schinka (1995) pointed out the obvious when observing that Verbal Scale tests predict VSIQ best and that Performance Scale tests predict PSIQ best: in a series of regression equations combining the individual WAIS-R tests with demographic data (age, sex, race, education, occupation), Information and Vocabulary estimated VSIQ and FSIQ best, and Block Design, Picture Completion, and Object Assembly gave the best estimates of PSIQ.

General Ability Index-Estimate (GAI-E). These formulas were originally derived on the WAIS-III standardization population to estimate premorbid GAI scores (p. 714; Prifitera et al., 2008; Tulsky, Saklofske, Wilkins, et al., 2001).
A set of regression algorithms developed for Canadian users from demographic variables (age, education, ethnicity, country region, and gender) and pairs of WAIS-III test scores found that Matrix Reasoning combined with either Vocabulary (VO) or Information (IN) produced the best estimate of the WAIS-III GAI; without Matrix Reasoning (MR), either Verbal Scale test, combined with the demographic data, generated equally high correlations with the Verbal Comprehension Index; the algorithm using Matrix Reasoning alone had lower predictive value overall but was the best predictor of the Perceptual Organization Index (R.T. Lange, Schoenberg, Duff, et al., 2006). These findings held for a sample of 201 "neurological dysfunction" patients (of whom 44 were diagnosed as schizophrenic) when VO or IN were greater than MR (Schoenberg et al., 2006). Larrabee, Largen, and Levin (1985) found that other Wechsler tests purported to be resilient (e.g., Information and Picture Completion) were as vulnerable to the effects of dementia as those Wechsler regarded as sensitive to mental deterioration (see also Loring and Larrabee, 2008). Moreover, the Similarities test, which Wechsler (1958) listed as vulnerable to brain dysfunction, held up best (in both WAIS and WAIS-R versions) when given to
neuropsychologically impaired polysubstance abusers (J.A. Sweeney et al., 1989). Vocabulary and related verbal skill scores sometimes do provide the best estimates of the general premorbid ability level (R.T. Lange, Schoenberg, et al., 2006). However, vocabulary tests such as those in the Wechsler batteries require oral definitions and thus tend to be more vulnerable to brain damage than verbal tests that can be answered in a word or two, require only recognition, or call on practical experience. Further, many patients with left hemisphere lesions suffer deterioration of verbal skills, which shows up in relatively lower scores on more than one test of verbal function. Aphasic patients have the most obvious verbal disabilities; some are unable to use verbal symbols at all. Some patients with left hemisphere lesions are not technically aphasic, but their verbal fluency is sufficiently depressed that vocabulary scores do not provide good comparison standards.

Word reading tests for estimating premorbid ability
National Adult Reading Test (NART).1 Seeking to improve on vocabulary-based methods of estimating the cognitive deterioration of patients with diffusely dementing conditions, H.E. Nelson (1982; H.E. Nelson and Willison, 1991) and Crawford (with Parker, Stewart, et al., 1989; with Deary et al., 2001) proposed that scores on the NART can reliably estimate the comparison standard, i.e., premorbid ability level (see review by Bright et al., 2002). The NART requires oral reading of 50 phonetically irregular words varying in frequency of use (Table 13.6, p. 562). Of course, this technique can only be used with languages, such as English or French, in which the spelling of many words is phonetically irregular (Mackinnon and Mulligan, 2005). In essence, these word reading tests provide an estimate of vocabulary size. Correlations of NART-generated IQ score estimates with the WAIS and the WAIS-R (British version) FSIQ have run in the range of .72 (H.E. Nelson, 1982) to .81 (Crawford, Parker, Stewart, et al., 1989). VSIQ correlations with the British WAIS-R are a little higher; PSIQ correlations are considerably lower—a pattern seen in all subsequent studies using word test performance for estimating premorbid ability. The NART and the British WAIS-R were given to 179 77-year-olds who, at age 11, had taken a "group mental ability test" (presumably a paper-and-pencil administration) (Crawford, Deary, et al., 2001). The NART IQ score estimates were in the same range as the early test scores (r = .73). As a cautionary note, Schretlen, Buffington, and colleagues (2005), while replicating the NART-IQ score relationships, showed that NART correlations with other cognitive domains are significantly lower than with IQ
scores, limiting the usefulness of NART estimates for abilities such as executive, memory, visuospatial, and perceptual-motor functions.

North American Adult Reading Test (NAART).2 This format was developed for U.S. and Canadian patients (E. Strauss, Sherman, and Spreen, 2006). It has been examined in several clinical populations (S.L. Griffin et al., 2002; B. Johnstone, Callahan, et al., 1996; Uttl, 2002). The 61-word list contains 35 of the original NART words (Table 4.1). While the NAART scores correlate reasonably well with the WAIS-R VSIQ (r = .83), the correlation with the FSIQ (r = .75) leaves a great deal of unaccounted variance and "the test … is relatively poor at predicting PIQ" (E. Strauss, Sherman, and Spreen, 2006, p. 196). It is of interest that for this verbal skill test the mean number of words correctly pronounced steadily increased from 38.46 ± 9.29 at ages 18–25 to 43.55 ± 8.84 at ages 70–80 (E. Strauss, Sherman, and Spreen, 2006, p. 194).

TABLE 4.1 North American Adult Reading Test (NAART): Word List
Source. From Spreen and Strauss (1998).
American National Adult Reading Test (ANART). A 50-word version of the NART was developed to be more appropriate for the ethnically heterogeneous U.S. population (Gladsjo, Heaton, et al., 1999). It shares 28 words with the North American Adult Reading Test (NAART). The ANART enhanced premorbid estimates for predominantly verbal tests to a limited degree, but made no useful contribution to estimates of either the PSIQ or scores of other tests with relatively few verbal components.

AMNART. This 45-word "American version" of the NART has proven sensitive to the developing semantic deficits of patients with early Alzheimer-type dementia (Storandt, Stone, and LaBarge, 1995; E. Strauss, Sherman, and Spreen, 2006). Mayo norms (Mayo's Older American Normative Studies) for 361 healthy persons in 11 age ranges from 56 to 97 included AMNART data (Ivnik, Malec, Smith, et al., 1996). In contrast to preclinical decline in memory and executive functions, AMNART performance remains stable in the preclinical stages of Alzheimer's disease (Grober, Hall, et al., 2008); but at clinical stages it reflects the semantic decline associated with degenerative disease (K.I. Taylor et al., 1995).

Wide Range Achievement Test-Word Reading (WRAT-READ). The Word Reading section of the WRAT-4 presents 55 words that are not all phonetically irregular (Wilkinson and Robertson, 2006; see p. 563). It was developed on the same principle as the NART tests, with words ordered from more to less frequently used to evaluate reading level. The 4th edition is sufficiently similar to older ones (e.g., Wilkinson, 1993) to allow the assumption that much of the past research with earlier versions will apply to the most current. Likewise, this use of the WRAT-READ in neuropsychology applies regardless of which version is used because of the stability of reading performance in normal, typically developing or aging individuals (Ashendorf et al., 2009). For African Americans in the 56 to 94 age range, the Mayo group has published WRAT-3 norms (Lucas, Ivnik, Smith, et al., 2005). WRAT-READ has been effective in estimating premorbid abilities for patients with TBI (B.
Johnstone, Hexum, et al., 1995), drug abuse (Ollo, Lindquist, et al., 1995), schizophrenia (Weickert et al., 2000), and Huntington's disease (J.J. O'Rourke et al., 2011). Studies of its effectiveness in estimating premorbid mental ability have produced findings similar to those for the NART and its variants (B. Johnstone and Wilhelm, 1996; Kareken, Gur, and Saykin, 1995), including the NART-R (K.B. Friend and Grattan, 1998). In comparisons of the NART-R and WRAT-READ, Wiens, Bryan, and Crossen (1993) reported that the former test best estimated their cognitively intact subjects whose FSIQ scores were in the 100–109 range, while consistently overestimating those whose FSIQ scores fell below 100 and underestimating the rest; WRAT-READ's estimations were more accurate in predicting lower FSIQ scores, but its underestimations of average and better FSIQ scores were even greater than for the NART-R. This pattern was confirmed in a subsequent study using WRAT-READ and the North American Adult Reading Test (B. Johnstone, Callahan, et al., 1996). For neurologically impaired patients, a comparison of the NAART and WRAT-READ found that while both "are appropriate estimates of premorbid verbal intelligence," the NAART had standardization and range limitations while WRAT-READ provided a better estimate of the lower ranges of the VSIQ, making WRAT-READ more applicable to the population "at higher risk for TBI" (B. Johnstone, Callahan, et al., 1996). J.D. Ball and colleagues (2007) caution that the WRAT-3 Reading Test can be used as an estimate of premorbid ability only as long as it is not applied to persons with learning disabilities or used to provide estimations in the superior range.

Wechsler Test of Adult Reading (WTAR). This list of 50 phonetically irregular words was developed by the Wechsler enterprise for estimating premorbid "intellectual functioning," using the same norm set as the WAIS-III and WMS-III (The Psychological Corporation, 2001). The performance of the WTAR has been examined with TBI patients and with elderly persons at varying levels of cognitive competence. R.E.A. Green and coworkers (2008) reported that the WTAR score was stable for 24 severely injured persons at two and five months postinjury and closely approximated premorbid ability estimates based on demographic variables. However, another study found that severely injured TBI patients' WTAR scores were significantly lower than those of patients with mild or moderate injuries, suggesting that WTAR scores underestimate premorbid ability (Mathias et al., 2007). In a comparison of the WTAR with the NART, Spot-the-Word (a lexical decision task; see pp.
110–111), a test of contextual reading, and demographic estimates, premorbid estimates based on scores for the phonetically irregular word tests were lower than those for Spot-the-Word (McFarlane et al., 2006).

Word reading tests as predictors of premorbid ability: variables and validity issues. Correlations between these word reading tests and the criterion tests (mostly WIS-A IQ scores) tend to be directly related to education level (Heaton, Ryan, and Grant, 2009; B. Johnstone, Slaughter, et al., 1997; Maddrey et al., 1996). Some studies that dealt with subjects in the early to middle adult years reported insignificant NART/NAART × age correlations (e.g., Blair and Spreen, 1989; Wiens, Bryan, and Crossen, 1993). However, when subjects' age range extends across several age cohorts into old age, age effects emerge (Heaton, Ryan, and Grant, 2009; E. Strauss, Sherman, and Spreen, 2006). Age effects just barely reached significance (r = -.18) for a broad subject sample (ages 17–88); yet when the much stronger correlations for education (r = .51) and social class (r = -.36) were partialled out, the small age effects were nullified (Crawford, Stewart, Garthwaite, et al., 1988). Kareken, Gur, and Saykin (1995) reported significant correlations between race (whites, African Americans) and all three WAIS-R IQ scores and WRAT-READ scores. They questioned whether "quality of education may be a mitigating factor," but did not consider the pronunciation differences between "Black English" and standard American English. By and large, the findings of studies on this technique have shown that when attempting to predict the VSIQ and FSIQ scores of cognitively intact persons from their reading level, these tests are fairly accurate (Crawford, Deary, et al., 2001; J.J. Ryan and Paolo, 1992; Wiens, Bryan, and Crossen, 1993). Regardless of which WIS-A edition is used, correlations between NART/NAART or WRAT-READ scores and VSIQ tend to be highest; FSIQ correlations are typically a little lower but still account for a large portion of the variance, while PSIQ correlations are too low for the reading test scores to be predictive of anything. Moreover, the greater the actual IQ score deviation from 100, the more discrepant are estimates by the NART or one of its variants: "there is truncation of the spread of predicted IQs on either end of the distribution leading to unreliable estimates for individuals at other than average ability levels" (E. Strauss, Sherman, and Spreen, 2006, p. 195). Furthermore, reading test scores tend to decline when these tests are given to dementing patients (J.R. Crawford, Millar, and Milne, 2001; B. Johnstone, Callahan, et al., 1996; McFarlane et al., 2006), though typically less than IQ scores (Maddrey et al., 1996).
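The partialling logic behind the Crawford et al. finding can be illustrated with the standard first-order partial correlation formula. Only the age–NART and education–NART values below are taken from the text; the age-by-education correlation is hypothetical (it is not reported here), so the result is illustrative, not a reproduction of the study.

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_age_nart = -0.18   # reported zero-order age effect
r_edu_nart = 0.51    # reported education correlation
r_age_edu = -0.30    # HYPOTHETICAL: older cohorts with less schooling

# Age-NART correlation after controlling for education shrinks toward zero
r_partial = partial_corr(r_age_nart, r_age_edu, r_edu_nart)
print(round(r_partial, 3))   # ~ -0.033: the small age effect largely disappears
```

The same formula applied again with social class as the control variable would remove the remaining shared variance, which is the sense in which the reported age effect was "nullified."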
This method has been questioned as underestimating the premorbid ability of dementia patients, with the degree of underestimation fairly directly related to the severity of dementia (Stebbins, Wilson, et al., 1990); underestimation has been reported both for mildly demented patients with linguistic deficits (Stebbins, Gilley, et al., 1990) and for those more severely demented (E. Strauss, Sherman, and Spreen, 2006). For 20 elderly and neurologically impaired patients whose mean education was 8.8 ± 3 years, all three WAIS-R IQ scores (78.8 to 83.7) were significantly lower than the NART estimates (from 93 to 95.2) (J.J. Ryan and Paolo, 1992). Yet, despite "mild" declines in NART-R scores, Maddrey and his colleagues (1996) recommended its use for dementing patients, even those whose deterioration is "more advanced." However, Schretlen, Buffington, and their coworkers (2005) caution against generalizing NART-R findings as a premorbid estimate of other cognitive abilities, as the relationships of the NART-R to many premorbid cognitive measures (e.g., tests of memory and learning, visuomotor tracking efficiency, fluency) are weaker than the NART-R relationship to premorbid Wechsler IQ scores.
Correlations of the NART with the three Wechsler IQ scores were a little lower for an English-speaking South African population than for U.K. subjects (Struben and Tredoux, 1989). This discrepancy suggests that a language test standardized on one population may not work as well with another in which small differences in language have evolved over time.

Other word-based tests for estimating premorbid ability. Appreciating that many elderly persons, especially those suffering stroke or early stage dementia, are limited in their ability for oral reading, some examiners have turned to reading recognition tests to aid in the assessment of premorbid ability. The most commonly cited test, Spot-the-Word (STW), is one of two tests in The Speed and Capacity of Language Processing Test, developed to evaluate cognitive slowing following brain damage (Baddeley, Emslie, and Nimmo-Smith, 1993; pp. 110–111). The subject's task is to identify the real word in each of 60 pairings of a word and a nonword (e.g., primeval-minadol). The test manual provides norms up to age 60. Crowell and his colleagues (2002) computed cumulative percentiles for 466 persons in the 60 to 84 age range. Yuspeh and Vanderploeg (2000) reported significant correlations with other tests used for estimating premorbid ability (AMNART, r = .56; SILS Voc, r = .66; WAIS-R Voc, r = .57), while correlations with a word learning test and the Symbol Digit Modalities Test were insignificant. Both studies found significant effects for education and none for gender. Crowell's group reported a significant but small effect for age; Yuspeh and Vanderploeg's (2000) small sample (61 healthy elderly) generated no age effects. Mackinnon and Christensen (2007) review the STW for its clinical utility.
A more recent alternative to oral reading tests, the Lexical Orthographic Familiarity Test (LOFT), also uses a paired forced-choice format (Leritz et al., 2008), but the choice here is between words on the Wechsler Test of Adult Reading (WTAR) list and same-length archaic and very unfamiliar English words (e.g., aglet, paletot). A comparison of the performances of 35 aphasic patients on the WTAR and the LOFT found that the patients scored higher on the LOFT than on the WTAR. For a healthy control group, both tests correlated significantly with education, but for the aphasic group only the LOFT's correlation with education was significant. The authors especially recommend this test for language-impaired persons.

Demographic variable formulas for estimating premorbid ability
One problem with word-reading scores is their vulnerability to brain disorders, especially those involving verbal abilities; one advantage of demographic variables is their independence from the patient's neuropsychological status at the time of examination. In questioning the use of test score formulas for estimating premorbid ability (specifically, WIS-A FSIQ scores), R.S. Wilson, Rosenbaum, and Brown (1979; also in Rourke, Costa, et al., 1991) devised the first formula using demographic variables (age, sex, race, education, and occupation) to make this estimation. This formula predicted only two-thirds of 491 subjects' WAIS FSIQ scores within a ten-point error range; most of the larger prediction errors occurred at the high and low ends of the sample, overpredicting high scores and underpredicting low ones (Karzmark, Heaton, et al., 1985; also in Rourke, Costa, et al., 1991). Recognizing the need for ability estimates geared to the WAIS-R, Barona, Reynolds, and Chastain (1984) elaborated on Wilson's work by incorporating the variables of geographic region, urban-rural residence, and handedness into the estimation formula. They devised three formulas for predicting each of the WAIS-R IQ scores. These authors did not report the amount and extent of prediction errors produced by their formulas but cautioned that, "where the premorbid Full Scale IQ was above 120 or below 69, utilization of the formuli [sic] might result in a serious under- or over-estimation, respectively" (p. 887). Other studies evaluating both the Wilson and the Barona estimation procedures found that at best they misclassified more than one-half of the patients (Silverstein, 1987), or that "both formulas perform essentially at chance levels" (Sweet, Moberg, and Tovian, 1990). An elaboration of the Barona procedure (Barona and Chastain, 1986) improved classification to 80% and 95% of patients and control subjects, respectively. Helmes (1996) applied the 1984 Barona equations in a truly large-scale study (8,660 randomly selected elderly Canadians—excluding three women in their 100s). The three IQ score means calculated from this formula appeared to produce reasonably accurate estimates.
Main effects for sex and education were significant. However, another study comparing estimation techniques found that the 1984 Barona method generated the lowest correlation of estimated FSIQ with actual FSIQ (r = .62) (Axelrod, Vanderploeg, and Schinka, 1999). In a study of the predictive value of demographic variables, Crawford and Allan (1997) found that occupation provided the best estimate of the three WAIS-R IQ scores, with correlations of –.65, –.65, and –.50 for FSIQ, VSIQ, and PSIQ, respectively. As might be expected, occupation and education correlated relatively highly (r = .65). When age and education were added in, the multiple regression results accounted for 53%, 53%, and 32% of the variance in the three IQ scores, respectively. As in most other studies, the contribution of age was negligible. This demographic formula joins word reading tests in not predicting PSIQ effectively.
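A demographic estimation formula of this kind is, mechanically, a multiple regression fitted to a normative sample. The sketch below uses entirely hypothetical coded data to show the mechanics: fit coefficients on (education, occupation code, age), then plug a patient's demographics in to obtain a premorbid estimate that is independent of current test performance. R² here plays the same role as the percent-of-variance figures reported above.

```python
import numpy as np

# Hypothetical normative sample: columns = education (years),
# occupation code (higher = more skilled), age; target = FSIQ.
X = np.array([[8, 1, 45], [12, 3, 38], [12, 2, 52],
              [16, 4, 41], [16, 3, 60], [18, 5, 35]], dtype=float)
y = np.array([88.0, 97.0, 101.0, 108.0, 112.0, 118.0])

# Add an intercept column and fit by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Proportion of variance accounted for (R^2) in the fitting sample
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Premorbid estimate for a hypothetical patient: 14 years of education,
# occupation code 3, age 50 -- no current test scores enter the estimate.
patient = np.array([1.0, 14.0, 3.0, 50.0])
estimate = float(patient @ coef)
print(round(r2, 2), round(estimate, 1))
```

This also makes the formulas' documented weakness concrete: a regression fitted to a normative sample predicts well near the sample's center, so patients whose true premorbid ability lay near either extreme are the ones most misestimated.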
Demographic variables combined with test scores for estimating premorbid ability
Further efforts to improve estimates of premorbid ability have generated formulas that combine word recognition test scores with demographic variables. Strong relationships showed up between scores generated by equations combining NART scores with demographic variables and scores on individual WAIS tests: the greatest factor loadings were on the highly verbal tests (in the .76–.89 range), with almost as strong relationships (.71 and .72) occurring between the equation-generated scores and the Block Design and Arithmetic tests, respectively (J.R. Crawford, Cochrane, Besson, et al., 1990). These workers interpreted the findings as indicating that an appropriate combination of the NART score and demographic variables provides a good measure of premorbid general ability. However, another study examining different subject groups (e.g., Korsakoff's syndrome, Alzheimer's disease) found that the NART (and NART-R) alone correlated better with WIS-A FSIQ than did either of two demographic formulas, nor did combining NART and demographic data enhance NART estimates (Bright et al., 2002).

The Oklahoma Premorbid Intelligence Estimation (OPIE). Another method for developing formulas to enhance the accuracy of premorbid estimations from current test performance combines WIS-A test scores with demographic data (Krull et al., 1995). Formulas for predicting VSIQ, PSIQ, and FSIQ included Vocabulary and Picture Completion scores of the WAIS-R standardization population along with its age, education, occupation, and race data. Predicted and actual correlations were high (r = .87, .78, and .87 for the V-, P-, and FSIQ scales, respectively). OPIE formulas for predicting FSIQ were evaluated on a patient database using raw scores for Vocabulary, Picture Completion, both tests, or the raw score for whichever of these two tests had the highest non-age-corrected scaled score (BEST method, J.G. Scott et al., 1997).
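The selection rule at the heart of the BEST approach can be sketched as follows. The scaled-score lookup tables below are hypothetical placeholders (real conversions come from the WAIS-R manual); the point is only the logic of keeping the raw score of whichever test yields the higher non-age-corrected scaled score.

```python
# Hypothetical raw -> non-age-corrected scaled-score conversions.
# Real tables come from the WAIS-R manual; these values are placeholders.
vocab_table = {40: 9, 45: 10, 50: 11}
pc_table = {12: 8, 15: 10, 18: 12}

def scaled(raw, table):
    return table.get(raw, 0)

def best_raw(vocab_raw, pc_raw):
    """BEST-style selection: keep the raw score of whichever test has the
    higher scaled score; ties default to Vocabulary in this sketch."""
    if scaled(vocab_raw, vocab_table) >= scaled(pc_raw, pc_table):
        return ("Vocabulary", vocab_raw)
    return ("Picture Completion", pc_raw)

# A patient whose Picture Completion held up better than Vocabulary:
test, raw = best_raw(40, 18)   # scaled 9 vs. scaled 12
print(test, raw)               # Picture Completion 18
```

The selected raw score would then be entered, along with the demographic variables, into the corresponding OPIE regression formula; only the selection step is shown here.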
FSIQ BEST method predictions most closely approximated the normative distribution’s mean and standard deviation, a finding interpreted as indicating that the BEST method gave the best estimation. The formula using both Vocabulary and Picture Completion scores produced the least appropriate FSIQ approximations. A more recent version based on test scores and demographic data of the WAIS-III standardization population—OPIE-3—generated the formula OPIE-3 (Best) based on Vocabulary or Matrix Reasoning or their combined raw scores (Schoenberg, Scott, et al., 2003). An additional five formulas for calculating premorbid estimates, based on combinations of WAIS-R test raw scores or individual test raw scores, are given with their prediction errors (WAIS-III FSIQ—OPIE-3) for the 13 WAIS-III age groups (Schoenberg, Duff, et al.,
2006). Besides OPIE-3 (Best), the formulas using only the Vocabulary or only the Matrix Reasoning score gave the closest estimations.

Comparisons between methods for estimating premorbid ability
With so many estimation procedures to choose from, it is natural to wonder which works best. M.R. Basso, Bornstein, and their colleagues (2000), after testing the Barona, revised Barona, OPIE, and BEST-3, concluded that none of the methods based on regression formulas was satisfactory. They pointed out that the phenomenon of regression to the mean affected all these methods, most significantly the Barona (i.e., purely demographic) methods. Scores at the extremes of the IQ range were most vulnerable to estimation errors. The prediction accuracy of other studies (see below) tends to vary with the demographic characteristics of the samples tested. For each of the three WAIS-R IQ scores, Kareken and his colleagues (1995) compared formulas that included parental education level and race with WRAT-R reading scores to estimations derived from the original Barona equation. While the average discrepancy between these two estimates was “moderate,” the reading + parental education technique generated higher scores and a broader range of estimated scores than did Barona estimates or the reading score range. The two methods shared variances of only moderate size (for V-, P-, and FSIQ scores, r = .46, .61, and .55, respectively), indicating that each method “tap[s] different aspects of variance.” In a comparison of WRAT-R impairment estimates with impairment estimates based on education and using TBI patient data, education level produced larger estimates of impairment for the WAIS-R FSIQ score and also for two noncognitive tests: Grip Strength and Finger Tapping (B. Johnstone, Slaughter, et al., 1997). Impairment estimations based on WRAT-R exceeded those predicted by education for each of the two trials of the Trail Making Test. The authors wisely concluded that “different methods of estimating neuropsychological impairment produce very different results” and suggested that neither of these methods is appropriate for estimating premorbid levels of motor skills.
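The regression-to-the-mean artifact that Basso and colleagues identified follows directly from the form of any regression-based estimate: with standardized scores, the prediction is pulled toward the normative mean in proportion to the predictor–criterion correlation, so estimation error is largest at the extremes of the distribution. A worked illustration on an IQ-style metric (mean 100, SD 15; the correlation value is a hypothetical placeholder):

```python
# Regression-based estimates shrink toward the mean: with standardized
# scores, predicted = mean + r * (observed - mean). The pull toward 100
# grows as the true score moves toward either extreme, which is why
# formula-based premorbid estimates are least trustworthy for people of
# very high or very low ability.
def regression_estimate(observed, r, mean=100.0):
    """Predicted criterion score from a standardized predictor score."""
    return mean + r * (observed - mean)

r = 0.70  # illustrative predictor-criterion correlation
for true_score in (70, 100, 130):
    est = regression_estimate(true_score, r)
    print(true_score, est, true_score - est)
# A person scoring 130 is estimated at 121.0 and one scoring 70 at 79.0:
# the error is largest at the extremes and zero at the mean.
```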
A comparison of five methods for predicting premorbid ability level used as a criterion how closely the estimated FSIQ of brain impaired patients approximated the actual FSIQ score of matched control subjects (J.G. Scott et al., 1997). Four methods were based on a combination of WAIS-R test scores and demographic data: three OPIE variants and a procedure using the OPIE equation that generated the highest score (BEST-3); a fifth was the demographically based Barona procedure. The demographically based method
produced the smallest discrepancy between the clinical sample and the matched control group; although it had the highest rate of group classification (based on estimated – obtained scores), all five methods had “an equal degree of overall classification accuracy.” The Barona score had the lowest correlation by far with the subjects’ actual FSIQ scores (r = .62; all others were in the .84 to .88 range). The authors point out discrepancies between these findings and those of previous studies in concluding that the four methods using OPIE equations were “equally effective,” while expressing puzzlement over the Barona method’s history of good performance in predicting FSIQ scores and in classifying subjects. Comparing the Barona and OPIE methods with two reading tests (NAART, WRAT-3), S.L. Griffin and her coworkers (2002) reported that the Barona method was least useful, overestimating WAIS-R “below average” and “average” FSIQ scores and underestimating those in the “above average” ranges. OPIE overestimated the “average” FSIQ scores, NAART overestimated “below average” and “average” FSIQ, and the WRAT-3 underestimated both “below average” and “above average” FSIQ. A more recent comparison of Barona formulas with algorithms based on WTAR and demographic data and with WRAT-3 Reading reported that oral reading is a “reasonable measure of premorbid ability” excepting persons of superior intellectual ability or those with learning disabilities (J.D. Ball et al., 2007). For those of superior ability, the Barona formula predicted most accurately. With premorbid ability scores for 54 neurologically impaired patients, Hoofien, Vakil, and Gilboa (2000) compared two estimation procedures that combine demographic data either with formulas using the highest predicted WAIS-R score(s) (BEST-10) generated from 30 prediction equations (see Vanderploeg and Schinka, 1995) or with scores of the two traditional WIS-A “hold” tests, Vocabulary and Picture Completion (BEST-2).
BEST-10 provided the closest estimates to the premorbid scores, but the authors caution that, since “some isolated skills or abilities” can lead to overestimates, clinical judgment is also required. None of these methods satisfies the clinical need for a reasonably accurate estimate of premorbid ability. All of them, however, show the value of extra-test data and the penalties paid for restricting access to any particular kind of information when seeking the most suitable comparison standards for a cognitively impaired patient.

THE BEST PERFORMANCE METHOD
A simpler method utilizes test scores, other observations, historical data, and clinical judgment. This is the best performance method, in which the level of the best performance—whether it be the highest score or set of scores, nonscorable behavior not necessarily observed in a formal testing situation, or evidence of premorbid achievement—serves as the best estimate of premorbid ability. Once the highest level of functioning has been identified, it becomes the standard against which all other aspects of the patient’s current performance are compared. The best performance method rests on a number of assumptions that guide the examiner in its practical applications. Basic to this method is the assumption that, given reasonably normal conditions of physical and mental development, there is one performance level that best represents each person’s cognitive abilities and skills generally. This assumption follows from the well-documented phenomenon of the transituational consistency of cognitive behavior. According to this assumption, the performance level of most normally developed, healthy persons on most tests of cognitive functioning probably provides a reasonable estimate of their performance level on most other cognitive tasks (see B.D. Bell and Roper, 1998, for a discussion of this phenomenon at the high average ability level; Dodrill, 1999, gives an example at the low average level). This assumption allows the examiner to estimate a cognitively impaired patient’s premorbid general ability level from one or, better yet, several current test scores while also taking into account other indicators such as professional achievement or evidence of a highly developed skill. Intraindividual differences in ability levels may vary with a person’s experience and interests, perhaps with sex and handedness, and perhaps on the basis of inborn talents and deficiencies.
Yet, by and large, persons who perform well in one area perform well in others; and the converse also holds true: a dullard in arithmetic is less likely to spell well than is someone who has mastered calculus. This assumption does not deny its many exceptions, but rather speaks to a general tendency that enables the neuropsychological examiner to use test performances to make as fair an estimate as possible of premorbid ability in neurologically impaired persons with undistinguished school or vocational careers. A corollary assumption is that marked discrepancies between the levels at which a person performs different cognitive functions or skills probably give evidence of disease, developmental anomalies, cultural deprivation, emotional disturbance, or some other condition that has interfered with the full expression of that person’s cognitive potential. An analysis of the WAIS-R normative population into nine average score
“core” profiles exemplifies this assumption: only one profile, accounting for 8.2% of this demographically stratified sample, showed a variation of as much as 6 scaled score points, and one that includes 6.2% of the sample showed a 5-point disparity between the average high and low scores (McDermott et al., 1989). The rest of the scatter discrepancies are in the 0–4 point range. However, as Schretlen et al. (2009) and L.M. Binder, Iverson, and Brooks (2009) have shown, large discrepancies do occur in healthy controls, again emphasizing why the clinician needs to take multiple factors into consideration when making a determination about whether a particular neuropsychological performance reflects actual impairment or some normal variation.

Another assumption is that the cognitive potential or capacity of adults can be either realized or reduced by external influences; it is not possible to function at a higher level than biological capacity and developmental opportunity will permit. Brain injury—or cultural deprivation, poor work habits, or anxiety—can only depress cognitive abilities (A. Rey, 1964). An important corollary to this assumption is that, for cognitively impaired persons, the least depressed abilities may be the best remaining behavioral representatives of the original cognitive potential (see Axelrod, Vanderploeg, and Schinka, 1999; Hoofien, Vakil, and Gilboa, 2000; Krull et al., 1995; J.G. Scott et al., 1997). The phenomenon of overachievement (people performing better than their general ability level would seem to warrant) appears to contradict this assumption; but in fact, overachievers do not exceed their biological/developmental limitations. Rather, they expend an inordinate amount of energy and effort on developing one or two special skills, usually to the neglect of others.
Academic overachievers generally know their material mostly by rote and reveal their limitations on complex mental operations or highly abstract concepts enjoyed by people at superior and very superior ability levels. A related assumption is that few persons consistently function at their maximum potential, for cognitive effectiveness can be compromised in many ways: by illness, educational deficiencies, impulsivity, test anxiety, disinterest—the list could go on and on (Shenk, 2010). A person’s performance of any task may be the best that can be done at that time but still only indicates a floor, not the ceiling, of the level of abilities involved in that task. Running offers an analogy: no matter how fast the runner, the possibility remains that she could have reached the goal even faster, if only by a fraction of a second. Another related assumption is that, within the limits of chance variations, the ability to perform a task is at least as high as a person’s highest level of performance of that task. It cannot be less. This assumption may not seem to be
so obvious when a psychologist is attempting to estimate a premorbid ability level from remnants of abilities or knowledge. In the face of a generally shabby performance, examiners may be reluctant to extrapolate an estimate of superior premorbid ability from one or two indicators of superiority, such as a demonstration of how to use a complicated machine or the apt use of several abstract or uncommon words, unless they accept the assumption that prerequisite to knowledge or the development of any skill is the ability to learn or perform it. A patient who names Grant as president of the United States during the Civil War and says that Greece is the capital of Italy but then identifies Einstein and Marie Curie correctly is demonstrating a significantly higher level of prior intellectual achievement than the test score suggests. The poor responses do not negate the good ones; the difference between them suggests the extent to which the patient has suffered cognitive deterioration. It is also assumed that a patient’s premorbid ability level can be reconstructed or estimated from many different kinds of behavioral observations or historical facts. Material on which to base estimates of original cognitive potential may be drawn from interview impressions, reports from family and friends, test scores, prior academic or employment level, school grades, army rating, or an intellectual product such as a letter or an invention. Information that a man had earned a Ph.D. in physics or that a woman had designed a set of complex computer programs is all that is needed to make an estimate of very superior premorbid intelligence, regardless of present mental dilapidation. Except in the most obvious cases of unequivocal high achievement, the estimates should be based on information from as many sources as possible to minimize the likelihood that significant data have been overlooked, resulting in an underestimation of the patient’s premorbid ability level. 
Verbal fluency can be masked by shyness, or a highly developed graphic design talent can be lost to a motor paralysis. Such achievements might remain unknown without careful testing or inquiry. The value of the best performance method depends on the appropriateness of the data on which estimates of premorbid ability are founded. This estimation method places on the examiner the responsibility for making an adequate survey of the patient’s accomplishments and residual abilities. This requires sensitive observation with particular attention to qualitative aspects of the patient’s test performance; good history taking, including—when possible and potentially relevant—contacting family, friends, and other likely sources of information about the patient such as schools and employers; and enough testing to obtain an overview of the patient’s cognitive abilities in each major functional domain.
The best performance method has very practical advantages. Perhaps most important is that a broad range of the patient’s abilities is taken into account in identifying a comparison standard for evaluating deficit. By looking at the whole range of cognitive functions and skills for a comparison standard, examiners are least likely to bias their evaluations of any specific group of patients, such as those with depressed verbal functions. Moreover, examiners using this method are not bound to one battery of tests or to tests alone, for they can base their estimates on nontest behavior and behavioral reports as well. For patients whose general functioning is too low or too spotty for them to complete a standardized adult test, or who suffer specific sensory or motor defects, children’s tests or tests of specific skills or functions used for career counseling or job placement provide opportunities to demonstrate residual cognitive abilities. In general, the examiner should not rely on a single high test score for estimating premorbid ability unless history or observations provide supporting evidence. The examiner also needs to be alert to overachievers whose highest scores are generally on vocabulary, general information, or arithmetic tests, as these are the skills most commonly inflated by parental or school pressure on an ordinary student. Overachievers frequently have high memory scores, too. They do not do as well on tests of reasoning, judgment, original thinking, and problem solving, whether or not words are involved. One or two high scores on memory tests should not be used for estimating the premorbid ability level since, of all the cognitive functions, memory is the least reliable indicator of general cognitive ability. Dull people can have very good memories; some extremely bright people have been notoriously absentminded. It is rare to find only one outstandingly high score in a complete neuropsychological examination.
Usually even severely impaired patients produce a cluster of relatively higher scores in their least damaged area of functioning so that the likelihood of overestimating the premorbid ability level from a single, spuriously high score is slight. The examiner is much more likely to err by underestimating the original ability level of the severely brain injured patient who is unable to perform well on any task and for whom little information is available. In criticizing this method as prone to systematic overestimates of premorbid ability, Mortensen and his colleagues (1991) give some excellent examples of how misuse of the best performance method can result in spurious estimates. Most of their “best performance” estimates were based solely on the highest score obtained by normal control subjects on a WIS-A battery. What
they found, of course, was that the highest score among tests contributing to a summation score (i.e., an IQ score) is always higher than the IQ score since the IQ score is essentially a mean of all the scores, both higher and lower. Therefore, in cognitively intact subjects, the highest WIS-A test score is not an acceptable predictor of the WIS-A IQ score. Moreover, in relying solely on the highest score, the Mortensen study violated an important directive for identifying the best performance: that the estimate should take into account as much information as possible about the patient and not rely on test scores alone. In most cases, the best performance estimate will be based on a cluster of highest scores plus information about the patient’s education and career, and when possible, it will include school test data (Baade and Schoenberg, 2004). Thus, developing a comparison standard using this method is not a simple mechanical procedure but calls upon clinical judgment and sensitivity to the many different conditions and variables that can influence a person’s test performances.

THE DEFICIT MEASUREMENT PARADIGM

Once the comparison standard has been determined, whether directly from population norms, premorbid test data, or historical information, or indirectly from current test findings and observation, the examiner may assess deficit. This is done by comparing the level of the patient’s present cognitive performances with the expected level—the comparison standard. Discrepancies between the expected level and present functioning are then evaluated for statistical significance (see pp. 721–723). A statistically significant discrepancy between expected and observed performance levels for any cognitive function or activity indicates a probability that this discrepancy reflects a cognitive deficit. This comparison is made for each test score. For each comparison lacking premorbid test scores, the comparison standard is the estimate of original ability.
By chance alone, a certain amount of variation (scatter) between test scores can be expected for even the most normal persons (L.M. Binder, Iverson, and Brooks, 2009). Although these chance variations tend to be small (The Psychological Corporation, 2008), they can vary with the test instrument and with different scoring systems. If significant discrepancies occur for more than one test score, a pattern of deficit may emerge. By comparing any given pattern of deficit with patterns known to be associated with specific neurological or psychological conditions, the examiner may be able to identify etiological and remedial possibilities for the patient’s problems. When
differences between expected and observed performance levels are not statistically significant, deficit cannot be inferred on the basis of just a few higher or lower scores. For example, it is statistically unlikely that a person whose premorbid ability level was decidedly better than average cannot solve fourth- or fifth-grade arithmetic problems on paper or name at least 16 animals in one minute. If the performance of a middle-aged patient whose original ability is estimated at the high average level fails to meet these relatively low performance levels, then an assessment of impairment of certain arithmetic and verbal fluency abilities can be made with confidence. If the same patient performs at an average level on tests of verbal reasoning and learning, that discrepancy is not significant even though performance is somewhat lower than expected. These somewhat lowered scores need to be considered in any overall evaluation in which significant impairment has been found in other areas. However, when taken by themselves, average scores obtained by patients of high average mental competence do not indicate impairment, since they may be due to normal score fluctuations. In contrast, just average verbal reasoning and learning scores achieved by persons of estimated original superior endowment do represent a statistically significant discrepancy, so that in very bright persons, average scores can indicate deficit. With increasing availability of not only normative data but also deficit performance data from patient groups with specific diseases like multiple sclerosis (Parmenter et al., 2010) or mixed groups of neurologically and/or neuropsychiatrically impaired persons (Crawford, Garthwaite, and Slick, 2009), new neuropsychological data can now be incorporated into data bases that provide improved comparison information.
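One conventional way to formalize a "statistically significant discrepancy" between a comparison standard and an observed score uses the psychometric standard error of the difference between two scores. The sketch below assumes both scores sit on an IQ-style metric (mean 100, SD 15); the reliability coefficients are illustrative placeholders, not values from any particular test manual.

```python
import math

# Sketch of a discrepancy test between an expected (comparison-standard)
# score and an observed score, both on an IQ-style metric (mean 100, SD 15).
# The standard error of a difference between two scores with equal SDs is
# SD * sqrt(2 - r_xx - r_yy), where r_xx and r_yy are the two measures'
# reliabilities (the defaults below are illustrative placeholders).
def discrepancy_z(expected, observed, sd=15.0, r_xx=0.90, r_yy=0.85):
    se_diff = sd * math.sqrt(2.0 - r_xx - r_yy)
    return (expected - observed) / se_diff

# A patient estimated premorbidly at 115 who now obtains 95:
z = discrepancy_z(expected=115, observed=95)
significant = abs(z) >= 1.96   # two-tailed .05 criterion
```

With these illustrative reliabilities the standard error of the difference is 7.5 points, so a 20-point drop yields z ≈ 2.67 and would be flagged, whereas the same 20-point difference would not be surprising for scores near the comparison standard's own confidence band. This is the arithmetic behind the text's point that average scores signal deficit only in persons of estimated superior premorbid endowment.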
Indeed, the field of neuroinformatics (see Jagaroo, 2009) is beginning to influence clinical neuropsychology with ever-expanding historical, genetic, normative, and clinical information for the clinician to take into consideration when determining whether a deficit is present. Establishing a premorbid baseline and then following the patient with neuropsychological procedures provides an ideal strategy for categorizing the neurocognitive and neurobehavioral consequences of diseases and disorders of the brain (B.L. Brooks, Strauss, et al., 2009). Identifiable patterns of cognitive impairment can be demonstrated by the deficit measurement method. Although the discussion here has focused on assessment of deficit where a neurological disorder is known or suspected, this method can be used to evaluate the cognitive functioning of psychiatrically disabled or educationally or culturally deprived persons as well because the
evaluation is conducted within the context of the patient’s background and experiences, taking into account historical data and the circumstances of the patient’s present situation (Gollin et al., 1989; W.G. Rosen, 1989). Some of these same principles can be applied to estimating premorbid functioning in children while keeping in mind that the interaction between the age when the brain injury occurred and the continuing development of the child’s brain makes predictions more difficult (Schoenberg, Lange, Saklofske, et al., 2008). Yet the evaluation of children’s cognitive disorders follows the same model (Baron, 2004, 2008; Pennington, 2009; Sattler, 2001; E.M. Taylor, 1959). It is of use not only as an aid to neurological or psychiatric diagnosis but also in educational and rehabilitation planning.
1 Manual out of print; see word list, p. 562.
2 See E. Strauss, Sherman, and Spreen (2006) for the pronunciation guide (p. 191) and formulas for estimating WAIS-R IQ scores from NAART scores (p. 193).
5 The Neuropsychological Examination: Procedures

Psychological testing is a … process wherein a particular scale is administered to obtain a specific score … In contrast, psychological assessment is concerned with the clinician who takes a variety of test scores, generally obtained from multiple test methods, and considers the data in the context of history, referral information, and observed behavior to understand the person being evaluated, to answer the referral questions, and then to communicate findings to the patient, his or her significant others, and referral sources.
G.J. Meyer, S.E. Finn, L.D. Eyde, et al., 2001
Two rules should guide the neuropsychological examiner: (1) treat each patient as an individual; (2) think about what you are doing. Other than these, the enormous variety of neurological conditions, patient capacities, and examination purposes requires a flexible, open, and creative approach. General guidelines for the examination can be summed up in the injunction: Tailor the examination to the patient’s needs, abilities, and limitations, and to special examination requirements. By adapting the examination to the patient in a sensitive and resourceful manner rather than the other way around, the examiner can answer the examination questions most fully at the least cost and with the greatest benefit to the patient. The neuropsychological examination can be individually tailored in two ways. Examiners can select examination techniques and tests for their appropriateness to the patient and for their relevancy to those diagnostic or planning questions that prompted the examination and that arise during its course. Ideally, the examiner will incorporate both selection goals in each examination, as tests and time permit. So many assessment tools are available that an important step is to sort through them to select those that are expected to yield the fullest measure of information. The examiner can also adapt test procedures to a patient’s condition when this is necessary to gain a full measure of information.

CONCEPTUAL FRAMEWORK OF THE EXAMINATION
Purposes of the Examination
Neuropsychological examinations may be conducted for any number of purposes: to explain behavior; to aid in diagnosis; to help with management, care, and planning; to evaluate the effectiveness of a treatment technique; to provide information for a legal matter; or to do research. In many cases, an examination may be undertaken for more than one purpose. In order to know what kind of information should be obtained in the examination, the examiner must have a clear idea of the reasons for which the patient is being seen. Although the referral question usually defines the chief purpose for examining the patient, the examiner needs to evaluate its appropriateness. Since most referrals for neuropsychological assessment come from persons who do not have expertise in neuropsychology, it is not surprising that questions may be poorly formulated or beside the point. Thus, the referral may ask for an evaluation of the patient’s capacity to return to work after a stroke or head injury when the patient’s actual need is for a rehabilitation program and an evaluation of mental capacity to handle funds. Frequently, the neuropsychological assessment will address several issues, each important to the patient’s welfare, although the referral may have been concerned with only one. Talking to the referral source often is the best way to clarify all the issues. When that is not possible, the neuropsychologist must decide the content and direction of the neuropsychological examination based on the history, the interview, and the patient’s performance in the course of the examination.
Examination Questions

The purpose(s) of the examination should determine its overall thrust and the general questions that need to be asked. Examination questions fall into one of two categories. Diagnostic questions concern the nature of the patient’s symptoms and complaints in terms of their etiology and prognosis; i.e., they ask whether the patient has a neuropsychologically relevant condition and, if so, what it is. Descriptive questions inquire into the characteristics of the patient’s condition; i.e., they ask how the patient’s problem is expressed. Serial studies question whether the condition has changed from a previous examination. Within these two large categories are specific questions that may each be best answered through somewhat different approaches.

Diagnostic questions
Diagnostic questions are typically asked when patients are referred for a neuropsychological evaluation following the emergence of a cognitive or
behavioral problem without an established etiology. Questions concerning the nature or source of the patient’s condition are always questions of differential diagnosis. Whether implied or directly stated, these questions ask which of two or more diagnostic pigeonholes best suits the patient’s behavior. In neuropsychology, diagnostic categorization may rely on screening techniques to distinguish probable “neurological impairment” from a “psychiatric or emotional disturbance,” or require a more focused assessment to discriminate a dementing illness from an age-related decline, or determine whether a patient’s visual disorder stems from impaired spatial abilities or impaired object recognition. In large part, diagnostic evaluations depend on syndrome analysis (C.L. Armstrong, 2010; Heilman and Valenstein, 2011; Mesulam, 2000c). The behavioral consequences of many neurological conditions have been described and knowledge about an individual patient (history, appearance, interview behavior, test performance) can be compared to these well-described conditions. In other cases, an unusual presentation might be analyzed on the basis of a theoretical understanding of brain-behavior relationships (e.g., Darby and Walsh, 2005; Farah and Feinberg, 2000; Ogden, 1996). In looking for neuropsychological evidence of brain disease, the examiner may need to determine whether the patient’s level of functioning has deteriorated. Thus, a fundamental question will be, “How good was the patient at his or her best?” When the etiology of a patient’s probable brain dysfunction is unknown, risk factors for brain diseases should be taken into account, such as predisposing conditions for vascular disease, exposure to environmental toxins, a family history of neurological disease, or presence of substance abuse. Differential diagnosis can sometimes hinge on data from the personal history, the nature of the onset of the condition, and circumstances surrounding its onset. 
In considering diagnoses the examiner needs to know how fast the condition is progressing and the patient’s mental attitude and personal circumstances at the time problems emerged. The examination addresses which particular brain functions are compromised, which are intact, and how the specific deficits might account for the patient’s behavioral anomalies. The examiner may also question whether a patient’s pattern of intact and deficient functions fits a known or reasonable pattern of brain disease or fits one pattern better than another. The diagnostic process involves the successive elimination of alternative possibilities, or hypotheses (see also pp. 130–131). Rarely does the examiner have no information from which to plan an assessment. The examiner can usually formulate the first set of hypotheses on the basis of the referral
question, information obtained from the history or informants, and the initial impression of the patient. Each diagnostic hypothesis is tested by comparing what is known of the patient’s condition with what is expected for that particular diagnostic classification. As the examination proceeds, the examiner can progressively refine general hypotheses (e.g., that the patient is suffering from a brain disorder) into increasingly specific hypotheses (e.g., that the disorder most likely stems from a progressive dementing condition; that this progressive disorder is more likely to be an Alzheimer’s type of dementia, a frontotemporal dementia, or a multi-infarct dementia). Neuropsychologists do not make neurological diagnoses, but they may provide data and diagnostic formulations that contribute to the diagnostic conclusions. However, when history, simple observation, or well-established laboratory techniques clearly demonstrate a neurological disorder, neuropsychological testing is not needed to document brain damage (see also Holden, 2001).

Descriptive questions
When a diagnosis is established, many questions typically call for behavioral descriptions. Questions about specific capacities frequently arise in the course of vocational and educational planning. They become especially important when planning involves withdrawal or return of normal adult rights and privileges, such as a driving license or legal mental capacity. In these cases, questions about the patient’s competencies may be at least as important as those about the patient’s deficits, and the neuropsychological examination may not be extensive, but rather will focus on the relevant skills and functions. Questions also may arise about the patient’s rehabilitation potential and the best approach to use. The effectiveness of remediation techniques and rehabilitation programs depends in part on accurate appraisals of what the candidate patient can and cannot do (Clare et al., 2004; Ponsford, 2004, passim; Sohlberg and Mateer, 2001). Foremost, rehabilitation workers must know how aware their patients are of their condition and the patients’ capacity to incorporate new information and skills (Clare et al., 2004; Eslinger, Grattan, and Geder, 1995; Prigatano, 2010). As the sophistication of these programs increases, accurate and appropriate behavioral descriptions can reduce much of the time spent in figuring out a suitable program for the patient. Competent assessment can enable rehabilitation specialists to set realistic goals and expend their efforts efficiently (Ponsford, 2004, passim; Wrightson and Gronwall, 1999). Longitudinal studies involving repeated measures over time are needed when monitoring the course of disease progression, assessing improvement from an acute event such as head injury or stroke, or documenting treatment
effectiveness. In such cases, a broad range of functions usually comes under regular neuropsychological review. An initial examination, consisting of a full-scale assessment of each of the major functions, sometimes called a baseline study, provides the first data set against which the findings of later examinations will be compared. Regularly repeated assessments give information about the rate and extent of improvement or deterioration and about relative rates of change between functions. Most examinations address more than one question. Few examinations should have identical questions and procedures. An examiner who does much the same thing with almost every patient may not be attending to the specific referral question, to the patient’s individuality and needs, or to the aberrations seen during the examination that point to specific defects and particular problems. One-size-fits-all examinations often are unduly lengthy and costly.
CONDUCT OF THE EXAMINATION
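The logic of comparing a follow-up score against a baseline study is often formalized as a reliable change index, which asks whether the observed change exceeds what measurement error alone would produce. The sketch below uses the standard Jacobson-Truax formulation; the score values, standard deviation, and test-retest reliability are illustrative placeholders, not norms from any published test.

```python
import math

def reliable_change_index(baseline, retest, sd, test_retest_r):
    """Jacobson-Truax reliable change index: the retest-baseline
    difference expressed in standard errors of the difference."""
    sem = sd * math.sqrt(1 - test_retest_r)  # standard error of measurement
    se_diff = math.sqrt(2) * sem             # standard error of the difference
    return (retest - baseline) / se_diff

# Illustrative values: a scale with SD 15 and test-retest r = .90
rci = reliable_change_index(baseline=100, retest=88, sd=15, test_retest_r=0.90)
# A common (illustrative) criterion: |RCI| > 1.96 suggests change
# beyond measurement error at roughly the .05 level
print(round(rci, 2), abs(rci) > 1.96)
```

A 12-point drop on this hypothetical scale does not quite exceed the 1.96 criterion, illustrating why a decline that looks clinically meaningful may still fall within the test's margin of error.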
Examination Foundations
Evidence-based practice is the integration of clinical expertise with the best research evidence and patient values (Chelune, 2010; Sackett et al., 2000). The integration of these three components in the neuropsychological examination has the highest likelihood of achieving the most accurate and appropriate conclusions about the patient and the most useful recommendations.
The examiner’s background
The knowledge base in medicine, psychology, and the basic sciences is expanding at an increasing rate, making it difficult to be a well-rounded clinician. Clinicians are thus becoming more and more specialized as their practices incorporate a decreasing portion of clinical and research knowledge. Clinicians cannot help but bring their own biases and preconceptions to the diagnostic process based on their knowledge, experiences and views, and even personal life events. Clinicians therefore have an ethical responsibility to update their knowledge and to be aware of their professional biases and of the impact of these and their personal experiences on the assessment process. Since a clinician can be an expert only in a relatively small area of knowledge, it is important to try to “know what you do not know” and thus, when to refer to someone with that knowledge. In order to conduct neuropsychological assessments responsibly and
effectively, the examiner must have a strong background in neurological sciences. Familiarity with neuroanatomy, neurophysiological principles, and neuropathology is a prerequisite for knowing what questions to ask, how particular hypotheses can be tested, or what clues or hunches to pursue. The neuropsychological examiner’s background in cognitive psychology should include an understanding of the complex, multifaceted, and interactive nature of cognitive functions. Studies in clinical psychology are necessary for knowledge of psychiatric syndromes and of test theory and practice. Even to know what constitutes a neuropsychologically adequate review of the patient’s mental status requires a broad understanding of brain function and its neuroanatomical correlates. Moreover, the examiner must have had enough clinical training and supervised “hands on” experience to know how to conduct an interview and what extratest data (e.g., personal and medical history items, school grades and reports) are needed to make sense out of any given set of observations and test scores, to weigh all of the data appropriately, and to integrate them in a theoretically meaningful and practically usable manner. These requirements are spelled out in detail in the Policy Statement of the Houston Conference on Specialty Education and Training in Clinical Neuropsychology (Hannay, Bieliauskas, Crosson, et al., 1998, pp. 160–165). Further information about examiner qualifications can be found in J.T. Barth, Pliskin, et al. (2003), Bush and Drexler (2002, passim), and Johnson-Greene and Nisley (2008).
The patient’s background
In neuropsychological assessment, few if any single bits of information are meaningful in themselves. A test score, for example, takes on diagnostic or practical significance only when compared with other test scores, with academic or vocational accomplishments or aims, or with the patient’s interview behavior. Even when the examination has been undertaken for descriptive purposes only, as after a head injury, it is important to distinguish a low test score that is as good as the patient has ever done from a similarly low score when it represents a significant loss from a much higher premorbid performance level. Thus, in order to interpret the examination data properly, each bit of data must be evaluated within a suitable context (Darby and Walsh, 2005; Vanderploeg, 1994) or it may be misinterpreted. For example, cultural experience and quality of education influence how older African Americans approach testing, and adjustments for these variables may improve interpretation of neuropsychological data (Fyffe et al., 2011; Manly, Byrd, et al., 2004).
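One common way to bring demographic context into score interpretation is regression-based norming: the patient's raw score is judged against the score predicted from variables such as age and education, in units of the residual standard deviation. The sketch below illustrates the arithmetic only; every coefficient is an invented placeholder, not a published norm, and real adjustments would come from normative regressions such as those cited above.

```python
def demographically_adjusted_z(raw, age, education,
                               b0=30.0, b_age=-0.15, b_edu=1.2, sd_resid=5.0):
    """Regression-based norming: express the raw score as a z-score
    relative to the value predicted from demographic variables.
    All coefficients here are made-up placeholders for illustration."""
    predicted = b0 + b_age * age + b_edu * education
    return (raw - predicted) / sd_resid

# The same raw score carries different meaning in different contexts:
z_young_educated = demographically_adjusted_z(raw=35, age=30, education=16)
z_older_less_ed = demographically_adjusted_z(raw=35, age=75, education=8)
print(round(z_young_educated, 2), round(z_older_less_ed, 2))
```

Under these hypothetical coefficients, an identical raw score falls well below expectation for a young, highly educated examinee but above expectation for an older examinee with less schooling, which is precisely why the text insists that scores be evaluated in context.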
The relevant context will vary for different patients and different aspects of the examination. Usually, therefore, the examiner will want to become informed about many facets of the patient’s life. Some of this information can be obtained from the referral source, from records, from hospital personnel working with the patient, or from family, friends, or people with whom the patient works. Patients who can give their own history and discuss their problems reasonably well will be able to provide much of the needed information. Having a broad base of data about the patient will not guarantee accurate judgments, but it can greatly reduce errors. The more examiners know about their patients prior to the examination, the better prepared will they be to ask relevant questions and choose tests that are germane to the presenting problems. Context for interpreting the examination findings may come from any of five aspects of the patient’s background: (1) social history, (2) present life circumstances, (3) medical history and current medical status, (4) circumstances surrounding the examination, and (5) cultural background. Sometimes the examiner has information about only two or three of them. Many dementia patients, for example, cannot give a social history or tell much about their current living situation. However, with the aid of informants and records as possible sources, the examiner should check into each of these categories of background information. The practice of blind analysis—in which the examiner evaluates a set of test scores without benefit of history, records, or ever having seen the patient—may be useful for teaching or reviewing a case but is particularly inappropriate as a basis for clinical decisions. 1. Social history. Information about the patient’s educational and work experiences may be the best source of data about the patient’s original cognitive potential. 
When reviewing educational and work history, it is important to know the person’s highest level of functioning and when that was. Unexpected findings do occur, as when someone of low educational background performs well above the average range on cognitive tests. Social history will often show that these bright persons had few opportunities or little encouragement for more schooling. In cases where patients come from marginal or inadequate schools, quality of education, not years of education, may be the best indication of educational experience (Manly, Jacobs, Touradji, et al., 2002). Military service history may contain important information, too. Military service gave some blue-collar workers their only opportunity to display their natural talents. A discussion of military service experiences may
also unearth a head injury or illness that the patient had not thought to mention to a less experienced or less thorough examiner. Attention should be paid to work and educational level related to the medical history. A 45-year-old longshoreman, admitted to the hospital for seizures, had a long history of declining occupational status. He had been a fighter pilot in World War II, had completed a college education after the war, and had begun his working career in business administration. Subsequent jobs were increasingly less taxing mentally. Just before his latest job he had been a foreman on the docks. Angiographic studies displayed a massive arteriovenous malformation (AVM) that presumably had been growing over the years. Although hindsight allows us to surmise that his slowly lowering occupational level reflected the gradual growth of this space-displacing lesion, it was only when his symptoms became flagrant that his occupational decline was appreciated as symptomatic of the neuropathological condition.
Knowing the socioeconomic status of the patient’s family of origin as well as current socioeconomic status is often necessary for interpreting cognitive test scores—particularly those measuring verbal skills, which tend to reflect the parents’ social class as well as academic achievement (Sattler, 2008a,b). In most cases, the examiner should ask about the education of parents, siblings, and other important family members. Educational and occupational background may also influence patients’ attitudes about their symptoms. Those who depend largely on verbal skills in their occupation become very distressed by a mild word finding problem, while others who are not accustomed to relying much on verbal skills may be much less disturbed by the same kind of impairment or may even be able to disregard it. The patient’s personal—including marital—history may provide relevant information, such as the obvious issues of number of spouses, partners, or companions; length of relationship(s); and the nature of the dissolution of each significant alliance. The personal history may tell a great deal about the patient’s long-term emotional stability, social adjustment, and judgment. It may also contain historical landmarks reflecting neuropsychologically relevant changes in social or emotional behavior. Information about the spouse or most significant person in the patient’s life frequently is useful for understanding the patient’s behavior (e.g., anxiety, dependency) and is imperative for planning and guidance. This information may include health, socioeconomic background, current activity pattern, and appreciation of the patient’s condition. Knowledge about the patient’s current living situation and of the spouse’s or responsible person’s condition is important both for understanding the patient’s mood and concerns—or lack of concern—about the examination and the disorder that prompted it, and for gauging the reliability of the informant closest to the patient. 
Other aspects of the patient’s background should also be reviewed. When
antisocial behavior is suspected, the examiner will want to inquire about confrontations with the law. A review of family history is obviously important when a hereditary condition is suspected. Moreover, awareness of family experiences with illness and family attitudes about being sick may clarify many of the patient’s symptoms, complaints, and preoccupations. If historical data are the bricks, then chronology is the mortar needed to reconstruct the patient’s history meaningfully. For example, the fact that the patient has had a series of unfortunate marriages is open to a variety of interpretations. In contrast, a chronology-based history of one marriage that lasted for two decades, dissolved more than a year after the patient was in coma for several days as a result of a car accident, and then was followed by a decade filled with several brief marriages and liaisons suggests that the patient may have sustained a personality change secondary to the head injury. Additional information that the patient had been a steady worker prior to the accident but since has been unable to hold a job for long gives further support to that hypothesis (e.g., for the classic example of a good worker whose head injury made him unemployable, see Macmillan’s An Odd Kind of Fame: Stories of Phineas Gage [2000]). As another example, an elderly patient’s complaint of recent mental slowing suggests a number of diagnostic possibilities: that the slowing followed the close occurrence of widowhood, retirement, and change of domicile should alert the diagnostician to the likelihood of depression. 2. Present life circumstances. When inquiring about the patient’s current life situation, the examiner should go beyond factual questions about occupation, income and indebtedness, family statistics, and leisure activities to find out the patient’s views and feelings about these issues. 
The examiner needs to know how long a working patient has held the present job, what changes have taken place or are expected at work, whether the work is enjoyed, and whether there are problems on the job. The examiner should attempt to learn about the quality of the patient’s family life and such not uncommon family concerns as troublesome in-laws, acting-out adolescents, and illness or substance abuse among family members. New sexual problems can appear as a result of brain disease, or old ones may complicate the patient’s symptoms and adjustment to a dysfunctional condition. Family problems, marital discord, and sexual dysfunction can generate so much tension that symptoms may be exacerbated or test performance adversely affected. 3. Medical history and current medical status. Information about the patient’s medical history will usually come from a treating physician, a review of
medical charts when possible, and reports of prior examinations as well as the patient’s reports. Discrepancies between patients’ reports of health history and medical records may give a clue to the nature of their complaints or to the presence of a neuropsychological disorder. When enough information is available to integrate the medical history with the social history, the examiner can often get a good idea of the nature of the condition and the problems created by it. Medication records may prove significant in understanding the patient’s health and functioning. Some aspects of the patient’s health status that are frequently overlooked in the usual medical examination may have considerable importance for neuropsychological assessment. These include visual and auditory defects that may not be documented or even examined, motor disabilities, or mental changes. In addition, sleeping and eating habits may be overlooked in a medical examination, although sleep loss can impair cognition (Waters and Bucks, 2011). Too little or too much sleep, or a change in eating habits, can be important symptoms of depression or brain disease. 4. Circumstances surrounding the examination. Test performance can be evaluated accurately only in light of the reasons for referral and the relevance of the examination to the patient. The patient’s values and needs will determine the patient’s expectations and response to the evaluation. For example, does the patient stand to gain money or lose a custody battle as a result of the examination? May a job or hope for earning a degree be jeopardized by the findings? Only by knowing what the patient believes may be gained or lost as a result of the neuropsychological evaluation can the examiner appreciate how the patient perceives the examination.
Examination Procedures
Patients’ cooperation in the examination process is extremely important, and one of the neuropsychologist’s main tasks is to enlist such cooperation.
A.-L. Christensen, 1989
Referral
The way patients learn of their referral for neuropsychological assessment can affect how they view the examination, thus setting the stage for such diverse responses as cooperation, anxiety, distrust, and other attitudes that may modify test performance (J.G. Allen et al., 1986; Bennett-Levy, Klein-Boonschate, et al., 1994). Ideally, referring persons explain to patients, and to their families
whenever possible, the purpose of the referral, the general nature of the examination with particular emphasis on how this examination might be helpful or, if it involves a risk, what that risk might be, and the patient’s choice in the matter (Armengol, 2001). Neuropsychologists who work with the same referral source(s), such as residents in a teaching hospital, a neurosurgical team, or a group of lawyers, can encourage this kind of patient preparation. When patients receive no preparation and hear they are to have a “psychological” evaluation, some may come to the conclusion that others think they are emotionally unstable or crazy. Often it is not possible to deal directly with referring persons. Rather than risk a confrontation with a poorly prepared and negativistic or fearful patient, some examiners routinely send informational letters to new patients, explaining in general terms the kinds of problems dealt with and the procedures the patient can anticipate (see J. Green, 2000; Kurlychek and Glang, 1984, for examples of such a letter). Asking the patients at the beginning of the evaluation what they have been told about the reason for the referral helps determine their understanding and clarify what information should be provided at the outset.
Patient’s questions
Establishing what the patient, or the family when appropriate, hopes to learn from the examination will help guide procedures. The patient’s questions may not match those of the referral source or the examiner. Nevertheless, they should be incorporated into the examination planning as much as possible. For example, the referral source may want to know a diagnosis while the patient may want to know whether returning to work is possible. The examiner should educate the patient about how the examination may answer these questions or, if necessary, help the patient reformulate the questions into ones that might reasonably be answered.
When to examine
Sudden onset conditions, e.g., trauma, stroke. Within the first few weeks or months following a sudden onset event, a brief examination may be necessary for several reasons: to ascertain the patient’s ability to comprehend and follow instructions; to evaluate mental capacity when the patient may require a guardian; or to determine whether the patient can retain enough new information to begin a retraining program. Early on, the examiner can use brief evaluations to identify areas of impaired cognition that will be important
to check at a later time. A subtle neuropsychological deficit is easier to recognize when it has previously been observed in full flower. Acute or postacute stages. As a general rule, a full assessment should not be undertaken during this period. Typically, up to the first six to 12 weeks following the event, changes in the patient’s neuropsychological status can occur so rapidly that information gained one day may be obsolete the next. Moreover, fatigue overtakes many of these early stage patients very quickly and, as they tire, their mental efficiency plummets, making it impossible for them to demonstrate their actual capabilities. Many patients continue to be mentally sluggish for several months after an acute event. Both fatigue and awareness of poor performances can feed the depressive tendencies experienced by many neuropsychologically impaired patients. Patients who were aware of performing poorly when their deficits were most pronounced may be reluctant to accept a reexamination for fear of reliving that previously painful experience. After the postacute stage. When the patient’s sensorium has cleared and stamina has been regained— usually within the third to sixth month after the event—an initial comprehensive neuropsychological examination can be given. In cases of minor impairment or rapid improvement, the goal may be to see how soon the patient can return to previous activities and, if so, whether temporary adaptations—such as reduced hours or a quiet environment—will be required (e.g., see Bootes and Chapparo, 2010; Wolfenden and Grace, 2009). 
When impairment is more severe, a typical early assessment may have several goals: e.g., to identify specific remediation needs and the residual capacities that can be used for remediation; to make an initial projection about the patient’s ultimate levels of impairment and improvement—and psychosocial functioning, including education and career potential; and to reevaluate competency when it had been withdrawn earlier.
Long-term planning. Examinations—for training and vocation when these seem feasible, or for level of care of patients who will probably remain socially dependent—can be done sometime within one to two years after the event. Most younger persons will benefit from a comprehensive neuropsychological examination. Shorter examinations focusing on known strengths and weaknesses may suffice for patients who are retired and living with a caregiver. Evolving conditions, e.g., degenerative diseases, tumor. Early in the course of an evolving condition when neurobehavioral problems are first suspected, the neuropsychological examination can contribute significantly to diagnosis (Feuillet et al., 2007; Gómez-Isla and Hyman, 2003; Howieson, Dame, et al., 1997; Wetter, Delis, et al., 2006). Repeated examinations may then become necessary for a variety of reasons. When seeking a definitive diagnosis and early findings were vague and suggestive of a psychological rather than a neurological origin, a second examination six to eight months after the first may answer the diagnostic questions. With questions of dementia, after 12 to 18 months the examination is more likely to be definitive (J.C. Morris, McKeel, Storandt, et al., 1991). In evaluating rate of decline as an aid to counseling and rational planning for conditions in which the rate of deterioration varies considerably between patients, such as multiple sclerosis
or Huntington’s disease, examinations at one- to two-year intervals can be useful. Timing for evaluations of the effects of treatment will vary according to how long the treatment takes and whether it is disruptive to the patient’s mental status, such as treatments by chemotherapy, radiation, or surgery for brain tumor patients.
Initial planning
The neuropsychological examination proceeds in stages. In the first stage, the examiner plans an overall approach to the problem. The first hypotheses to be tested and the techniques used to test them will depend on the examiner’s understanding and evaluation of the referral questions and on the accompanying information about the patient.
Preparatory interview
The initial interview and assessment make up the second stage. Here the examiner tentatively determines the range of functions to be examined, the extent to which psychosocial issues or emotional and personality factors should be explored, the level—of sophistication, complexity, abstraction, etc.—at which the examination should be conducted, and the limitations set by the patient’s handicaps. Administrative issues, such as fees, referrals, and formal reports to other persons or agencies, should also be discussed with the patient at this time. The first 15–20 minutes of examination time are usually used to evaluate the patient’s capacity to take tests and to ascertain how well the purpose of the examination is understood. The examiner also needs time to prepare the patient for the assessment procedures and to obtain consent. This interview may take longer than 20 minutes, particularly with anxious or slow patients, those who have a confusing history, or those whose misconceptions might compromise their cooperation. The examiner may spend the entire first session preparing a patient who fatigues rapidly and comprehends slowly, reserving testing for subsequent days when the patient feels comfortable and is refreshed. On questioning 129 examinees—mostly TBI and stroke patients—following their neuropsychological examination, Bennett-Levy, Klein-Boonschate, and their colleagues (1994) found that the participation of a relative in interviews, both introductory and for feedback, not only provided more historical information but helped clarify issues for the patient. Conversely, separate interviews are helpful in some cases, as some spouses of patients with dementia do not want to appear critical in front of their loved one and some patients are unlikely to
speak freely with a family member in the room. At least seven topics must be covered with competent patients before testing begins if the examiner wants to be assured of their full cooperation.1 (1) The purpose of the examination: Do they know the reasons for the referral, and do they have questions about it? (2) The nature of the examination: Do patients understand that the examination will be primarily concerned with cognitive functioning and that being examined by a neuropsychologist is not evidence of craziness? (3) The use to which examination information will be put: Patients must have a clear idea of who will receive a report and how it may be used. (4) Confidentiality: Competent patients must be reassured not only about the confidentiality of the examination but also that they have control over their privacy except (i) when the examination has been conducted for litigation purposes and all parties to the dispute may have access to the findings, (ii) when confidentiality is limited by law (e.g., reported intent of harm to self or a stated person), or (iii) when insurance companies paying for the examination are entitled to the report. (5) Feedback to the patient: Patients should know before the examination begins who will report the test findings and, if possible, when. (6) How the patient feels about taking the tests: This can be the most important topic of all, for unless patients feel that taking the tests is not shameful, not degrading, not a sign of weakness or childishness, not threatening their job or legal status or whatever else may be a worry, they cannot meaningfully or wholeheartedly cooperate. Moreover, the threat can be imminent when a job, or competency, or custody of children is at stake. It is then incumbent upon the examiner to give patients a clear understanding of the possible consequences of noncooperation as well as full cooperation so that they can make a realistic decision about undergoing the examination. 
(7) A brief explanation of the test procedures: Many patients are reassured by a few words about the tests they will be taking. I’ll be asking you to do a number of different kinds of tasks. Some will remind you of school because I’ll be asking questions about things you’ve already learned or I’ll give you arithmetic or memory problems to do, just like a teacher. Others will be different kinds of puzzles and games. You may find that some things I ask you to do are fun; some of the tests will be very easy and some may be so difficult you won’t even know what I’m talking about or showing you; but all of them will help me to understand better how your brain is working, what you are doing well, what difficulties you are having, and how you might be helped.
In addition, (8) when the patient is paying for the services, the (estimated in some cases) amount, method of payment, etc. should be agreed upon before the examination begins. Following principles for ethical assessment—and now, in the United States,
following the law—the neuropsychologist examiner will want to obtain the patient’s informed consent before beginning the examination (American Psychological Association, no date; S.S. Bush and Drexler, 2002; M.A. Fisher, 2008). While the patient’s cooperation following a review of these seven—or eight—points would seem to imply informed consent, many patients for whom a neuropsychological examination is requested have a limited or even no capacity to acquiesce to the examination. Others take the examination under various kinds of legal duress, such as inability to pursue a personal injury claim, threat of losing the right to make financial or medical decisions, or the risk of receiving a more severe punishment when charged with a criminal act. Moreover, the examiner can never guarantee that something in the examination or the findings will not distress the patient (e.g., a catastrophic reaction, identification of an early dementing process), nor is the examiner able to predict a priori that such an event may occur during the examination or such an outcome. Thus, in neuropsychology, informed consent is an imperative goal to approach as closely as possible. In the individual case, the neuropsychologist examiner must be cognizant of any limitations to realizing this goal and able to account for any variations from standards and requirements for informed consent. The introductory interview should include questions about when and how the problems began and changes in problems over time. Valuable information sometimes is gained by asking whether there is anything else the patient thinks the examiner should know. A young man was referred for a neuropsychological evaluation by a neurologist because of a history of cognitive problems and seizures of unknown etiology. 
When the patient was asked whether he had ever been told why he had seizures, he quickly responded “because I have neurofibromatosis.” He had not told the referring neurologist, who obviously did not give the patient a complete physical examination or obtain an adequate family history, because the neurologist had never specifically asked this question.
It is also important to learn whether the patient has had a similar examination and when it occurred. This information may determine if retesting is too soon or guide the decision of whether the same or alternative versions of tests should be used. Patients whose mental functioning is impaired may not be able to take an active, effective role in the interview. In such cases it may be necessary for a family member or close friend to participate. The patient and others need to feel free to express their opinions and to question the assumptions or conclusions voiced by the clinician. When this occurs the clinician must heed what is said since faulty assumptions and the conclusions on which they are
based can lead to misdiagnosis and inappropriate treatment, sometimes with negligible but sometimes with important consequences. The patient’s contribution to the preliminary discussion will give the examiner a fairly good idea of the level at which to conduct the examination. When beginning the examination with one of the published tests that has a section for identifying information that the examiner is expected to fill out, the examiner can ask the patient to answer the questions of date, place, birth date, education, and occupation on the answer sheets, thereby getting information about the patient’s orientation and personal awareness while doing the necessary record keeping and not asking questions for which, the patient knows, answers are in the patient’s records. In asking for the date, be alert to the patient wearing a watch that shows the date. Ask these patients not to look at their watch when responding to date questions. (I ask patients to sign and date—again without checking their watch—all drawings, thus obtaining several samples of time orientation [mdl]). Patients who are not competent may be unable to appreciate all of the initial discussion. However, the examiner should make some effort to see that each topic is covered within the limits of the patient’s comprehension and that the patient has had an opportunity to express concerns about the examination, to bring up confusing issues, and to ask questions.
Observations
Observation is the foundation of all psychological assessment. The contribution that psychological—and neuropsychological—assessment makes to the understanding of behavior lies in the evaluation and interpretation of behavioral data that, in the final analysis, represent observations of the patient. Indirect observations. These consist of statements or observations made by others or of examples of patient behavior, such as letters, constructions, or artistic productions. Grades, work proficiency ratings, and other scores and notes in records are also behavioral descriptions obtained by observational methods, although presented in a form that is more or less abstracted from the original observations. Direct observations. The psychological examination offers the opportunity to learn about patients through two kinds of direct observation. Informal observations, which the examiner registers from the moment the patient appears, provide invaluable information about almost every aspect of patient behavior: how they walk, talk, respond to new situations and new faces—or familiar ones, if this is the second or third examination—and leave-taking. Patients’ habits of dressing and grooming may be relevant, as are their attitudes about people generally, about themselves and the people in their lives. Informal observation can focus on patients’ emotional status to find out how and when they express their feelings and what is emotionally important to them. The formal—test-based—examination provides a different kind of opportunity for informal observation, for here examiners can see how patients deal with prestructured situations in which the range of available responses is restricted, while observing their interaction with
activities and requirements familiar to the examiner.
Nontest observations can be recorded either with a checklist developed as an aid for organizing observations or with one of the questionnaires developed for this purpose (see, as examples, Armengol, 2001; E. Strauss, Sherman, and Spreen, 2006, p. 57; R.L. Tate, 2010). Use of these methods can help the examiner guard against overlooking an important area that needs questioning. Psychological tests are formalized observational techniques. They are simply a means of enhancing (refining, standardizing) clinical observations. If used properly, they enable the examiner to learn more, and to learn it more quickly, about a person's psychological and neuropsychological status. When tests are misused as substitutes for rather than extensions of clinical observation, they can give at best a one-dimensional view of the patient: without other information about the patient, test scores alone will necessarily limit and potentially distort examination conclusions.

Test selection
Selection of tests for a particular patient or purpose will depend on a number of considerations. Some have to do with the goal(s) of the examination, some involve aspects of the tests, and then there are practical issues that must be addressed. The examination goals. The goal(s) of the examination will obviously contribute to test selection. A competency evaluation may begin and end with a brief mental status rating scale if it demonstrates the patient’s incompetency. At the other extreme, appropriate assessment of a premorbidly bright young TBI candidate for rehabilitation may call for tests examining every dimension of cognitive and executive functioning to determine all relevant areas of weakness and strength. For most people receiving a neuropsychological assessment, evaluation of their emotional status and how it relates to neuropathology and/or their psychosocial functioning is a necessary component of the examination. Validity and reliability. The usefulness of a neuropsychological test depends upon its psychometric properties, normative sample(s), distribution of scores, and measurement error (B.L. Brooks, Strauss, et al., 2009). Tests of cognitive abilities are getting better at both meeting reasonable criteria for validity and reliability and having appropriate norms. Many useful examination techniques that evolved out of clinical experience or research now have published score data from at least small normal control groups (Mitrushina, Boone, et al.,
2005; E. Strauss, Sherman, and Spreen, 2006). Validity is the degree to which the accumulated evidence supports the specific interpretations that the test's developers, or users, claim (Mitrushina, Boone, et al., 2005; Urbina, 2004). However, the tests used by neuropsychologists rarely measure just one cognitive skill or behavior, so different interpretations show up in the literature. For example, a digit-symbol coding task often used to measure processing speed also measures visual scanning and tracking, accurate reading of numbers and symbols, and the ability to grasp the abstract concept that two apparently unrelated items are related for the purpose of this test. One only needs to examine a patient with moderate dementia to appreciate the cognitive demands of this test. Moreover, validity will vary with the use to which a test is put: A test with good predictive validity when used to discriminate patients with Alzheimer's disease from elderly depressed persons may not identify which young TBI patients are likely to benefit from rehabilitation (Heinrichs, 1990). Besides the usual validity requirements to ensure that a test measures the brain functions or mental abilities it purports to measure, two kinds of evidence for validity hold special interest for neuropsychologists: Face validity, the quality of appearing to measure what the test is supposed to measure, becomes important when dealing with easily confused or upset patients who may reject tasks that seem nonsensical to them. In memory rehabilitation programs, tasks that appear relevant to patients' needs facilitate learning, perhaps because of the beneficial effects of motivational and emotional factors (Ehlhardt et al., 2008). Ecological validity is the degree to which a measure predicts behavior in everyday situations, such as ability to return to work or school, benefit from rehabilitation, live independently, or manage finances.
Tests and techniques used for neuropsychological assessment are meant to have real-world validity, but there are many obstacles that limit the degree to which they achieve this (Chaytor and Schmitter-Edgecombe, 2003). For example, testing in a quiet environment may not reveal the problems that patients have with concentration or memory in their natural work or home environment, with its numerous distractions. Many studies have explored how well neuropsychological tests can predict real life behavior. A meta-analysis of the ecological validity of neuropsychological tests to predict ability to work found that impairments on measures of executive functioning, intellectual functioning, and memory were the best predictors of employment status (Kalechstein et al., 2003). Another example is the usefulness of neuropsychological tests for predicting driving
difficulties of persons with dementia as some Alzheimer patients have preserved driving skills early in the course of the illness. Performances on visuospatial and attention/concentration tests were the best predictors of on-road driving ability in this group (Reger et al., 2004). Some instruments have been developed specifically for measuring real life situations. The Rivermead Behavioural Memory Test (B.A. Wilson, Greenfield, et al., 2008), designed to simulate everyday demands on memory, is one of the most commonly used. Rabin and his colleagues (2007) offer a list of many of these tests and techniques. Reliability of a test—the regularity with which it generates the same score under similar retest conditions or the regularity with which different parts of a test produce similar findings—can be ascertained only with normal control subjects. When examining brain damaged patients with cognitive deficits, test reliability becomes an important feature: repeated test performances by cognitively intact persons must be similar if that test is to measure with any degree of confidence the common kinds of change that characterize performances of brain impaired persons (i.e., improvement, deterioration, instability, fatigue effects, diurnal effects, etc.). In choosing a test for neuropsychological assessment, the test's vulnerability to the vagaries of the testing situation must also be taken into account. For example, differences in the speed at which the examiner reads a story for recall can greatly affect the amount of material a patient troubled by slowed processing retains (Shum, Murray, and Eadie, 1997). Many examiners believe that longer tests are more reliable than shorter tests. Yet adaptive tests, in which items are individually selected for a person's ability level, can be more reliable than the longer normal-range test (Embretson, 1996). The fifth edition of the Stanford-Binet (SB5) was structured with this feature in mind (Roid, 2003).
A midlevel difficulty item begins the test and the examiner proceeds forward or backward according to how the child responds. Neuropsychological tests intended for adults have not often taken advantage of adaptive features. Although the WAIS-IV has expanded the number of items preceding the standard start item, this change has increased the number of very easy items rather than moving the start item nearer the ability level of most adults. Experienced examiners will often use an adaptive approach even when the test manual does not call for it (e.g., see p. 128). Reliability of test performances by some patients with brain disorders may be practically nonexistent, given the changing course of many disorders and the vulnerability of many brain impaired patients to daily—sometimes even hourly—alterations in their level of mental efficiency (e.g., Bleiberg et al.,
1997). Because neuropsychological assessment is so often undertaken to document differences over time—improvement after surgery, for example, or further deterioration when dementia is suspected—the most useful tests can be those most sensitive to fluctuations in patient performances. Moreover, many "good" tests that do satisfy the usual statistical criteria for reliability may be of little value for neuropsychological purposes. Test batteries that generate summed or averaged scores based on a clutch of discrete tests provide another example of good reliability (the more scores, the more reliable their sum) of a score that conveys no neuropsychologically meaningful information unless it is either so low or so high that the level of the contributing scores is obvious (Darby and Walsh, 2005; Lezak, 1988b). Sensitivity and specificity. A test's sensitivity or specificity for particular conditions makes it more or less useful, depending on the purpose of the examination. The sensitivity of a test is the proportion of people with the target disorder who have a positive result. A highly sensitive test is useful for ruling out a disorder. For general screening, as when attempting to identify persons whose mentation is abnormal for whatever reason, a sensitive test such as Wechsler's Digit Symbol will be preferred. However, since poor performance on this test can result from a variety of conditions—including carpal tunnel syndrome or an inferior education—such a test will be of little value to the examiner hoping to delineate the precise nature of a patient's deficits. Rather, for understanding the components of a cognitive deficit, tests that examine specific, relatively pure, aspects of neuropsychological functions—i.e., that have high specificity—are required. Specificity is the proportion of people without the target disorder whose test scores fall within the normal range; a highly specific test is useful for confirming a disorder.
A reading test from an aphasia examination is easily passed by literate adults and thus has high specificity: failure strongly suggests a disorder. A test sensitive to unilateral inattention, when given to 100 healthy adults, will prove to be both reliable and valid, for the phenomenon is unlikely to be elicited at all. Giving the same test to patients with documented left visuospatial inattention may elicit the phenomenon in only some of the cases. If given more than once, the test might prove highly unreliable, as patients' responses to this kind of test can vary from day to day.
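The two proportions just defined are straightforward to compute from a 2 × 2 table of test results against diagnostic status. The sketch below illustrates the arithmetic; the counts are hypothetical, chosen only to make the example concrete.

```python
def sensitivity(tp, fn):
    """Proportion of people WITH the disorder who test positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of people WITHOUT the disorder who test negative."""
    return tn / (tn + fp)

# Hypothetical counts: 100 patients with the disorder, 100 controls.
tp, fn = 90, 10   # patients: 90 abnormal scores, 10 within normal limits
tn, fp = 70, 30   # controls: 70 within normal limits, 30 abnormal

print(sensitivity(tp, fn))   # 0.9 -- few patients are missed
print(specificity(tn, fp))   # 0.7 -- but 30% of controls screen positive
```

With these hypothetical counts the test misses few patients, so a negative result helps rule the disorder out; the weaker specificity means a positive result alone is not confirmatory.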
Positive predictive value takes into consideration both sensitivity and specificity by determining the probability that a person with a positive (i.e., abnormal) test performance has the target condition. Positive predictive value expresses the change from the pretest probability that the person has the target disorder—given the prevalence of the disorder for persons with the relevant characteristics (e.g., age)—to the posttest probability in light of the actual test result. As an example, the
usefulness of a VIQ-PIQ performance discrepancy in identifying left hemisphere brain damage was rejected by calculating the sensitivity, specificity, and positive predictive test values for patients who had lateralized lesions (Iverson, Mendrick, and Adams, 2004). Negative predictive value is useful for calculating the probability that a negative (within normal limits) test performance signifies the absence of a condition. Other useful calculations of the likelihood of an event are odds ratios and relative risk (Chelune, 2010). The odds ratio is the ratio of the odds of the disorder for one group (e.g., experimental group) over the odds of the disorder for the other group (e.g., control). This ratio expresses how much more likely it is that someone in the experimental group will develop the outcome as compared to someone in the control group. Relative risk involves a similar conceptual procedure in which the probability of an event in each group is compared rather than the odds. G.E. Smith, Ivnik, and Lucas (2008) give the equations for calculating the probabilities of a test's predictive accuracy. Parallel forms. Perhaps more than any other area of psychological assessment, neuropsychology requires instruments designed for repeated measurements, as so many examinations of persons with known or suspected brain damage must be repeated over time—to assess deterioration or improvement, treatment effects, and changes with age or other life circumstances. As yet, few commercially available tests, the Wechsler tests among them, have parallel forms suitable for retesting or come in a format that withstands practice effects reasonably well. Several reports (Beglinger et al., 2005; Lemay et al., 2004; McCaffrey, Duff, and Westervelt, 2000a,b; Salinsky et al., 2001) have addressed this problem by publishing test–retest data for most of the tests in more or less common use by neuropsychologists.
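The predictive values and ratios described above follow standard formulas. The sketch below (all numbers hypothetical) shows in particular how a low base rate pulls down positive predictive value even for a test with good sensitivity and specificity:

```python
def ppv(sens, spec, prevalence):
    """P(disorder | positive test), via Bayes' rule."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prevalence):
    """P(no disorder | negative test)."""
    true_neg = spec * (1 - prevalence)
    false_neg = (1 - sens) * prevalence
    return true_neg / (true_neg + false_neg)

def odds_ratio(a, b, c, d):
    """Odds of the outcome in group 1 (a with, b without)
    over the odds in group 2 (c with, d without)."""
    return (a / b) / (c / d)

def relative_risk(a, b, c, d):
    """Probability of the outcome in each group, compared as a ratio."""
    return (a / (a + b)) / (c / (c + d))

# A test with 90% sensitivity and 85% specificity, applied where
# the disorder's prevalence is only 5% (hypothetical values):
print(round(ppv(0.90, 0.85, 0.05), 2))   # most positives are false positives
print(round(npv(0.90, 0.85, 0.05), 3))   # negatives are highly trustworthy
```

At a 5% base rate, roughly three out of four positive results in this example are false positives, which is why prevalence must enter any interpretation of an abnormal score.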
While such tables do not substitute for parallel forms, they do provide the examiner with a rational basis for evaluating retest scores. Time and costs. Not least of the determinants of test selection are the practical ones of administration time (which should include scoring and report writing time as well) and cost of materials (Lezak, 2002). Prices put some tests out of reach of many neuropsychologists; when the cost is outrageously high for what is offered, the test deserves neglect. If the examiner shops around, often appropriate tests can be found in the public domain.1 Just because a test in the public domain has been offered for sale by a publisher does not mean that this test must be purchased; if it is in the public domain it can be copied freely. Administration time becomes an increasingly important issue as neuropsychological referrals grow while agency and institutional money to
pay for assessments does not keep pace or may be shrinking. Moreover, patients' time is often valuable or limited: many patients have difficulty getting away from jobs or family responsibilities for lengthy testing sessions; those who fatigue easily may not be able to maintain their usual performance level much beyond two hours. These issues of patient time and expense and of availability of neuropsychological services together recommend that examinations be kept to the essential minimum. Computer tests. Since the early days of computer testing (e.g., R. Levy and Post, 1975; see also Eyde, 1987), an expanding interest in its applications has resulted in an abundance of available tests. Computer tests offer the advantages of uniformity of administration and measurement of behavioral dimensions not possible with manual administration, most notably exact measurement of response latencies. Computer-based tests offer the potential for adaptive testing, whereby the computer changes the difficulty of the next item presented or the presentation rate of a task such as the Paced Auditory Serial Addition Test according to the patient's performance (Letz, 2003; Royan et al., 2004). Some but not all are designed to be self-administered or administered by office staff, thereby saving professional time. Many computer tests offer automatic scoring as well. A number of neuropsychological tests have been converted to a computerized form, such as the Wisconsin Card Sorting Test (e.g., R.K. Heaton and PAR Staff, 2003; see also pp. 739, 757, 760). Other commonly used tests do not readily transfer to computers without further development of computer interfaces. For example, most traditional memory tests rely on free recall measures while most computer-based memory tests use a recognition format. Implementation of voice recognition capability may allow computers to capture free recall performance as well (Poreh, 2006).
One of the most common applications is as an aid to the diagnosis of dementia at an early stage (Wild, Howieson, et al., 2008). Despite the many potential advantages of computerized tests, truly self-administered tests do not capture qualitative aspects of test performance that may have clinical relevance. Moreover, the absence of an examiner may decrease motivation to perform at one's best (Letz, 2003; Yantz and McCaffrey, 2007). Technical challenges include variability in precision of timing across computers and operating systems for reaction time measurement and the relatively rapid obsolescence of programs due to short hardware and software production runs (Letz, 2003). The decision of whether to use computer tests will depend on many factors, including what cognitive function the examiner plans to address, the patient's reaction to the computer format, and such practical considerations as test cost.
Many batteries of computer tests for cognitive testing are available. Some are general purpose batteries, such as the Cambridge Neuropsychological Test Automated Battery (CANTAB; Robbins et al., 1994), the Neurobehavioral Evaluation System 3 (NES3; Letz, Dilorio, et al., 2003), and the Automated Neuropsychological Assessment Metrics (ANAM; Bleiberg et al., 2000), to name a few. Nonstandardized assessment techniques. Occasionally a patient presents an assessment problem for which no well-standardized test is suitable (B. Caplan and Shechter, 1995). Improvising appropriate testing techniques can then tax the imagination and ingenuity of any conscientious examiner. Sometimes a suitable test can be found among the many new and often experimental techniques reported in the literature. Some of them are reviewed in this book. These experimental techniques are often inadequately standardized, or they may not test the functions they purport to test. Some may be so subject to chance error as to be undependable. Patient data may be insufficient for judging the test's utility. However, these experimental and relatively unproven tests may be useful in themselves or as a source of ideas for further innovations. Rarely can clinical examiners evaluate an unfamiliar test's patient and control data methodically, but with experience they can learn to judge reports and manuals of new tests well enough to know whether the tasks, the author's interpretation, the reported findings, and the test's reliability are reasonably suitable for their purposes. When making this kind of evaluation of a relatively untried test, clinical standards need not be as strict as research standards. A 38-year-old court reporter, an excellent stenographer and transcriber, sustained bilateral parietal bruising (seen on magnetic resonance imaging) when the train she was on derailed with an abrupt jolt. She had been sleeping on her side on a bench seat when the accident occurred.
She was confused and disoriented for the next several days. When she tried to return to work, along with the more common attentional problems associated with TBI, she found that she had great difficulty spelling phonetically irregular words and mild spelling problems with regular ones. To document her spelling complaints, she was given an informal spelling test comprising both phonologically regular and irregular words. Evaluation of her responses—39% misspellings—was consistent with other reports of well-educated patients with lexical agraphia (Beauvois and Dérouesné, 1981; Roeltgen, 2003; see Fig. 5.1, p. 129). Since the issue concerned the proportion of misspellings of common words and the difference between phonetically regular and irregular words and not the academic level of spelling, this was an instance in which an informal test served well to document the patient's problem.

Beginning with a basic test battery
Along with the examination questions, the patient's capacities and the examiner's test repertory determine what tests and assessment techniques will
be used. In an individualized examination, the examiner rarely knows exactly which tests will be given before the examination has begun. Many examiners start with a basic battery that touches upon the major dimensions of cognitive behavior (e.g., attention, memory and learning, verbal functions and academic skills, visuoperception and visual reasoning, construction, concept formation, executive functions, self-regulation and motor ability, and emotional status). They then drop some tests or choose additional tests as the examination proceeds. The patient’s strengths, limitations, and specific handicaps will determine how tests in the battery are used, which must be discarded, and which require modifications to suit the patient’s capabilities.
FIGURE 5.1 An improvised test for lexical agraphia.
As the examiner raises and tests hypotheses regarding possible diagnoses, areas of cognitive dysfunction or competence, and psychosocial or emotional contributions to the behavioral picture, it usually becomes necessary to go beyond a basic battery and use techniques relevant to this patient at this time. Many neuropsychologists use this flexible approach as needed and use routine groups of tests for particular types of disorders (Sweet, Nelson, and Moberg, 2006). Uniform minimum test batteries have been recommended for several neurological disorders, e.g., multiple sclerosis (Benedict, Fischer, et al., 2002) and Alzheimer's disease (J.C. Morris, Weintraub, et al., 2006). When redundancy in test selection is avoided, such a battery of tests will generally take three to four hours when given by an experienced examiner. It can usually be completed in one session, depending on the subject's level of cooperation and stamina, but can be given in two sittings—preferably on two different days, if the patient fatigues easily. Some referral questions take longer to answer, particularly in the case of forensic evaluations when the examiner wants to be able to answer a wide range of potential questions (Sweet, Nelson, and Moberg, 2006). This book reviews a number of paper-and-pencil tests that patients can take by themselves. These tests may be given by clerical or nursing staff; some of them may have computerized administrations available. Some of these tests were developed as timed tests: time taken can provide useful information. However, sometimes it is more important to find out what the patient can do regardless of time; the test can then be taken either untimed, or the person proctoring the test can note how much was done within the time limit but allow the patient to proceed to the end of the test. For outpatients who come from a distance or may have tight time schedules, it is often impractical to expect them to be available for a lengthy examination.
One time saving device is to mail a background questionnaire to the patient with instructions to bring it to the examination. In some cases the interview time can be cut in half. In deciding when to continue testing with more specialized assessment techniques or to discontinue, it is important to keep in mind that a negative (i.e., within normal limits, not abnormal) performance does not rule out brain pathology; it only demonstrates which functions are at least reasonably intact. However, when a patient’s test and interview behavior are within normal limits, the examiner cannot continue looking indefinitely for evidence of a deficit that may not be there. Rather, a good history, keen observation, a well-founded understanding of patterns of neurological and psychiatric dysfunction, and common sense should tell the examiner when to stop—or to keep looking.
Test selection for research
Of course, when following a research protocol, the examiner is not free to exercise the flexibility and inventiveness that characterize the selection and presentation of test materials in a patient-centered clinical examination. For research purposes, the prime consideration in selecting examination techniques is whether they will effectively test the hypotheses or demonstrate the phenomenon in question (e.g., see Fischer, Priore, et al., 2000). Other important issues in developing a research battery include practicality, time, and the appropriateness of the instruments for the population under consideration, especially when participants will be examined repeatedly. Since the research investigator cannot change instruments or procedures in midstream without losing or confounding data, selection of a research battery requires a great deal of care. In developing the Minimal Assessment of Cognitive Function in Multiple Sclerosis (MACFIMS), the working group noted the importance of flexibility to allow for replacing the less satisfactory tests with newly developed tests that may be more suitable (Fischer, Rudick, et al., 1999). Just as a basic battery can be modified for individuals in the clinical examination, so too tests can be added or subtracted depending on research needs. Moreover, since a research patient may also be receiving clinical attention, tests specific for the patient's condition can be added to a research battery as the patient's needs might require.

A note on ready-made batteries
The popularity of ready-made batteries attests to the need for neuropsychological testing and to a lack of knowledge among neuropsychologically inexperienced psychologists about how to do it (Lezak, 2002; Sweet, Moberg, and Westergaard, 1996). The most popular batteries extend the scope of the examination beyond the barely minimal neuropsychological examination (which may consist of one of the Wechsler Intelligence Scale batteries, a drawing test, and parts or all of a published memory battery). They offer normative data from similar populations across a number of different tests (e.g., see Mitrushina, Boone, et al., 2005). Ready-made batteries can be invaluable in research programs requiring well-standardized tests. When batteries are used as directed, most patients undergo more testing than is necessary but not enough to satisfy the examination questions specific to their problems. Also, like most psychological tests, ready-made batteries are not geared to patients with handicaps. The patient with a significant perceptual
or motor disability may not be able to perform major portions of the prescribed tests, in which case the functions normally measured by the unusable test items remain unexamined. However, these batteries do acquaint the inexperienced examiner with a variety of tests and with the importance of evaluating many different behaviors when doing neuropsychological testing. They can provide a good starting place for some newcomers to the field who may then expand their test repertory and introduce variations into their administration procedures as they gain experience and develop their own point of view. A ready-made battery may also seem to confer neuropsychological competence on its users. A questionable or outmoded test that has been included in a popular battery can give false complacency to naive examiners, particularly if it has accrued a long reference trail (e.g., see pp. 547–548 regarding the Aphasia Screening Test, which the author—Joseph Wepman—repudiated in the 1970s). No battery can substitute for knowledge—about patients, medical and psychological conditions, the nature of cognition and psychosocial conduct, and how to use tests and measurement techniques. Batteries do not render diagnostic opinions or behavioral descriptions; clinicians do. Without the necessary knowledge, clinicians cannot form reliably valid opinions, no matter what battery they use.

Hypothesis testing
This stage of the examination usually has many steps. It begins as the data of the initial examination answer initial questions, raise new ones, and may shift the focus from one kind of question to another or from one set of impaired functions that at first appeared to be of critical importance in understanding the patient’s complaints to another set of functions. Hypotheses can be tested in one or more of several ways: by bringing in the appropriate tests (see below), by testing the limits, and by seeking more information about the patient’s history or current functioning. Hypothesis testing may also involve changes in the examination plan, in the pace at which the examination is conducted, and in the techniques used. Changes in the procedures and shifts in focus may be made in the course of the examination. At any stage of the examination the examiner may decide that more medical or social information about the patient is needed, that it would be more appropriate to observe rather than test the patient, or that another person should be interviewed, such as a complaining spouse or an intact sibling, for adequate understanding of the patient’s condition. This flexible approach enables the examiner to generate multistage, serial hypotheses for identifying subtle or discrete dysfunctions or to make fine
diagnostic or etiologic discriminations. Without knowing why a patient has a particular difficulty, the examiner cannot predict the circumstances in which it will show up. Since most neuropsychological examination techniques in clinical use elicit complex responses, the determination of the specific impairments that underlie any given lowered performance becomes an important part of neuropsychological evaluations. This determination is usually done by setting up a general hypothesis and systematically testing it for each relevant function. If, for example, the examiner hypothesizes that a patient’s slow performance on the Block Design test of the Wechsler Intelligence Scales (WIS-A) battery was due to general slowing, other timed performances must be examined to see if the hypothesis holds. A finding that the patient is also slow on all other timed tests would give strong support to the hypothesis. It would not, however, answer the question of whether other deficits also contributed to the low Block Design score. Thus, to find out just what defective functions or capacities entered into the impaired performance requires additional analyses. This is done by looking at the component functions that might be contributing to the phenomenon of interest in other parts of the patient’s performance (e.g., house drawing, design copying, for evidence of a problem with construction; other timed tests to determine whether slowing occurs generally) in which one of the variables under examination plays no role and all other conditions are equal. If the patient performs poorly on the second task as well as the first, then the hypothesis that poor performance on the first task is multiply determined cannot be rejected. When the patient does well on the task used to examine the alternative variable (e.g., visuospatial construction), the hypothesis that the alternative variable also contributes to the phenomenon of interest can be rejected.
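The comparative logic just described can be caricatured in a few lines of code. The test names, scores, and cutoff below are hypothetical, chosen only to mirror the Block Design example; in practice this reasoning is clinical judgment, not a mechanical rule.

```python
# Hypothetical standardized scores (z-scores; negative = below average).
scores = {
    "block_design": -2.0,         # timed and constructional
    "digit_symbol": -1.8,         # timed, minimal construction demand
    "complex_figure_copy": -0.2,  # constructional, not speed-dependent
}

CUTOFF = -1.5  # hypothetical threshold for calling a score "impaired"

# Hypothesis: general slowing explains the low Block Design score,
# so all timed performances should be depressed.
slow_on_all_timed_tests = all(
    scores[name] <= CUTOFF for name in ("block_design", "digit_symbol"))

# Alternative contributor: a constructional deficit, checked on a task
# in which speed plays no role.
construction_intact = scores["complex_figure_copy"] > CUTOFF

if slow_on_all_timed_tests and construction_intact:
    conclusion = "general slowing; constructional contribution rejected"
else:
    conclusion = "poor performance may be multiply determined"

print(conclusion)
```

The point of the sketch is the structure of the inference: each alternative contributor is examined on a task where the other variables play no role, so that a good performance there lets that contributor be rejected.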
This example illustrates the method of double dissociation for identifying which components of complex cognitive activities are impaired and which are preserved (E. Goldberg, 2001, p. 52; Weiskrantz, 1991; see also p. 171). A double dissociation exists when two functions are found to be independently affected, such as general slowing and visuospatial constructions in this example. These conceptual procedures can lead to diagnostic impressions and to the identification of specific deficits. In clinical practice, examiners typically do not formalize these procedures or spell them out in detail but apply them intuitively. Yet, whether used wittingly or unwittingly, this conceptual framework underlies much of the diagnostic enterprise and behavioral analysis in individualized neuropsychological assessment.

Selection of additional tests
The addition of specialized tests depends on continuing formulation and reformulation of hypotheses as new data answer some questions and raise others. Hypotheses involving differentiation of learning from retrieval, for instance, will dictate the use of techniques for assessing learning when
retrieval demands are minimal, such as with recognition formats. Finer-grained hypotheses concerning the content of the material to be learned (e.g., meaningful vs. meaningless, concrete vs. abstract) or the modality in which it is presented will require different tests, modifications of existing tests, or the innovative use of relevant materials in an appropriate test format. Every function can be examined across modalities and in systematically varied formats. In each case the examiner can best determine what particular combinations of modality, content, and format are needed to test the pertinent hypotheses. The examination of a 40-year-old unemployed nursing assistant illustrates the hypothesis testing approach. While seeing a psychiatrist for a sleep disorder, she complained of difficulty learning and remembering medical procedures. She had made an aborted suicide attempt three years earlier using carbon monoxide. She worked only sporadically after this. The question of a residual memory impairment due to CO poisoning prompted referral for neuropsychological assessment. The planned examination focused on memory and learning. In the introductory interview she said that her mind seemed to have "slowed down" and she often felt so disoriented that she had become dependent on her husband to drive her to unfamiliar places. She also reported two head injuries, one as a child when struck by a boulder without loss of consciousness. Recently, while hyperventilating, she fell on an andiron and was "knocked out." She performed well on every verbal (span, stories, word list, working and incidental memory) and visual memory (design recall) test. However, span of immediate word recall was decreased and she had difficulty subtracting serial threes which, in light of her complaints of mental slowing, suggested a mild attentional problem. The original hypothesis of memory disorder was not supported; her complaints and failures called for another hypothesis to be tested.
A review of her performances showed that, despite average scores on verbal skill tests and a high average score on a visual reasoning task (Picture Completion), her Block Design scores were in the low average range and her copy of the Complex Figure was defective due to elongation, one omitted line, and poor detailing (although both recall trials were at an average level). These poor performances, taken with her complaints of spatial disorientation, suggested a visuospatial problem. To explore this hypothesis, further testing was required. The originally planned examination, which had included a test of verbal retrieval and one for sequential digit learning, was discontinued. Instead, several other tests specific for visuospatial deficits were given. Scores on these tests ranged from low average to borderline defective. Her free drawing of a house was childishly crude, with markedly distorted perspective. Thus a deficit pattern emerged that contrasted with her excellent memory and learning abilities and generally average to high average scores on tests not requiring visuospatial competence. The available history offered no conclusive etiology for her attentional and visuospatial deficits but, given her reports of head injury, TBI was a likely candidate.

An aid to test selection: a compendium of tests and assessment techniques, Chapters 9–20
The last 12 chapters of this book review most cognitive and personality tests in common use, as well as many less common tests. These are tests and assessment techniques that are particularly well suited to the clinical neuropsychological examination. Clinical examiners can employ the assessment techniques presented in these chapters for most neuropsychological
assessment purposes in most kinds of work settings. Most of these tests have been standardized or used experimentally so that reports of the performances of control subjects are available (see Mitrushina, Boone, et al., 2005; E. Strauss, Sherman, and Spreen, 2006). However, the normative populations and control groups for many of these tests may differ from individual patients on critical variables such as age, education, or cultural background, requiring caution and a good deal of "test-wiseness" on the part of the examiner who attempts to extrapolate from unsuitable norms. In addition to English language tests, this book reviews some tests in Spanish and French because of their common use in North America.

Concluding the examination
The final stage, of course, has to do with concluding the examination as hypotheses are supported or rejected, and the examiner answers the salient diagnostic and descriptive questions or explains why they cannot be answered (e.g., at this time, by these means). When it appears that assessment procedures are making patients aware of deficits, or distressing patients because they assume—rightly or wrongly—that they performed poorly, the examiner can end the examination with a relatively easy task, leaving the patient with some sense of success. The conclusions should also lead to recommendations for improving, or at least making the most of, the patient's condition and situation, and for whatever follow-up contacts may be needed. The examination is incomplete until the findings have been reported. Ideally, two kinds of reports are provided: one as feedback to patients and whomever they choose to hear it; the other written for the referral source and, if the examination is performed in an institution such as a hospital, for the institution's records.

The interpretive interview. A most important yet sometimes neglected part of the neuropsychological examination is the follow-up interview that provides patients with an understanding of their problems and of how their neuropsychological status relates to their future, including recommendations about how to ameliorate or compensate for their difficulties. Feedback generally is most useful when patients bring their closest family member(s) or companion(s), as these people almost always need an understanding of, and seek guidance for dealing with, the patient's problems. This interview should take place after the examiner has had time to review and integrate the examination findings (which include interview observations) with the history, presenting problems, and examination objectives. Patients who have been provided an interpretation of the examination findings are more likely to view the
examination experience positively than those not receiving it (Bennett-Levy, Klein-Boonschate, et al., 1994). By briefly describing each test, discussing the patient's performance on it, indicating that individuals who have difficulty on some test might experience a particular everyday problem, and asking if that is the case for the patient, the clinician can elicit useful validating information. This interview can also help patients understand the events that brought them to a neuropsychological examination. The interpretive interview can in itself be part of the treatment process, a means of allaying some anxieties, conveying information about strengths as well as weaknesses to the patient, and providing directions for further diagnostic procedures if necessary or for treatment. Interpretations of the patient's performance(s) that are not validated by the patient or family members may lead the clinician in a new direction. In either case, useful information has been obtained by the clinician while the patient has been given the opportunity to gain insight into the nature of the presenting problems or—at the very least—to understand why the various tests were given and what to do next. Often counseling will be provided in the course of the interpretive interview, usually as recommendations to help with specific problems. For example, for patients with a reduced auditory span, the examiner may tell the patient, "When unsure of what you've heard, ask for a repetition, or repeat or paraphrase the speaker" (giving examples of how to do this and explaining paraphrasing as needed). Recommending that "In a dispute over who said what in the course of a family conversation, your recall is probably the incorrect one" can help reduce the common minor conflicts and mutual irritations that arise when one family member processes ongoing conversation poorly.
For family members the examiner advises, “Speak slowly and in short phrases, pause between phrases, and check on the accuracy of what the patient has grasped from the conversation.” Occasionally, in reviewing examination data, the examiner will discover some omissions—in the history, in following to completion a line of hypothesis testing—and will use some of this interview time to collect the needed additional information. In this case, and sometimes when informal counseling has begun, a second or even a third interpretive interview will be necessary. Most referral sources—physicians, the patient’s lawyer, a rehabilitation team—welcome having the examiner do this follow-up interview. In some instances, such as referral from a clinician already counseling the patient or treating a psychiatric disorder, referring persons may want to review the
examination findings with their patients themselves. Neuropsychological examiners need to discuss this issue with referring clinicians so that patients can learn in the preparatory interview who will report the findings to them. Some other referrals, such as those made by a personal injury defense attorney, do not offer a ready solution to the question of who does the follow-up: an examiner hired by persons viewed by the patient as inimical to his or her interests is not in a position to offer counsel or even, in some instances, to reveal the findings. In these cases the examiner can ask the referring attorney to make sure that the patient's physician or the psychologist used by the patient's attorney receives a copy of the report with a request to discuss the findings, conclusions, and recommendations with the patient. This solution is not always successful. It is an attempt to avoid what I call "hit-and-run" examinations in which patients are expected to expose their frailties in an often arduous examination without receiving even an inkling of how they did, what the examiner thought of them, or what information came out that could be useful to them in the conduct of their lives [mdl].

The report
Like the examination, the written report needs to be appropriate for the circumstances. A brief bedside examination may require nothing more than a chart note. A complex diagnostic problem on which a patient's employment depends would require a much more thorough and explanatory report, always geared to the intended audience.

Communication style. The examination report is the formal communication and sometimes the sole record concerning a patient's neuropsychological status. Its importance cannot be overstated. Significant decisions affecting the patient's opportunities, health, civil status, even financial well-being, may rest on the observations and conclusions given in the report. Moreover, in many cases, people of varying levels of sophistication and knowledgeability will be acting on their understanding of what the report communicates. Thus, more than most other documents, the writing style must be readily comprehensible and to the point. Three rules can lead to a clear, direct, understandable communication style. (1) The grandmother rule asks the examiner, in so far as possible, to use words and expressions "your grandmother would understand." This rule forces the examiner to avoid professional/clinical jargon and technical expressions. When technical terms are necessary, they can first be defined; e.g., "Mr. X has diminished awareness of objects on the left side of space (left homonymous hemianopsia)." (2) The Shakespeare rule advises that by using commonly
understood words and expressions, any behavior, emotion, or human condition can be aptly described; Shakespeare did it and so can you. (3) Don't overwrite. If one word can do the work of two, use one; if a two-syllable word means the same as a three- or four-syllable word, use the shorter word—it will more likely be understood by more people.

Report content. In addition to the subject's name, age, sex, and relevant identifying data (e.g., Social Security # if applying for Social Security benefits; patient record # if in a medical center, etc.), all reports must provide the examination date, the name of the examiner, the tests and procedures used in the examination, and, if a technician administered the tests, who did so. As a general rule, the report should include the purpose of the examination and the referral source—the exception being reports of research or repeated examinations. Although these directives would seem obvious to most examiners, not infrequently a report will be missing one or more of these necessary data bits. Following the introductory paragraph, most reports will have six distinct sections: (1) review of the patient's history; (2) summary of the patient's complaints; (3) description of the patient as observed by the examiner; (4) description of test performances; (5) integrated summary of all examination data with conclusions (diagnostic, prognostic, evaluative, as relevant); and (6) recommendations—which can be for care or treatment, for further study, regarding family or employment issues, for case disposition, and about what kind of feedback to provide and to whom. Some neuropsychologists also include diagnostic codes, using either the psychiatric system (American Psychiatric Association: Diagnostic and Statistical Manual of Mental Disorders [DSM], 2000) or the ICD-9-CM medical system for neurologists (American Academy of Neurology, 2004). A seventh section providing raw test scores may be added in some circumstances (see pp. 135–136).
Brief reports documenting a research examination, a screening examination, or repeated testing for treatment evaluation or tracking the course of a disorder may omit many of these sections, especially when, for example, an initial examination report contained the history, or when test scores are the only data needed for a research project. However, recipients of all reports—including research—will benefit from at least a brief description of the subject (e.g., alert? careless? moody?) and test-taking attitude. All clinical reports, not excepting repeat examinations, should include current recommendations, even if they are identical to those given in the previous examination. A report contains what needs to be known about the examination of a
particular person. Its length and scope will be mostly determined by (1) the purpose of the examination; (2) the relevant examination issues; and (3) who will be reading the report (see Armengol et al., 2001, for an in-depth presentation of neuropsychological report writing).

Examination purpose. More than any other aspect of the examination, its purpose will determine the report's length which, in turn, depends on its breadth and depth of detail. When the patient's history and current situation have previously been documented, the report may consist of short answers to simple, focused questions. Thus, the findings of a dementia reevaluation or a treatment follow-up can usually be briefly described and summarized. The longest reports will be those prepared for litigation, most usually for a civil suit claiming compensation for neuropsychological impairment due to an injury. In these cases, the report will probably be scrutinized by adverse experts, and may be subjected to cross-examination in court (Derby, 2001; Greiffenstein and Cohen, 2005). All information on which the examiner's conclusions and recommendations are based needs to be reported in appropriate detail. Thus these reports should include all relevant historical and medical/psychiatric information, and a full description of the claimant's current situation including—again, as relevant—activities, limitations, responsibilities, and relationships. Test performances and anomalous behaviors observed during the examination on which conclusions are based should, as far as possible, be described so that they are comprehensible to the lay person. In summarizing the findings—which include nontest data from observations, history, and the patient's file(s) as well as test data—the examiner builds the foundation for the conclusions. Relevance is key to readable, usable reports. When a report is cluttered with unneeded information, what is relevant to the case can be obscured or dismissed.
Relevance also helps trim reports by reducing repetition. Examiners preparing a report on someone involved in litigation will usually have received a great deal of information about that person, such as medical records, school and work histories, and—particularly in criminal cases—a wealth of psychosocial information. Some examiners pad their reports with detailed summaries of all the medical and other reports they have received, regardless of their relevance to the case issues. Yet these data will also have been provided to all other interested parties, which makes this part of the report not only redundant but also distracting, drawing attention away from the relevant neuropsychological issues. When preparing a report for persons who already have the same set of medical, social, occupational, etc. files as the examiner (e.g., opposing counsel,
other expert witnesses), the examiner can state, for example, that "the patient's [social, medical, occupational, etc.] history is in [specified] records, or reported by [specified], and will not be repeated here." This saves time for the examiner and money for the client—or the taxpayer, when the examination is paid for by an indigent defense fund—while producing a more user-friendly document. When the reader is referred to the patient's file or prior examination reports for most of the background information, the examiner is free to dwell on those specific issues in the patient's history or experiences which provide the context for understanding the examination findings and conclusions. The length of most strictly clinical reports falls between these two extremes, as most clinical purposes—diagnostic? postdiagnosis planning?—require a report which presents conclusions and recommendations and provides the basis for them. Yet, since it is unlikely that such a report will be subject to hostile confrontation, the level of detail can be lower while the amount of referencing to already existing documents can be higher.

The relevant issues. Many referrals will center on one issue: e.g., return to school or work? early dementia? concussion residuals? candidate for rehabilitation? Others may ask two or more questions: e.g., does this person suffer residual damage from a TBI and, if so, to what extent will it compromise work capacity? What are this MS patient's cognitive deficits, and do they contribute to family problems and, if so, how? While the examination may be planned and focused on answering the referral question, it is incumbent on the examiner to identify and examine, as far as possible, other issues affecting the patient's well-being and functioning.
Thus a report may include both the neuropsychological findings requested in the referral and a description and discussion of the patient's ability to continue working, to live independently, or to cope with a depressive reaction, although this information was not originally requested. What is relevant for the report will also depend on the patient's situation and condition, as evaluated by the examiner's judgment. An early childhood head injury needs to be documented and taken into account when examining a teenager having difficulty adapting to high school, but early childhood history is irrelevant for understanding an elderly stroke patient who had a successful career and stable marriage. However, should the elderly patient have led an erratic existence, in and out of relationships, low-level jobs, and the county jail, knowledge of an early head injury may help make sense of post-stroke behavior and deserves mention in the report.

Who reads the report? It is important to appreciate everyone who may have access to a report. Although it is typically sent to the referral source, it may be
shared with persons of more or less psychological sophistication, including the subject. The examiner can usually determine where and how the report will be used from the purpose of the examination and the referral source. In anticipating the potential audience for any given report, the examiner can present its substance—and especially the summary, conclusions, and recommendations—at a level of succinctness or elaboration, of conceptualization or practicality, and of generality or detail that will best suit both the intended and the potentially unintended audience. Finn and his colleagues (2001) present the findings of an extensive survey of lawyers, physician specialists (e.g., pediatricians, psychiatrists), and clinical neuropsychologists regarding what each professional group looks for in a neuropsychological report. Only a few referring persons are likely to be familiar with neuropsychological terms and concepts. These include physicians in neurological or rehabilitation specializations, rehabilitation therapists, and lawyers who specialize in brain damage cases. Neuropsychologists cannot assume that other referring physicians, psychologists, or education specialists will have an accurate understanding of neuropsychological terminology or concepts, although the general level of neuropsychological sophistication among these professionals is rapidly rising. Moreover, neuropsychologists must be aware that, in many instances, reports may be given to patients and their families and—with patient or guardian agreement—to educators, employers, mental health workers, relatively untrained persons working in care facilities, etc. For cases in civil litigation, consent to release of the report may be implied, so that it goes not only to persons specifically identified in a release signed by the patient but may be seen by many others, including judge and jury, opposing counsel, and a host of professional experts.
The range of potential readers can be even broader in some criminal cases, as all of the above may be assumed plus social workers, criminal investigators, mitigation experts, and others brought in by counsel. The potential readership should determine the extent to which technical data and terms are used. A report for use only within a dementia clinic, for example, can be written at a highly technical level. A report from this clinic sent to a community care facility or nurse practitioner would include few if any technical terms and, if the report is to be useful, technical terms would be defined in everyday language. If the examiner is in doubt about how technical the writing should be—when providing a report for a legal proceeding, for example—this question can usually be resolved in a discussion with the referring person. When the report may be available to unknown persons who
could have decision-making responsibilities for the patient, full descriptions in everyday language should substitute for technical terms and concepts.

Reporting test performances. Most clinical reports will include both descriptions of test performances, as pertinent, and test performance data. The usefulness of each kind of information will vary with the test as well as the purpose of the examination. For most clinical purposes, how the subject goes about responding to test instructions, performing tasks, and reacting to the situation can provide useful information that may aid in reaching a diagnosis, help with planning for the patient, or even clarify family or workplace problems. Clinical judgment can best determine what and how much descriptive information is called for and may be useful. With respect to reporting test data, disagreement among neuropsychologists centers on the question of including scores in reports. Freides (1993) initially raised the issue when he opined that scores should be appended to reports, a position countered by Naugle and McSweeney (1995, 1996). In 2001, Pieniadz and Kelland reported that, of 78 neuropsychologists, 64% did not "routinely append test data" to their reports. They concluded that "The decision about whether and how to report scores should be based on the complex interaction of several factors," including the source and nature of the referral, the examiner's "theoretical bias," and test standardization characteristics (p. 139)—excepting, of course, when the neuropsychologist is required to release them by court order. The usefulness of reported scores is limited to persons sufficiently knowledgeable about them to understand both what information they convey and what they do not convey. Reported scores will be most useful to knowledgeable clinicians who do assessments, treatment, planning, and consulting on behalf of their patients.
Appended test scores are especially useful to clinicians following a patient’s course with repeated examinations, or when data needs to be shared with another neuropsychologist on the patient’s behalf. However, because these reports will often be available to patients, their families, and other interested but not knowledgeable persons, they can easily be misinterpreted (see below). For this reason, when appending scores to a report they can be given as raw scores, or other raw data—such as seconds to completion or number of taps per minute [dbh]. While meaningful to knowledgeable clinicians, test data in this form reduces the likelihood of misinterpretation by lay persons. Some neuropsychologists question the practice of appending scores to a report because scores can be confusing and misleading for the many recipients of test reports who are teachers, guidance counselors, physicians, and lawyers
lacking training in the niceties of psychometrics. One important source of faulty communication is variability in the size of assigned standard deviations (see Fig. 6.3, p. 166: note how the Army General Classification Test [AGCT] and Wechsler Deviation IQ scores differ at different levels). Thus, a score of 110 is at the 75th %ile (at the low edge of the high average range) when SD = 15, but when SD = 10 the same score will be at approximately the 84th %ile (high in the high average range). Unless the persons who receive the test report are statistically sophisticated and knowledgeable about the scaling idiosyncrasies of test makers, it is unlikely that they will notice or appreciate these kinds of discrepancies. Another difficulty in reporting scores lies in the statistically naive person’s natural assumption that if one measurement is larger than another, there is a difference in the quantity of whatever is being measured. Unfortunately, few persons unschooled in statistics understand measurement error; they do not realize that two different numbers need not stand for different quantities but may be chance variations in the measurement of the same quantity. Laymen who see a report listing a WIS-A Similarities score of 9 and an Arithmetic score of 11 are likely to draw the probably erroneous conclusion that the subject does better in mathematics than in verbal reasoning. Since most score differences of this magnitude are chance variations, it is more likely that the subject is equally capable in both areas. Further, there has been a tendency, both within school systems and in the culture at large, to reify test scores (Lezak, 1988b). In many schools, this has too often resulted in the arbitrary and rigid sorting of children into different parts of a classroom, into different ability level classes, and onto different vocational tracks. In its extreme form, reification of test scores has provided a predominant frame of reference for evaluating people generally. 
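Returning to the scaling discrepancy described above, the arithmetic can be sketched in a few lines of Python. This is a hypothetical illustration (the function name is ours), assuming normally distributed scores with a mean of 100:

```python
import math

def percentile_rank(score, mean=100.0, sd=15.0):
    """Percentile rank of a standard score, assuming a normal
    distribution with the given mean and standard deviation."""
    z = (score - mean) / sd
    # Cumulative normal probability computed via the error function
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The same standard score of 110 lands at different percentiles
# depending on the scale's standard deviation:
print(round(percentile_rank(110, sd=15)))  # Wechsler Deviation IQ scale: 75
print(round(percentile_rank(110, sd=10)))  # a scale scored with SD = 10: 84
```

The same function applied to Wechsler scaled scores (mean 10, SD 3) shows why scores of 12 and 13 fall near the 75th and 84th percentiles, respectively.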
Such reification of test scores is usually heard in remarks that take some real or supposed IQ score to indicate an individual's personal or social worth. "Sam couldn't have more than an 'IQ' of 80" means that the speaker thinks Sam is socially incompetent. "My Suzy's 'IQ' is 160!" is a statement of pride. Although these numerical metaphors presumably are meaningful for the people who use them, the meanings are not standardized or objective, nor do they bear any necessary relationship to the meaning test-makers define for the scores in their scoring systems. Thus, the communication of numerical test scores, particularly if the test-maker has labeled them "IQ" scores, becomes an uncertain business, since examiners have no way of knowing what kind of meaning their readers have already attached to mental test scores. The many difficulties inherent in test score reporting can be avoided by
writing about test performances in terms of the commonly accepted classification of ability levels (PsychCorp, 2008b; Wechsler, 1997a). In the standard classification system, each ability level represents a statistically defined range of scores. Both percentile scores and standard scores can be classified in terms of ability level (see Table 5.1). Test performances communicated in terms of ability levels have generally accepted and relatively clear meanings. When in doubt as to whether such classifications as average, high average, and so on make sense to the reader, the examiner can qualify them with a statement about the percentile range they represent, for the public generally understands the meaning of percentiles. For example, in reporting Wechsler test scores of 12 and 13, the examiner can say, "The patient's performance on [the particular tests] was within the high average ability level, which is between approximately the 75th and 91st percentiles." One caveat to the use of percentiles should be mentioned. The terms percent (as in percent correct) and percentile (rank) are not interchangeable and are sometimes not clearly distinguished conceptually by the public. During a deposition, a lawyer essentially made this statement: "Mr. X performed at the 50th percentile on this test and you said that was an average performance. If I'd got 50% on any test in school that would have been considered poor performance."
What the lawyer failed to realize is that percent correct on a test is related to variables such as the difficulty of the items and the test-taker's knowledge and psychological and physical state at the time of administration. If a test is easy, 80% correct could be at the 50th %ile, with half of the class scoring at this level or above. If a test is difficult, 25% correct could be at the 50th %ile, with only half of the class making 25% or more correct responses. Percentile (rank) refers to the position of a score in the distribution of scores. On every test, regardless of test and test-taker variables, the 50th percentile is always the middle score (or median [Mdn]) in the distribution.

TABLE 5.1 Classification of Ability Levels
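The percent/percentile distinction can be illustrated with a small score distribution. The class scores below are invented for illustration:

```python
# Percent correct vs. percentile rank: on a difficult test, a low
# percent-correct score can still be the middle (median) score.
# Hypothetical percent-correct scores for a class of nine students:
class_scores = [10, 15, 20, 22, 25, 30, 38, 45, 60]

def percentile_in_class(value, scores):
    """Percentage of scores in the distribution falling below `value`."""
    below = sum(s < value for s in scores)
    return 100.0 * below / len(scores)

# The median score is 25% correct -- a "failing" grade by schoolroom
# standards, yet it sits at the exact middle of this distribution:
median = sorted(class_scores)[len(class_scores) // 2]
print(median)                             # 25
print(percentile_in_class(25, class_scores))  # about 44% of scores fall below it
```

A student answering 25% of the items correctly here would thus be reported at roughly the 50th percentile, which is precisely the point the lawyer missed.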
Converting scores to ability levels also enables the examiner to report clusters of scores that may be one or two—or, in the case of tests with fine-grained scales, several—score points apart but that probably represent a normal variation of scores around a single ability level. Thus, in dealing with the performance of a patient who receives scaled scores of 8, 9, or 10 on each Wechsler test involving verbal skills, the examiner can report that "The patient's verbal skill level is average." Significant performance discrepancies can also be readily noted. Should a patient achieve average scores on verbal tests but low average to borderline scores on constructional tasks, the examiner can note both the levels of the different clusters of test scores and the likelihood that discrepancies between these levels approach or reach significance.

PROCEDURAL CONSIDERATIONS IN NEUROPSYCHOLOGICAL ASSESSMENT
Testing Issues

Order of test presentation
In an examination tailored to the patient's needs, the examiner varies the testing sequence to ensure the patient's maximum productivity (e.g., see Benedict, Fischer, et al., 2002). A relatively easy test rather than an anxiety-producing one at the beginning is a good way to help the patient feel comfortable. However, tests that the examiner suspects will be difficult for a particular patient can be given near the beginning of a testing session, when the patient is least fatigued;
or a test that has taxed or discouraged the patient can be followed by one on which the patient can relax or feel successful, so that the patient does not experience one failure after another. Overall, order of presentation does not have a large effect. Neuger and his colleagues (1981) noted a single exception to this rule when they gave a battery containing many different tests: a slight slowing occurred on a test of manual speed, Finger Tapping, when it was administered later in the day. No important effects appeared when the WAIS-III and the Wechsler Memory Scale-III (WMS-III) batteries were given in different orders; the most pronounced score difference was on Digit-Symbol Coding when the WAIS-III was given last, an effect that could be due to fatigue (Zhu and Tulsky, 2000). However, an examiner who is accustomed to a specific presentation sequence may feel somewhat uncomfortable and less efficient if it is varied. An important consideration in sequencing the tests is the need to keep the patient busy during the interval preceding delayed trials on learning tests. A format which makes the most economical use of examination time varies succeeding tasks with respect to the modalities examined and difficulty levels while filling in these delay periods. The choice of these interval tasks should rest in part on whether high or low levels of potential interference are desired: if the question of interference susceptibility is important, the examiner may select a vocabulary or verbal fluency test as an interference task for word list learning; otherwise, selection of a word-generating task should be avoided at this point in the examination.

Testing the limits
Knowledge of the patient’s capacities can be extended by going beyond the standard procedures of a test. The WIS-A oral Arithmetic questions provide a good example. When patients fail the more difficult items because of an auditory span, concentration, or mental tracking problem—which becomes obvious when patients ask to have the question repeated or repeat question elements incorrectly—the examiner still does not know whether they understand the problem, can perform the calculations correctly, or know what operations are called for. If the examiner stops at the point at which these patients fail the requisite number of items, without further exploration, any conclusion drawn about the patient’s arithmetic ability is questionable. In cases like this, arithmetic ability can easily be tested further by providing pencil and paper and repeating the failed items. Some patients can do the problems once they have written the elements down; others do not perform any better with paper than without it, but their written work documents the nature of their difficulty.
Testing the limits does not affect the standard test procedures or scoring. It is done only after the test or test item in question has been completed according
to standard test instructions; it serves as a guide to clinical interpretation. This method not only preserves the statistical and normative meaning of the test scores but it also can afford interesting and often important information about the patient’s functioning. For example, a patient who achieves a Wechsler Arithmetic score in the borderline ability range on the standard presentation of the test and who solves all the problems quickly and correctly at a superior level of functioning after writing down the elements of a problem, demonstrates a crippling auditory span or mental tracking problem with an intact capacity to handle quite complex computational problems as long as they can be seen. From the test score alone, one might conclude that the patient’s competency to handle sizeable sums of money is questionable; on the basis of the more complete examination of arithmetic ability, the patient might be encouraged to continue bookkeeping and other arithmetic-dependent activities.
Testing the limits can be done with any test. The limits should be tested whenever there is suspicion that an impairment of some function other than the one under consideration is interfering with an adequate demonstration of that function. Imaginative and careful limit testing can provide a better understanding of the extent to which a function or functional system is impaired and the impact this impairment may have on related functional systems (R.F. Cohen and Mapou, 1988). Techniques that Edith Kaplan and her colleagues devised can serve as models for expanded assessments generally (E. Kaplan, 1988; E. Kaplan, Fein, et al., 1991).

Practice effects
Repeated neuropsychological testing is common in clinical practice when questions arise about the progression of a disease or improvement in a condition. Repeated assessments are also necessary for longitudinal research projects, sometimes over decades. Healthy subjects especially, but also many brain injured patients, are susceptible to practice effects with repeated testing. By and large, tests that have a large speed component, require an unfamiliar or infrequently practiced mode of response, or have a single solution—particularly if it can be easily conceptualized once it is attained—are more likely to show significant practice effects (M.R. Basso, Bornstein, and Lang, 1999; Bornstein, Baker, and Douglass, 1987; McCaffrey, Ortega, et al., 1993). This phenomenon appears on the WIS-A tests as the more unfamiliar tasks on the Performance Scale show greater practice effects than do the Verbal Scale tests (Cimino, 1994; see p. 598 below regarding practice effects on the Block Design test). Practice effects have also been visualized in PET studies as shifts in activation patterns with repeated practice of a task (Démonet, 1995). The problem of practice effects is particularly important in memory testing
since repeated testing with the same tests leads to learning of the material by all but seriously memory-impaired patients (Benedict and Zgaljardic, 1998; B.A. Wilson, Watson, et al., 2000). Unavailability of appropriate alternative test forms is a common limitation on retesting for most tests, especially memory tests, used in neuropsychological assessments. Unfortunately, few tests have well-standardized alternate parallel forms that might help reduce practice effects. Numerous studies have also shown a general test-taking benefit in which enhanced performance may occur after repeated examinations, even with different test items (Benedict and Zgaljardic, 1998; B.A. Wilson, Watson, et al., 2000). The patient often is less anxious the second or third time around because of familiarity with the examiner and procedures. The use of alternate forms may attenuate practice effects, but they still may occur on novel tests or those in which the patient learns to use an effective test-taking strategy or has acquired "test-wiseness" (Beglinger et al., 2005). For many tests the greatest practice effects are likely to occur between the first and second examinations (Benedict and Zgaljardic, 1998; Ivnik, Smith, Lucas, et al., 1999). To bypass this problem, a frequently used research procedure provides for two or more baseline examinations before introducing an experimental condition (Fischer, 1999; McCaffrey and Westervelt, 1995). Moreover, longitudinal studies have shown that between 7 and 13 years must elapse before the advantage of a prior assessment is eliminated for some tests (Salthouse et al., 2004). When a brain disorder renders a test, such as Block Design, difficult to conceptualize, the patient is unlikely to improve with practice alone (Diller, Ben-Yishay, et al., 1974). In such cases improvements attributable to practice tend to be minimal, although this varies with the nature, site, and severity of the lesion and with the patient’s age.
Test characteristics also determine whether brain injured patients’ performances will improve with repetition (B.A. Wilson, Watson, et al., 2000). McCaffrey, Duff, and Westervelt’s (2000a,b) comprehensive and well-organized review of the hundreds of studies using repeated testing of both control and specified patient groups makes clear which tests are most vulnerable to practice effects and which patient groups tend to be least susceptible. Except for single solution tests and others with a significant learning component, large changes between test and retest are not common among normal persons (C.M. Bird et al., 2004; Dikmen, Machamer, et al., 1990). On retest, WIS-A test scores have proven to be quite robust (see McCaffrey, Duff, and Westervelt, 2000a). Score stability when examined in healthy subjects can vary with the nature of the test: verbal knowledge and skills tend to be most
stable over a period of years; retention scores show the greatest variability (Ivnik, Smith, Malec, et al., 1995). Age differentials with respect to tendencies to practice effects have been reported, but no clear pattern emerges. On WIS-A tests a greater tendency for practice effects among younger subjects was noted (Shatz, 1981), but there was little difference between younger (25–54) and older (75+) age groups, except for a significant effect for Digit Span (J.J. Ryan, Paolo, and Brungardt, 1992). Moreover, on one test of attention (Paced Auditory Serial Addition Test), a practice effect emerged for the 40–70 age range with little effect for ages 20–39; and another (Trail Making Test B) produced a U-shaped curve with greatest effects in the 20s and 50s and virtually none in the 30s and 40s (Stuss, Stethem, and Poirier, 1987). Comparing subjects ranging in age from 52 to 80, no age difference for practice effects was found on selected tests of attention and executive function except that younger subjects showed a greater improvement on simple reaction time scores upon retesting (Lemay et al., 2004). Practice effects occurred for adults 65–79 years old on the WMS-R Logical Memory test administered once a year for 4 years but not for subjects 80 and older (Hickman, Howieson, et al., 2000). The lack of a practice effect on memory (Howieson, Carlson, et al., 2008) and category fluency (D.B. Cooper, Lackritz, et al., 2004) performance has been identified as an early indicator of mild cognitive impairment in an older person. Absence of practice effects on tests when the effect is expected may be clinically meaningful in other populations. For patients who have undergone temporal lobectomy, scoring on retest at levels similar to preoperative scores may reflect an actual decrement in learning ability; a small decrement after surgery may indicate a fairly large loss in learning ability (Chelune, Naugle, et al., 1993).
One solution for minimizing the practice effect is to use alternate forms. Where available, we present data on alternate forms of tests discussed in Chapters 9–17. The number of tests with alternate forms is limited, perhaps because of the need to produce tests with demonstrated interform reliability. If alternate forms do not have an equal level of difficulty, then changing forms may introduce more unwanted variance than practice effects (see Benedict and Zgaljardic, 1998).

Use of technicians
Reliance on technicians to administer and score tests has a long history (DeLuca, 1989; Puente, Adams, et al., 2006). Most neuropsychologists who use technicians have them give the routine tests; the neuropsychologist conducts
the interviews and additional specialized testing as needed, writes reports, and consults with patients and referral sources. Some neuropsychologists base their reports entirely on what the technician provides in terms of scores and observations. The advantages of using a technician are obvious: saving time enables the neuropsychologist to see more patients. In research projects, in which immutable test selection judgments have been completed before any subjects are examined and qualitative data are usually irrelevant, having technicians do the assessments is typically the best use of everyone’s time and may contribute to objective data collection (NAN Policy and Planning Committee, 2000b). Moreover, as technicians are paid at one-third or less the rate of a neuropsychologist, a technician-examiner can reduce costs for patients or a research grant. When the technician is a sensitive observer and the neuropsychologist has also conducted a reasonably lengthy examination with the patient, the patient benefits in having been observed by two clinicians, thus reducing the likelihood of important information being overlooked. However, there are disadvantages as well. They will be greatest for those who write their reports on the basis of "blind analysis," as these neuropsychologists cannot identify testing errors, appreciate the extent to which patients’ emotional status and attitudes toward the examination colored their test performances, or have any idea of what might have been missed in terms of important qualitative aspects of performance or problems in major areas of cognitive functioning that a hypothesis-testing approach would have brought to light.
In referring to the parallel between blind analysis in neuropsychology and laboratory procedures in medicine, John Reddon observed that "some neuropsychologists think that a report can be written about a patient without ever seeing the patient because Neuropsychology is only concerned with the brain or CNS …. Urine analysts or MRI or CT analysts do not see their patients before interpreting their test results so why should neuropsychologists?" He then answered this question by pointing out that neuropsychological assessment is not simply a medical procedure but requires "a holistic approach that considers the patient as a person … and not just a brain that can be treated in isolation" (personal communication, 1989 [mdl]). Moreover, insensitive technicians who generate test scores without keeping a record of how the patient performs, or whose observations tend to be limited by inadequate training or lack of experience, can only provide a restricted data base for those functions they examine. Prigatano (2000) pointed out that when most of the patient’s contact is with a technician who simply tests in a lengthy
examination, and the neuropsychologist—who has seen the patient only briefly, if at all—seems more interested in the test scores than in the patient, the patient is more likely to come away unhappy about the examination experience. The minimal education and training requirements for technicians are spelled out in the report of the Division 40 (American Psychological Association) Task Force on Education, Accreditation, and Credentialing, 1989 (see also Bornstein, 1991) and have been further elaborated in an American Academy of Clinical Neuropsychology policy statement on "The use, education, training and supervision of neuropsychological test technicians (psychometrists) in clinical practice" (Puente, Adams, et al., 2006). These psychometric technicians, psychometrists, and other psychologist-assistants, as well as trainees enrolled in formal educational and training programs, typically hold nondoctoral degrees in psychology or related fields and should have a minimum of a bachelor’s degree. Their role has been clearly defined as limited to administering and scoring tests under the supervision of a licensed neuropsychologist whose responsibility it is to select and interpret the tests, do the clinical interviews, and communicate the examination findings appropriately (American Academy of Clinical Neuropsychology, 1999; see also McSweeny and Naugle, 2002; NAN Policy and Planning Committee, 2000b).
Examining Special Populations

Patients with sensory or motor deficits
Visual problems. Many persons referred for neuropsychological assessment will have reduced visual acuity or other visual problems that could interfere with their test performance; e.g., multiple sclerosis patients (Feaster and Bruce, 2011). M. Cohen and colleagues (1989) documented defective convergence—which is necessary for efficient near vision—in 42% of traumatically brain injured patients requiring rehabilitation services. These authors noted that other visual disturbances were also common after head injury, mostly clearing up during the first postinjury year. Defective visual acuity is common in elderly persons and may be due to any number of problems (Matjucha and Katz, 1994; Rosenbloom, 2006). Other age-related changes include decreased spatial vision in conditions of low light, reduced contrast, or glare. Reduced stereopsis and decreased color discrimination also are common (Haegerstrom-Portnoy et al., 1999). The major causes of significant visual impairment and blindness in the elderly are age-related cataracts and age-related macular degeneration (Renzi and Johnson, 2007).

A visual problem that can occur after a head injury, stroke, or other abrupt insult to the brain, or that may be symptomatic of degenerative disease of the central nervous system, is eye muscle imbalance resulting in double vision (diplopia) with impaired binocular function (Cockerham et al., 2009; Kapoor and Ciuffreda, 2002). Patients may not see double at all angles or in all areas of the visual field and may experience only slight discomfort or confusion with the head tilted a certain way. For others the diplopia may altogether compromise their ability to read, write, draw, or solve intricate visual puzzles. Young, well-motivated patients with diplopia frequently learn to suppress one set of images and, within one to three years, become relatively untroubled by the problem. Other patients report that they have been handicapped for years by what may appear on examination to be a minor disability. Should the patient complain of visual problems, the examiner may want a neurological or ophthalmological opinion before determining whether the patient can be examined with tests requiring visual acuity. Persons over the age of 45 need to be checked for visual competency as many of them will need reading glasses for fine, close work. Those who use reading glasses should be reminded to bring them to the examination. Not infrequently, hospitalized patients will not have brought their glasses with them. Examiners in hospital settings in particular should keep reading glasses with their testing equipment.

Hearing problems. Although most people readily acknowledge their visual defects, many who are hard of hearing are secretive about auditory handicaps. It is not unusual to find hard-of-hearing persons who prefer to guess what the examiner is saying rather than admit their problem and ask the examiner to speak up.
It is also not unusual for persons in obvious need of hearing aids to reject their use, even when they own aids that have been fitted for them. Sensitive observation can often uncover hearing impairment, as these patients may cock their head to direct their best ear to the examiner, make a consistent pattern of errors in response to the examiner’s questions or comments, or ask the examiner to repeat what was said. When hard-of-hearing patients come for the examination without hearing aids, the examiner must speak loudly, clearly, and slowly, and check for receptive accuracy by having these patients repeat what they think they have heard. If a patient answers a question oddly, a simple inquiry may reveal that the question was misheard. Patients coming for neuropsychological assessment are more likely to have hearing loss than the population at large. Along with cognitive and other kinds of deficits, hearing impairments can result from a brain injury. Moreover, the
likelihood of defective hearing increases with advancing age such that many patients with neurological disorders associated with aging will also have compromised hearing (G.A. Gates and Mills, 2005; E. Wallace et al., 1994). A commonly used but crude test of auditory acuity, rattling paper or rubbing fingers by the patient’s ear, will not identify this problem, which can seriously interfere with accurate cognitive testing (Schear, Skenes, and Larson, 1988). Diminished sound detection is not the only problem that affects auditory acuity. Some patients who have little difficulty hearing most sounds, even soft ones, find it hard to discriminate sounds such as certain consonants. The result is that people with this condition confuse similar-sounding words, making communication difficult.

Lateralized sensory deficits. Many brain impaired patients with lateralized lesions have reduced vision or hearing on the side opposite the lesion and may have little awareness of this problem (see pp. 427–428). This is particularly true for patients who have homonymous field cuts (loss of vision in the same part of the field of each eye) or in whom nerve damage has reduced auditory acuity or auditory discrimination functions in one ear only. Their normal conversational behavior may give no hint of the deficit, yet presentation of test material to the affected side makes their task more difficult (B. Caplan, 1985). The neuropsychologist is often not able to find out quickly and reliably whether the patient’s sight or hearing has suffered impairment. Therefore, when the patient is known to have a lateralized lesion, it is a good testing practice for the examiner to sit either across from the patient or to the side least likely to be affected. The examiner must take care that the patient can see all of the visually presented material and should speak to the ear on the side of the lesion.
Patients with right-sided lesions, in particular, may have reduced awareness of stimuli in the left half of space so that all material must be presented to their right side. Use of vertical arrays for presenting visual stimuli to these patients should be considered (B. Caplan, 1988; B. Caplan and Shechter, 1995).

Motor problems. Motor deficits do not present as great an obstacle to standardized and comprehensive testing as sensory deficits since all but constructional abilities can be examined when a patient is unable to use the preferred hand. Many brain injured patients with lateralized lesions will have use of only one hand, and that may not be the preferred hand. One-handed performances on construction or drawing tests tend to be a little slowed, particularly when performed by the nonpreferred hand.

Meeting the challenge of sensory or motor deficits. Neuropsychological assessment of patients with sensory or motor deficits presents the problem of
testing a variety of functions in as many modalities as possible with a more or less restricted test repertory. Since almost all psychological tests have been constructed with physically able persons in mind, examiners often have to find reasonable alternatives to the standard tests the physically impaired patient cannot use, or they have to juggle test norms, improvise or, as a last resort, do without (B. Caplan and Shechter, 1995, 2005). Although the examination of patients with sensory or motor disabilities is necessarily limited insofar as the affected input or output modality is concerned, the disability should not preclude at least some test evaluation of any cognitive function or executive capacity not immediately dependent on the affected modality. Of course, blind patients cannot be tested for their ability to organize visual percepts, nor can patients with profound facial paralysis be tested for verbal fluency; but patients with these deficits can be tested for memory and learning, arithmetic, vocabulary, abstract reasoning, comprehension of spatial relationships, a multitude of verbal skills, and other abilities. The haptic (touch) modality lends itself most readily as a substitute for visually presented tests of nonverbal functions. For example, to assess concept formation of blind patients, size, shape, and texture offer testable dimensions. The patient with a movement disorder presents similar challenges. Visuoperceptual functions in these patients can be relatively easily tested since most tests of these functions lend themselves to spoken answers or pointing. However, drawing tasks requiring relatively fine motor coordination cannot be satisfactorily evaluated when the patient’s preferred hand is shaky or spastic. 
Even when only the nonpreferred hand is involved, some inefficiency and slowing on other construction tasks will result from the patient’s inability to anchor a piece of paper with the nonpreferred hand or to turn blocks or manipulate parts of a puzzle with two-handed efficiency. After discussing some of the major issues in assessing patients with movement disorders (e.g., Huntington’s disease, Parkinson’s disease, cerebellar dysfunction), Stout and Paulsen (2003) identify the motor demands and suggest possible adaptations for a number of tests in most common use. Some tests have been devised specifically for physically handicapped people. Most of them are listed in test catalogues or can be located through local rehabilitation services. One problem that these substitute tests present is normative comparability; but since this is a problem in any substitute or alternative version of a standard test, it should not dissuade the examiner if the procedure appears to test the relevant functions. Another problem is that alternative forms usually test many fewer and sometimes different functions
than the original test. For example, multiple-choice forms of design copying tests obviously do not measure constructional abilities. What may be less obvious is loss of data about the patient’s ability to organize, plan, and order responses. Unless the examiner is fully aware of all that is missing in an alternative battery, some important functions may be overlooked.

The severely handicapped patient
When mental or physical handicaps greatly limit the range of response, it may first be necessary to determine whether the patient has enough verbal comprehension for formal testing procedures. A set of questions and commands calling for one-word answers and simple gestures will quickly give the needed information. Useful questions include name, age, orientation to place, naming of common objects and colors, simple counting, following one- and two-step commands, and reciting well learned sequences such as the alphabet. Patients who do not speak well enough to be understood can be examined for verbal comprehension and ability to follow directions.

Show me your (hand, thumb, a button, your nose).
Give me your (left, right [the nonparalyzed]) hand.
Put your (nonparalyzed) hand on your (left, right [other]) elbow.
Place several small objects (button, coin, etc.) in front of the patient with a request.

Show me the button (or key, coin, etc.).
Show me what you use to write. How do you use it?
Do what I do (salute; touch nose, ear opposite hand, chin in succession).
Place several coins in front of the patient.

Show me the quarter (nickel, dime, etc.).
Show me the smallest coin.
Give me (three, two, five) coins.
Patients who can handle a pencil may be asked to write their name, age, where they live, and to answer simple questions calling for "yes," "no," short word, or simple number answers; and to write the alphabet and the first 20 numbers. Patients who cannot write may be asked to draw a circle, copy a circle drawn by the examiner, copy a vertical line drawn by the examiner, draw a square, and imitate the examiner’s gestures and patterns of tapping with a pencil. Reading comprehension can be tested by printing the question as well as the answers or by giving the patient a card with printed instructions such as, "If you are a man (or ‘if it is morning’), hand this card back to me; but if you are a woman (or ‘if it is afternoon’), set it down." The Boston Diagnostic Aphasia Examination (Goodglass, Kaplan, and Barresi, 2000) and other tests for aphasia contain similar low-level questions that can be appropriate for
nonaphasic but motorically and/or mentally handicapped patients. For patients who are unable to answer questions calling for "yes" or "no" verbal answers, a thumbs up or thumbs down gesture may substitute. With severe motor paralysis, some patients can communicate with one or two eye blinks (Schnakers et al., 2008). Patients who respond to most of these questions correctly are able to comprehend and cooperate well enough for formal testing. Patients unable to answer more than two or three questions probably cannot be tested reliably. Their behavior is best evaluated by rating scales (see Chapter 18, passim).

A 22-year-old woman rendered quadriplegic and anarthric by a TBI from a traffic accident was dependent on a feeding tube to live and considered to be in a vegetative state (McMillan, 1996a). Euthanasia was considered, but first the court required a neurobehavioral examination. It was found that she could press a button with her clenched right hand. She was instructed in a pattern of holding or withholding the button press for "yes" and "no," respectively. With this response capacity in place, she was given a set of questions of the order, "Is your sister’s name Lydia?" "Is your sister’s name Lucy?", with correct "yes" responses randomized among the "no" responses. By this technique, cognitive competency was established, which allowed further exploration into her feelings, insight into her condition, and whether she wanted to live. She did, and continued to want to live at least for the next several years, despite her report of some pain and depression (McMillan and Herbert, 2000).

The severely brain damaged patient
With few exceptions, tests developed for adults have neither items nor norms for grading the performance of severely mentally impaired adults. On adult tests, the bottom 1% or 2% of the noninstitutionalized adult population can usually pass the simplest items. These items leave a relatively wide range of behaviors unexamined and are too few to allow for meaningful performance gradations. The WAIS-IV has included more easy items for this purpose (PsychCorp, 2008). Yet it is as important to know about the impairment pattern, the rate and extent of improvement or deterioration, and the relative strengths and weaknesses of the severely brain damaged patient as it is for the less afflicted patient. For patients with severe mental deficits, one solution is to use children’s tests (see Baron, 2003). Tests developed for children examine many functions in every important modality as well as providing children’s norms for some tests originally developed for adults (for example, the Developmental Test of Visual-Motor Integration [Beery et al., 2010]). Most of the Woodcock-Johnson III Tests of Cognitive Abilities (see pp. 731–733) extend to those younger than two years, all go to prekindergarten levels, and almost all have norms up to adult levels. When given to mentally deficient adults, children’s tests require little or no change in wording or procedure. At the lowest performance levels,
the examiner may have to evaluate observations of the patient by means of developmental scales. Some simple tests and tests of discrete functions were devised for use with severely impaired adults. A.-L. Christensen’s (1979) systematization of Luria’s neuropsychological investigation techniques gives detailed instructions for examining many of the perceptual, motor, and narrowly defined cognitive functions basic to complex cognitive and adaptive behavior. These techniques are particularly well suited for patients who are too impaired to respond meaningfully to graded tests of cognitive prowess but whose residual capacities need assessment for rehabilitation or management. Their clinical value lies in their flexibility, their focus on qualitative aspects of the data they elicit, and their facilitation of useful behavioral descriptions of the individual patient. Observations made by means of Luria’s techniques or by means of the developmental scales and simple tests that enable the examiner to discern and discriminate functions at low performance levels cannot be reduced to numbers and arithmetic operations without losing the very sensitivity that examination of these functions and good neuropsychological practice requires. Tests for elderly patients suspected of having deteriorating brain disorders are generally applicable to very impaired adults of all ages (see R.L. Tate, 2010; pp. 142–143).

Elderly persons
Psychological studies of elderly people have shown that, with some psychometrically important exceptions, healthy and active people in their seventies and eighties do not differ greatly in skills or abilities from the generations following them (Hickman, Howieson, et al., 2000; Tranel, Benton, and Olson, 1997). However, the diminished sensory acuity, motor strength and speed, and particularly, flexibility and adaptability that accompany advancing age are apt to affect the elderly person’s test performance adversely (Bondi, Salmon, and Kaszniak, 1996). When examining elderly people, the clinician needs to determine whether their auditory and visual acuity is adequate for the tests they will be taking and, if not, to make every effort to correct the deficit or assist them in compensating for it. Some conditions that can adversely affect a person’s neuropsychological status are more common among the elderly. These include fatigue, central nervous system side effects due to medication, and lowered energy level or feelings of malaise associated with a chronic illness. A review of the patient’s recent health history should help the examiner to identify these problems so that testing will be appropriate for the patient’s physical capacities
and test interpretation will take such problems into account. Since age-related slowing affects the performance of timed tasks, the examiner who is interested in how elderly patients perform a given timed task can administer it without timing (e.g., see Storandt, 1977). Although this is not a standardized procedure, it will provide qualitative information about whether they can do the task at all, what kinds of errors they make, how well they correct them, etc. This procedure will probably answer most of the examination questions that prompted use of the timed test. Since older persons are also apt to be cautious (Schaie, 1974), this too may contribute to performance slowing. When the examiner suspects that patients are being unduly cautious, an explanation of the need to work quickly may help them perform more efficiently. Often the most important factor in examining elderly persons is their cooperation (B. Caplan and Shechter, 2008; Jamora et al., 2008). With no school requirements to be met, no jobs to prepare for, and usually little previous experience with psychological tests, retired persons may very reasonably not want to go through fatiguing mental gymnastics that they may fear will make them look stupid. Particularly if they are not feeling well or are concerned about diminishing mental acuity, elderly persons may view a test as a nuisance or an unwarranted intrusion into their privacy. Thus, explaining to elderly persons the need for the examination and introducing them to the testing situation will often require more time than with younger people. Some of these problems can be avoided by examining elderly people with tests that have face validity, such as learning a telephone number as a supraspan memory test (Crook, Ferris, et al., 1980).

When examinee and examiner speak different languages
Migration—of refugees, of persons seeking work or rejoining their displaced families—has brought millions of people into cultures and language environments foreign to them. When understanding or treatment of a brain disorder would benefit from neuropsychological assessment, the examiner must address a new set of issues if the patient is to be examined appropriately. Ideally, examiners are fluent in the patient’s primary language, but in reality examiners fluent in many uncommon languages are rare or nonexistent. Translators and interpreters. In many big cities with relatively large populations of foreign language speakers, medical centers provide interpreters. Metropolitan court systems also will have a pool of interpreters available. However, even when the interpreter can provide a technically accurate rendition of test questions and patient responses, slippages in the
interpreter’s understanding of what is actually required or some of our terms of art can result in an inadequate or biased examination, especially when the examiner’s language is the interpreter’s second—or even third—language. When working with a neuropsychologically naive interpreter who is also unfamiliar with tests and test culture, the best practice has the examiner reviewing with the interpreter the assessment procedures, including intentional and idiomatic aspects of the wording of instructions and test questions, so that the interpreter has a reasonable idea of the normal response expectations for any item or test (Rivera Mindt et al., 2008). This can rarely be accomplished because of time and cost limitations. Thus, the examiner must be on the lookout for unexpected aberrations in the patient’s responses as these could indicate translation slippage in one or the other direction. Slippages may be easiest to recognize on such tests as Wechsler’s Digit Span or Block Design tests, or design copying tests in which little cultural bias enters into the task and most people in most cultures are able to respond appropriately given the correct instructions. Clinicians practicing independently or in smaller communities may not have access to trained interpreters and thus face a dilemma: to examine, however crudely, or to refer to someone who can provide for translation or who speaks the patient’s language. Nonverbal tests are available for examining these patients but they require the subject to have an understanding of Western culture and at least a modicum of formal education, which makes these tests unsuitable for use with many migrants throughout the world. Artiola i Fortuny and Mullaney (1998) pointed out the ethical hazards when an examiner has only a superficial knowledge of the patient’s language. They advise examiners not well-grounded in a language to get an interpreter or make an appropriate referral.
LaCalle (1987) warned against casual interpreters, usually family members or friends, who may be ill-equipped to translate accurately or protective of the patient. Examiners also need to be aware that bilingualism can alter normal performance expectations (Ardila, 2000a). English-dominant bilinguals are often disadvantaged relative to monolinguals on a variety of language measures—such as when asked to produce low-frequency words—even when they are tested exclusively in their more dominant language (Rivera Mindt et al., 2008). A group of community-living Spanish–English speakers performed speed and calculation tasks better in their first language (Ardila, Rosselli, Ostrosky-Solis, et al., 2000), but bilinguals’ production on a semantic fluency task fell below that of monolinguals and their own phonetic fluency (Rosselli, Ardila, Ostrosky-Solis, et al., 2000). Nevertheless, verbal memory
performance appears to be less affected by bilingualism. Hispanic-American bilinguals’ word-list learning performance was the same, regardless of language of administration (Gasquoine et al., 2007). Adults fully fluent in their second language performed memory and learning tasks at the same level as monolingual subjects; but those who were weaker in their second language had lower rates of learning and retention (J.G. Harris, Cullum, and Puente, 1995). Culture. Different populations have unique experiences, schooling, traditions, and beliefs that can affect patients’ reactions to an examination and their performance on neuropsychological tests (Brickman et al., 2006). Most obviously, neuropsychological tests developed in one culture and adapted for another may not be equivalent for level of familiarity or difficulty. For example, Dodge and her associates (2008) showed that Japanese and American elders differed in their performances on a mental status examination developed in the U.S., although the total scores across groups were similar. The poorer performance of the Japanese on reading and writing items was explained on the basis of the more complex Japanese word order and written characters. The environment in which a person lives determines which skills are important for success in that environment (Ostrosky-Solis, Ramirez, and Ardila, 2004). Cultural differences may influence more indirect factors such as reactions to the examiner, the examination environment, or to the instruction to “do your best” or “go as fast as you can” (Ardila, 2005). An assessment problem of growing importance is the lack of well-standardized, culturally relevant tests for minority groups (Manly, 2008; Pedraza and Mungas, 2008). One approach to the problem is to use tests that show the least cross-cultural differences (e.g., Levav et al., 1998; Maj et al., 1993).
Some tests will be more susceptible to cultural bias than others: Wechsler’s Comprehension and Picture Arrangement tests, for example, both require fairly subtle social understandings to achieve a high score; a request to draw a bicycle is asking for failure from a refugee raised in a hill village—but may be an effective way of examining an urban Chinese person. Other workers have focused on the need to develop tests and normative data appropriate for specific cultural groups (e.g., D.M. Jacobs et al., 1997; Mungas and Reed, 2000; G.J. Rey, Feldman, and Rivas-Vazquez, 1999). For a Spanish language battery developed for Hispanics of Latin American background or birth in the United States, education turned out to be an overriding variable despite efforts to make the tests culture-compatible (Pontón, Satz, et al., 1996). All tests were affected, both word-based and predominantly visual ones, including Block Design, the Complex Figure Test, and a test of fine motor dexterity. Lowest correlations with education occurred where least expected—on the WHO-UCLA Auditory Verbal Learning Test (Maj et al., 1993). As neuropsychology develops across the globe, appropriate tests and procedures are being selected for each society. In this book we are unable to provide a review of tests used or adapted in all cultures, but culture-specific norms are presented for some tests.
Common Assessment Problems with Brain Disorders

The mental inefficiency that often prompts a referral for neuropsychological assessment presents both conditions that need to be investigated in their own right and obstacles to a fair assessment of cognitive abilities. Thus the examiner must not only document the presence and nature of mental inefficiency problems but must attempt to get as full a picture as possible of the cognitive functions that may be compromised by mental inefficiency.

Attentional deficits
Attentional deficits can obscure the patient’s abilities in almost every area of cognitive functioning. Their effects tend to show up in those activities that provide little or no visual guidance and thus require the patient to perform most of the task’s operations mentally. While some patients with attentional deficits will experience difficulty in all aspects of attention, the problems of many other patients will be confined to only one or two of them. Reduced auditory span. Many patients have a reduced auditory attention span such that they only hear part of what was said, particularly if the message is relatively long, complex, or contains unfamiliar or unexpected wording. The original WAIS (Wechsler, 1955) provided a classic example of this problem in a 23-syllable request to subtract a calculated sum from “a half-dollar.” These patients would subtract the correct sum from a dollar, thus giving an erroneous response to the question and earning no credit. When asked to repeat what they heard, they typically reported, “a dollar,” the “half” getting lost in what was for them too much verbiage to process at once. Their correct answers to shorter but more difficult arithmetic items and their good performances when given paper and pencil further demonstrated the attentional nature of their error. Slow processing speed. One of the most robust findings in patients with a variety of brain disorders is slow information processing speed (e.g., Lengenfelder et al., 2006; Rassovsky et al., 2006). Speed is reduced in normal aging and also is a sensitive indicator of developing cognitive impairment in the elderly (Dixon et al., 2007). Many tests scored for speed will demonstrate
slow processing problems. When not specifically testing for speed, many patients benefit from a carefully paced presentation of questions and instructions. Mental tracking problems. Other patients may have mental tracking or working memory problems; i.e., difficulty juggling information mentally or keeping track of complex information. They get confused or completely lost performing complex mental tracking tasks such as serial subtraction, although they can readily demonstrate their arithmetic competence on paper. These problems often show up in many repetitions on list-learning or list-generating tasks when patients have difficulty keeping track of their ongoing mental activities, e.g., what they have already said, while still actively conducting a mental search. Distractibility. Another common concomitant of brain impairment is distractibility: some patients have difficulty shutting out or ignoring extraneous stimulation, be it noise outside the testing room, test material scattered on the examination table, or a brightly colored tie or flashy earrings on the examiner. Patients with frontal lesions often have a particular problem with distractibility (Aron et al., 2003). This difficulty may exacerbate attentional problems and increase the likelihood of fatigue and frustration. Distractibility can interfere with learning and cognitive performances generally (Aks and Coren, 1990). The examiner may not appreciate the patient’s difficulty, for the normal person screens out extraneous stimuli so automatically that most people are unaware that this problem exists for others. To reduce the likelihood of interference from unnecessary distractions, the examination should be conducted in what is sometimes referred to as a “sterile environment.” The examining room should be relatively soundproof and decorated in quiet colors, with no bright or distracting objects in sight. The examiner’s clothing too can be an unwitting source of distraction.
The examining table should be kept bare except for materials needed for the test at hand. Clocks and ticking sounds can be bothersome. Clocks should be quiet and out of sight, even when test instructions include references to timing. A wall or desk clock with an easily readable second indicator, placed out of the patient’s line of sight, is an excellent substitute for a stopwatch and frees the examiner’s hands for note taking and manipulation of test materials. The examiner can count times under 30 seconds with a fair degree of accuracy by making a dot on the answer sheet every 5 seconds. Street noises, a telephone’s ring, or a door slamming down the hall can easily break an ongoing train of thought for many brain damaged patients. If this occurs in the middle of a timed test, the examiner must decide whether to
repeat the item, count the full time taken—including the interruption and recovery—count the time minus the interruption and recovery time, do the item over using an alternate form if possible, skip that item and prorate the score, or repeat the test another day. Should there not be another testing day, then an alternate form is the next best choice, and an estimate of time taken without the interruption is a third choice. A prorated score is also acceptable: for example, a patient who earned 20 points on the eight completed items of a ten-item test would be credited with a prorated score of 20 × 10/8 = 25. A record of the effects of interruptions due to distractibility on timed tasks gives valuable information about the patient’s efficiency. The sensitive examiner will document attention lapses and how they affect the patient’s performance generally and within specific functional domains. Whenever possible, these lapses need to be explored, usually through testing the limits, to clarify the level of the patient’s actual ability to perform a particular kind of task and how the attentional problem(s) interferes.

Memory disorders
Many problems in following instructions or correctly comprehending lengthy or complex test items read aloud by the examiner seem to be due to faulty memory but actually reflect attentional deficits (Howieson and Lezak, 2002b). However, memory disorders too can interfere with assessment procedures. Defective short-term memory. A few patients have difficulty retaining information, such as instructions on what to do, for more than a minute or two. They may fail a task for performing the wrong operation rather than because of inability to do what was required. This problem can show up on tasks requiring a series of responses. For example, on the Picture Completion test of the WIS-A battery, rather than continuing to indicate what is missing in the pictures, some patients begin reporting what they think is wrong; yet if reminded of the instructions, many will admit they forgot what they were supposed to do and then proceed to respond correctly. If not reminded, they would have failed on items they could do perfectly well, and the low score—if interpreted as due to a visuoperceptual or reasoning problem—would have been seriously misleading. Similar instances of forgetting can show up on certain tests of the ability to generate hypotheses (e.g., Category Test, Wisconsin Card Sorting Test, and Object Identification Task) in which patients who have figured out the response pattern that emerges in the course of working through a series of items subsequently forget it as they continue through the series. In these latter tasks the examiner must note when failure occurs after the correct hypothesis has been achieved as these failures may indicate defective working memory. Defective retrieval. A not uncommon source of poor scores on memory
tests is defective retrieval. Many patients with retrieval problems learn and retain information well but are unable to recall at will what they have learned. When learning is not examined by means of a recognition format or by cueing techniques, a naive examiner can easily misinterpret the patient’s poor showing on free recall as evidence of a learning or retention problem.

Fatigue
Patients with brain disorders tend to fatigue easily, particularly when an acute condition occurred relatively recently (Lezak, 1978b; van Zomeren and Brouwer, 1990). Easy fatigability can also be a chronic problem in some conditions, such as multiple sclerosis (Arnett and Rabinowitz, 2010; M. Koch et al., 2008), Parkinson’s disease (Havlikova et al., 2008), post-polio syndrome (Bruno et al., 1993) and, of course, chronic fatigue syndrome (J. Glass, 2010; S.D. Ross et al., 2004). Depressed patients also often experience fatigue (Fava, 2003). It has been proposed that mental fatigue associated with these conditions results from dysfunction of the basal ganglia’s influence on the striato-thalamic-cortical loop (Chaudhuri and Behan, 2000; J. DeLuca, Genova, et al., 2008). The cognitive effects of fatigue have been studied in association with a variety of other medical conditions including cancer (Cull et al., 1996; C.A. Meyers, 2000a,b), chemotherapy (Caraceni et al., 1998; P.B. Jacobsen et al., 1999; Valentine et al., 1998), respiratory disease (P.D. White et al., 1998), and traumatic brain injury (Bushnik et al., 2008). When associated cognitive impairments have been found, they involve sustained attention, concentration, reaction time, and processing speed (Fleck et al., 2002; Groopman, 1998; Tiersky et al., 1997). Studies of sleep deprivation have found deficits in hand–eye coordination (D. Dawson and Reid, 1997), psychomotor vigilance (Dinges et al., 1997), executive function (Fluck et al., 1998; Killgore et al., 2009), psychomotor speed and accuracy (Waters and Bucks, 2011), and visuospatial reasoning and recall (Verstraeten et al., 1996). However, some studies report no association between complaints of fatigue and neuropsychological impairment (S.K. Johnson et al., 1997). Complaints of poor concentration and memory in some patients may be related to mood disorders (Cull et al., 1996) or fatigue-related distress (C.E.
Schwartz et al., 1996; Stulemeijer et al., 2007). Admissions of fatigue are usually obtained from self-report questionnaires (Arnett and Rabinowitz, 2010; R.L. Tate, 2010). DeLuca, Genova, and their colleagues (2008) used fMRI measures of cerebral activation as a measure of mental fatigue. As multiple sclerosis subjects continued to perform a lengthy coding
task, cerebral activity increased over time in the basal ganglia, frontal areas, parietal regions, thalamus, and occipital lobes, which was interpreted as indication of increased mental effort associated with fatigue. Interestingly, performance accuracy did not differ between the patients and a control group. Many brain impaired patients will tell the examiner when they are tired, but others may not be aware themselves or may be unwilling to admit fatigue. Therefore, the examiner must be alert to such signs as slurring of speech, an increased droop on the paralyzed side of the patient’s face, motor slowing increasingly apparent as the examination continues, or restlessness. Patients who are abnormally susceptible to fatigue are most apt to be rested and energized in the early morning and will perform at their best at this time. Even the seemingly restful interlude of lunch may require considerable effort from a debilitated patient and increase fatigue. Physical or occupational therapy is exhausting for many postacute patients. Therefore, in arranging test time, the patient’s daily activity schedule must be considered if the effects of fatigue are to be kept minimal. For patients who must be examined late in the day, in addition to requesting that they rest beforehand, the examiner should recommend that they have a snack.

Medication
In the outpatient setting, many patients take medications, whether for a behavioral or mood disturbance, pain, sleep disturbance, or other neurological or medical disorders. Others may be treating themselves with nonprescription over-the-counter (OTC) remedies. While drugs are often beneficial or can be lifesaving, the effects of medications on different aspects of behavior can significantly alter assessment findings and may even constitute the reason for the emotional or cognitive changes that have brought the patient to neuropsychological attention. Not only may medications in themselves complicate a patient’s neuropsychological status, but complications also can result from incorrect dosages or combinations of medications as well as interactions with OTC drugs, herbal remedies, and certain foods (Bjorkman et al., 2002; J.A. Owen, 2010). In the treatment of epilepsy, where physicians have long been sensitive to cognitive side effects of antiepileptic drugs (AEDs) (Salehinia and Rao, 2010), the goal is always to use multiple medications only as a last resort and to use the lowest efficacious dosage (Meador, 2002). This is the ideal goal for every other kind of medical disorder but is not always realized. A 56-year-old sawmill worker with a ninth grade education was referred to an urban medical
center with complaints of visual disturbances, dizziness, and mental confusion. A review of his recent medical history quickly identified the problem as he had been under the care of several physicians. The first treated the man’s recently established seizure disorder with phenytoin (Dilantin), which made him feel sluggish. He went to a second physician with complaints of sluggishness and his seizure history but neglected to report that he was already on an anticonvulsant, so phenytoin was again prescribed and the patient now took both prescriptions. The story repeated itself once again so that by the time his problem was identified he had been taking three times the normal dose for some weeks. Neurological and neuropsychological examinations found pronounced nystagmus and impaired visual scanning, cerebellar dysfunction, an attentional disorder (digits forward/backward = 4/4; WAIS Arithmetic = 8; WAIS Comprehension = 13, which probably is a good indicator of premorbid functioning), and some visuospatial compromise (WAIS Block Design = 8 [age-corrected], see Fig. 5.2, p. 148). Off all medications, he made gains in visual, cerebellar, and cognitive functioning but never enough to return to his potentially dangerous job.
The effect of medications on cognitive functioning is a broad and complex issue involving many different classes of drugs and a host of medical and psychiatric disorders. Although many medications can be associated with cognitive impairment, the drugs with the highest incidence of cognitive side effects are anticholinergics, benzodiazepines, narcotics, neuroleptics, antiepileptic drugs, and sedative-hypnotics (Ferrando et al., 2010; Meador, 1998a,b). Examiners should also be aware that it often takes patients several weeks to adjust to a new drug, and they may experience changes in mental efficiency in the interim. Even nonprescription (in the United States) antihistamines may produce significant cognitive effects (G.G. Kay and Quig, 2001). Nevertheless, medications differ within each drug class, and newer agents are likely to have fewer cognitive side effects. The reader needing information on specific drug effects or on medications used for particular medical or psychiatric conditions should consult the Clinical Manual of Psychopharmacology in the Medically Ill (Ferrando et al., 2010), Physicians’ Desk Reference: PDR (PDR Network, 2010), Goodman and Gilman’s The Pharmacological Basis of Therapeutics (Brunton and Knollman, 2011), “Neuropharmacology” (C.M. Bradshaw, 2010), or similar medication reviews. Commonly prescribed medications for psychiatric disorders are reviewed in The American Psychiatric Publishing Textbook of Psychopharmacology (Schatzberg and Nemeroff, 2009). This latter book goes into some detail describing how these medications work at the intracellular and neurotransmitter levels. Chemotherapy has been linked to cognitive complaints in cancer patients who report “chemo brain” or “chemo fog” (C.A. Meyers, 2008). Patients often complain of subtle difficulties with concentration and memory, even after treatment is over. In a typical study cognitive dysfunction was observed in 17% of women approximately four weeks after chemotherapy for breast cancer
(Vearncombe et al., 2009). In this study, declines in hemoglobin were found to predict impairment on tests of verbal learning and memory and abstract reasoning; still, the reason(s) for cognitive impairment associated with chemotherapy is not known. Other factors that may contribute to cognitive decline include the type of chemotherapy administered, intensity of treatment, severity of diagnosis, other health factors, stress, depression, and fatigue (Anderson-Hanley et al., 2003).
FIGURE 5.2 Copies of the Bender-Gestalt designs drawn on one page by a 56-year-old sawmill worker with phenytoin toxicity.
Geriatric patients are particularly susceptible to drug reactions that can affect—usually negatively—some aspect(s) of cognitive functioning, alertness,
or general activity level (Godwin-Austen and Bendall, 1990). Factors associated with the increased risk of cognitive impairment associated with medication use in elderly persons include imbalances in neurotransmitter systems such as acetylcholine, age-related changes in pharmacodynamics and pharmacokinetics, and high levels of concomitant medication use (S.L. Gray et al., 1999). Elderly people are often on multiple medications (on average seven different drugs according to one report [Bjorkman et al., 2002]), which by itself is a significant risk factor. Complicating matters, patients are often poor historians about what drugs they are taking, their doses, or their dosing intervals (M.K. Chung and Bartfield, 2002). Delirium occurs in up to 50% of hospitalized elderly, many with preexisting dementia (Rigney, 2006) and may occur in younger patients with metabolic disorders, serious illnesses, and following surgery. It is a common, distressing, and often drug-induced complication in patients with advanced cancer (S.H. Bush and Bruera, 2009). The strongest delirium risk appears to be associated with use of opioids and benzodiazepines (Clegg and Young, 2011). The anticholinergic action of some drugs used in Parkinson’s disease or for depression can interfere with memory and, in otherwise mentally intact elderly persons, create the impression of cognitive dilapidation or greatly exacerbate existing dementia (Pondal et al., 1996; Salehinia and Rao, 2010). Brain injury may also increase susceptibility to adverse cognitive reactions to various medications (Cope, 1988; O’Shanick and Zasler, 1990). Brain injury certainly makes drug effects less predictable than for neurologically intact persons (Eames et al., 1990). In many instances, the treating physician must weigh the desired goal of medication—such as the amelioration of anxiety or depression, seizure control, or behavioral calming—against one or another kind of cognitive compromise. 
Monitoring the neuropsychological status of patients who might benefit from medications known to affect cognition can provide for an informed weighing of these alternatives.

Pain
Certain pain syndromes are common in the general population, particularly headache and back pain. Many patients with traumatic brain injury experience pain whether from headaches or bodily injuries, and pain may result from other brain disorders such as thalamic stroke, multiple sclerosis, or disease involving cranial or peripheral nerves. Patients with pain often have reduced attentional capacity, processing speed, and psychomotor speed (Grigsby, Rosenberg, and Busenbark, 1995; McCabe et al., 2005). When comparing TBI patients with and without pain complaints
and TBI noncomplainers with neurologically intact chronic pain patients, those complaining of pain tended to perform more poorly (see R.P. Hart, Martelli, and Zasler, 2000, for a review of studies). Deficits in learning and problem solving also occur in some neurologically intact pain patients (Blackwood, 1996; Jorge et al., 1999). Heyer and his colleagues (2000) found both processing speed and problem solving reduced in cognitively intact elderly patients the day after spinal surgery; poorer performances correlated with higher scores on a pain scale. Decreased mental flexibility also has been associated with pain (Karp et al., 2006; Scherder et al., 2008). Understanding of performance deficits by patients with pain may be confounded by the effects of pain medication (Banning and Sjogren, 1990). The presence of pain does not necessarily affect cognitive functioning negatively (B.D. Bell et al., 1999; J.E. Meyers and Diep, 2000). Performances by chronic pain patients on tests of attentional functions, memory, reasoning, and construction were directly related to their general activity level, regardless of extent of emotional distress (S. Thomas et al., 2000). While pain reduced cognitive functioning in some patients (Scherder et al., 2008; P. Sjøgren, Olsen, et al., 2000), it may heighten “working memory” in others (e.g., PASAT performance, P. Sjøgren, Thomsen, and Olsen, 2000). The interpretation of the relationship between pain and cognitive dysfunction is complicated by a variety of symptoms that are often highly associated with pain and may be key factors in this relationship, including anxiety, depression, sleep disturbance, and emotional distress (Iezzi et al., 1999; Jorge et al., 1999; S. Thomas et al., 2000). Pain with suffering, which can be distinguished from pain per se, and pain behavior are more common in patients with cognitive disruption (J.B. Wade and Hart, 2002).
Cripe and his colleagues (1995) pointed out that the chronicity of the problem (neurologic symptoms, pain, and/or emotional distress) may be a relevant factor in the patient’s behavior as “neurologically impaired patients … might experience more acute emotional distress in the acute phase of their illness” than at later stages (p. 265). Women, particularly those who tend to be fearful, experience lower pain thresholds compared to men (Keogh and Birkby, 1999). Unfortunately, minorities in the United States, African Americans and Latinos, are more likely to have their pain underestimated by providers and to be undertreated (Cintron and Morrison, 2006). Pain assessment scales may indicate the degree of suffering experienced by the patient, and mood assessment scales and symptom checklists may help clarify the role of emotional factors in the patient’s experience of pain. A variety of assessment tools are available and have been developed for specific
pain syndromes (R.L. Tate, 2010; Turk and Melzack, 2001). Cripe (1996b) cautioned against using inventories designed to assist in psychiatric diagnosis (e.g., the Minnesota Multiphasic Personality Inventory) to identify patients for whom pain is a significant problem. Measures of the patient’s ability to muster and sustain effort may provide insight into the role of low energy and fatigue associated with pain. When patients report that their pain is in the moderate to intense range, interpretation of test scores that are below expectation requires consideration of the role of pain on test performance. R.P. Hart, Martelli, and Zasler (2000) stressed the importance of attempting to minimize the effects of pain on test performance when chronic pain is one of the patient’s presenting complaints. They suggested postponing neuropsychological assessment until aggressive efforts aimed at pain reduction have been tried. In cases in which pain treatment is not successful, they offer a variety of suggestions. It may be possible to alter physical aspects of the testing situation to ensure optimal comfort. Frequent breaks allowing the patient to move about, brief “stand up and stretch breaks,” or short appointments may be helpful.

Performance inconsistency
It is not unusual for patients with cerebral impairments to report that they have “good days” and “bad days,” so it should not be surprising to discover that in some conditions the level of an individual’s performances can vary noticeably from day to day (Bleiberg et al., 1997) and even hour to hour (A. Smith, 1993), especially with lapses of attention (Stuss, Pogue, et al., 1994; van Zomeren and Brouwer, 1990). Repeated examinations using—in so far as possible—tests that are relatively resistant to practice effects will help to identify best performance and typical performance levels in patients with these kinds of ups and downs. The Dixon group (2007) examined the performances of elders with and without mild cognitive impairment on a battery of cognitive tests taken four times over a period of four to six weeks and found that individuals’ inconsistency in performance, adjusted for practice effect, may be a leading indicator of emerging cognitive impairment.

Motivation
Apathy, defined as a lack of self-initiated action, is common across a number of conditions including dementia, Huntington’s disease, traumatic brain injury, and depression (van Reekum et al., 2005). This condition often reflects the patient’s inability to formulate meaningful goals or to initiate and carry out
plans (see pp. 669–670). Behaviorally, motivational defects are associated with lower functional levels in activities of daily living and with caregiver distress. Apathy can occur independently of depression, and the distinction is important for treatment strategies (M.L. Levy, Cummings, et al., 1998). Working with poorly motivated patients can be difficult. Such patients may perform significantly below their capacities unless cajoled, goaded, or otherwise stimulated to perform; and even then, some patients may not fully respond (e.g., see Orey et al., 2000). Damage to limbic-frontal-subcortical circuits appears to underlie apathy in many disorders (Darby and Walsh, 2005). In a SPECT study using the Apathy Inventory (Robert et al., 2002), Alzheimer patients’ lack of initiative was associated with lower perfusion of the right anterior cingulate cortex compared to other brain regions, while lack of interest was associated with lower perfusion in the right middle orbitofrontal gyrus (Benoit et al., 2004). Many other apathy scales are also available (Cummings, Mega, Gray, et al., 1994; Marin et al., 1991; Starkstein, Federoff, et al., 1993).

Anxiety, stress, and distress
It is not unusual for the circumstances leading to a neuropsychological examination to have been experienced as anxiety-producing or stressful. Persons involved in litigation frequently admit to anxiety and other symptoms of stress (Gasquoine, 1997a; Murrey, 2000b). Patients who have acquired neuropsychological and other deficits altering their ability to function normally in their relationships and/or their work and living situations have been going through significant and typically highly stressful and anxiety-producing life changes (T.H. Holmes and Rahe, 1967). Negative expectations about one’s potential performance or abilities can affect test performance (Suhr and Gunstad, 2002).

A 60-year-old minister appeared anxious during memory testing. He had requested a neuropsychological examination because he was no longer able to recall names of his parishioners, some of whom he had known for years. He feared that an examination would reveal Alzheimer’s disease, yet he realized that he had to find out whether this was the problem.
Whereas low levels of anxiety can be alerting, high anxiety levels may result in such mental efficiency problems as slowing, scrambled or blocked thoughts and words, and memory failure (Buckelew and Hannay, 1986; Hogan, 2003; Sarason et al., 1986). High levels of test anxiety have been shown to affect performance adversely on many different kinds of mental ability tests (Bennett-Levy, Klein-Boonschate, et al., 1994; C. Fletcher et al., 1998;
Minnaert, 1999). Specific memory dysfunction in some combat survivors (Vasterling et al., 2010; Yehuda et al., 1995) and exacerbation of cognitive deficits following TBI (Bryant and Harvey, 1999a,b; McMillan, 1996b) have been associated with posttraumatic stress disorder. Some studies found that anxiety and emotional distress do not appear to affect cognitive performances, whether in TBI patients (Gasquoine, 1997b), in “healthy men” (Waldstein et al., 1997), in open-heart surgery candidates (Vingerhoets, De Soete, and Jannes, 1995), or with “emotional disturbances” in psychiatric patients without brain damage as well as in TBI patients (Reitan and Wolfson, 1997b). When anxiety contributes to distractibility, anxiety effects may be reduced by instructions that help to focus the examinee’s attention on the task at hand (Sarason et al., 1986) or by tasks which so occupy the subject’s attention as to override test anxiety (J.H. Lee, 1999).

Depression and frustration
Depression is associated with many brain disorders and may be due to any combination of “neuroanatomic, neurochemical, and psychosocial factors” (Rosenthal, Christensen, and Ross, 1998; Sweet, Newman, and Bell, 1992; see pp. 383–385). It can interfere with the motivational aspects of memory in that the patient simply puts less effort into the necessary recall. Prospective memory may be particularly vulnerable to this aspect of a depressed mental state (Hertel, 2000). Moreover, depression and frustration are often intimately related to fatigue in many ill patients, with and without brain disorders (Akechi et al., 1999); and the pernicious interplay between them can seriously compromise the patient’s performance (Kaszniak and Allender, 1985; Lezak, 1978b). Fatigue-prone patients will stumble more when walking, speaking, and thinking, and become more frustrated, which, in turn, drains their energies and increases their fatigue. The result is a greater likelihood of failure, more frustration, and eventual despair. Repeated failure in exercising previously accomplished skills, difficulty in solving once easy problems, and the need for effort to coordinate previously automatic responses can further contribute to the depression that commonly accompanies brain disorders. After a while, some patients quit trying. Such discouragement usually carries over into their test performances and may obscure cognitive strengths from themselves as well as from the examiner. When examining brain injured patients it is important to deal with problems of motivation and depression. Encouragement is useful. The examiner can deliberately ensure that patients will have some success, no matter how extensive the impairments. Frequently the neuropsychologist may be the first
person to discuss patients’ feelings about their mental changes and to give reassurance that depression is natural and common to people with this condition and that it may well dissipate in time. Many patients experience a great deal of relief and even some lifting of their depression from this kind of informational reassurance. The examiner needs to form a clear picture of a depressed patient’s state at the time of testing, as a mild depression or a transiently depressed mood state is less likely to affect test performance than a more severe one. Depression can—but will not necessarily—interfere with performance through distracting ruminations (M.A. Lau et al., 2007) and/or response slowing (Kalska et al., 1999; Watari et al., 2006) and, most usually, contribute to learning deficits (Goggin et al., 1997; Langenecker, Lee, and Bieliauskas, 2009; Rosenstein, 1998). Yet cognitive performances by depressed patients, whether brain damaged or not, may not be affected by the depression (Reitan and Wolfson, 1997b; Rohling et al., 2002). In one series of patients with moderate to severe TBI, depression affected test scores only a little (Chaytor, Temkin, et al., 2007). Even major depression may not add to neuropsychological impairments (Crews et al., 1999; J.L. Wong, Wetterneck, and Klein, 2000). Sweet and his colleagues (1992) caution examiners not to use mildly depressed scores on tests of attention or memory as evidence of a brain disorder in depressed patients, but rather to look for other patterns of disability or signs of dysfunction.

Patients in litigation
Providing evaluations for legal purposes presents special challenges (Bush, 2005; Larrabee, 2005; Sweet, Ecklund-Johnson, and Malina, 2008). Because the findings in forensic cases are prepared for nonclinicians, the conclusions should be both scientifically defensible and expressed or explained in lay terms. Moreover, at least the major portion of the examination procedures should have supporting references (see Daubert v. Merrell Dow Pharmaceuticals, 509 US 579 [1993]). Consistent with sound clinical practices, the forensic examination may be hypothesis driven and tailored to the patient’s unique condition (Bigler, 2008; Larrabee, 2008). The most important data may be behavioral or qualitative, such as apathy or changes in comportment associated with frontal lobe injuries, and thus appear “subjective.” In these cases, conclusions can be supported by information obtained from persons close to the patient, such as a spouse or intimate friend, and should be explainable in terms of known brain–behavior relationships and reports in the literature rather than deviant test scores. The discussion
presented here summarizes assessment issues and does not cover testifying as an expert witness, court proceedings, or other legal issues (for a full discussion, see Greiffenstein, 2008; Murrey, 2000a). When a psychologist is retained to examine a person involved in litigation, this arrangement may alter the examiner’s duties to the patient as well as the rules of confidentiality (L.M. Binder and Thompson, 1995). Examiners may be asked to have an observer during the examination. Having a third party present can change the climate of the examination by making the patient self-conscious, inducing the patient to perform in a manner expected by the observer, or introducing distractions that normally would not exist (McCaffrey, Fisher, et al., 1996; McSweeny, Becker, et al., 1998). Kehrer and her colleagues (2000) found “a significant observer effect … on tests of brief auditory attention, sustained attention, speed of information processing, and verbal fluency.” They recommend “caution … when any observer is present (including trainees).” For these reasons, the National Academy of Neuropsychology (NAN) Policy and Planning Committee (2000a) strongly recommends that third party observers be excluded from the examination. The NAN committee also pointed out that having a nonpsychologist present violates test security, a concern of test publishers as well, since psychologists have a responsibility to protect test data (Attix et al., 2007). If the examiner is adamant about not allowing an observer into the examining room and explains the reasons for protecting the subject and the test materials from an invasive intrusion, most lawyers will agree to these requirements and, if the issue must be adjudicated, the court will usually support this protection.
If not, the examiner must decide whether to accede to the request; if unwilling, the examiner must be prepared to relinquish the case to another examiner who would accept such an intrusion (see also McCaffrey, Fisher, et al., 1996). Although recording the examination on tape may seem to be a realistic alternative to having an observer present, test security is necessarily compromised by such an arrangement, and the possibly distracting effects of taping on the patient are unknown. Often, forensic evaluations are lengthy due to the perceived need to be thorough. It is particularly important in injury cases that the premorbid status of the patient be established with as much evidence as possible. The examiner should have an understanding of the base rates of the neurobehavioral symptoms relevant to the case at hand (McCaffrey, Palav, et al., 2003; Rosenfeld et al., 2000; Yedid, 2000b). In choosing tests, preference should be given to well-known ones with appropriate normative data and, as much as possible, known rates of error. As
is true for clinical evaluations, when performance below expectation is observed on one test, the reliability of the finding should be assessed using other tests requiring similar cognitive skills. Every effort should be made to understand discrepancies so that spurious findings can be distinguished from true impairment. Emotional problems frequently complicate the patient’s clinical picture. The patient’s emotional and psychiatric status should be assessed in order to appreciate potential contributions of depression, anxiety, or psychotic thinking to test performance. When performance below expectation is observed, the examiner should assess the patient’s motivation and cooperation and, most notably, the possibility that the subject has wittingly (i.e., malingering) or unwittingly exaggerated present symptoms or introduced imagined ones (Larrabee, 2007; Yedid, 2000a). Intentionally feigning or exaggerating symptoms typically occurs in the context of potential secondary gain, which may be financial or psychological (e.g., perpetuating a dependency role) (Pankratz, 1998). Tests have been developed to measure response bias and, especially, deliberate malingering (see Chapter 20). Most tests of motivation examine one or another aspect of memory because of the prevalence of memory complaints in patients who have had any kind of damage to the brain. Tests of motivation involving other cognitive domains are scarce, although data from research studies suggest models (see Pankratz, 1983, 1998). However, the determination of malingering or other response bias must be based on overall clinical evaluation. Alternative explanations for poor performance on these tests should be considered, such as anxiety, perplexity, fatigue, misunderstanding of instructions, or fear of failure. Moreover, for some patients—and especially with some tests—poor performance may only reflect a significant memory or perceptual disorder. 
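The relationship between base rates and the interpretability of a positive validity-test score is Bayes’ theorem in action, and it can be made concrete with a short sketch. The sensitivity and specificity figures below (85% and 90%) are illustrative assumptions for a hypothetical malingering test, not values from any published instrument; only the logic carries over.

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(malingering | positive test score), via Bayes' theorem."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Hypothetical validity test: 85% sensitivity, 90% specificity.
# Base rates roughly bracket clinical (~10%) and forensic (~17%) settings.
for base_rate in (0.10, 0.17, 0.50):
    ppv = positive_predictive_value(base_rate, 0.85, 0.90)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.0%}")
```

At a 10% base rate, fewer than half of the positive scores produced by even this quite accurate hypothetical test would come from actual malingerers, which is why a positive score alone cannot establish malingering.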
Estimates of base rates of malingering vary from clinician to clinician but average around 17% in forensic settings and about 10% in some clinical settings (Rosenfeld et al., 2000). When base rates are this low, the positive predictive accuracy of tests can be unacceptably low, so caution is advised in interpreting scores on malingering tests. Neuropsychological evaluations may be requested to provide evidence for competency determinations, which are made by the court. The purpose of the evaluation and the consequences of impaired performance should be explained to the examinee. Although the risk of antagonizing some people exists, they need to understand that it is important for them to give their best effort in the examination. Test selection should be based on the particular mental capacity in question (K. Sullivan, 2004; see pp. 761–763 for a discussion of tests for
mental capacity). Most competency judgments require that the person has good reality contact, general orientation to time, memory for pertinent personal information, and intact reasoning and judgment, including appreciation of one’s condition, situation, and needs. If an area of impairment is found, the examiner should look for the presence of residual compensatory abilities (M. Freedman, Stuss, and Gordon, 1991). Mental capacity evaluations in criminal cases may involve assessing culpable state of mind or mental capacity to stand trial. The former requires assessment of a defendant’s intent to do something wrong while the latter involves assessing whether a defendant is able to understand the nature of the charges and assist in the defense of the case. The same person may be examined by more than one psychologist within a short period of time when attorneys are seeking to make their case as convincing as possible or when opposing attorneys each request an examination. Since practice effects can be substantial, the second psychologist will want to know which tests have already been given so that alternate tests may be selected, or areas underrepresented at the first examination may be appropriately explored. When this information is not available, the examiner needs to ask the patient if the test materials are familiar and, if so, arrange to see the previous examination’s data before preparing a report. Interpretation of repeated tests is more accurate if their practice effects are known. Neuropsychologists are bound to provide an objective evaluation and to present the findings and conclusions in an unbiased manner. Awareness of the pressures in the forensic setting can help them avoid bias (van Gorp and McMullen, 1997).

MAXIMIZING THE PATIENT’S PERFORMANCE LEVEL

The goal of testing is always to obtain the best performance the patient is capable of producing.
S.R. Heaton and R.K. Heaton, 1981
It is not difficult to get a brain damaged patient to do poorly on a psychological examination, for the quality of the performance can be exceedingly vulnerable to external influences or changes in internal states. All an examiner need do is make these patients tired or anxious, or subject them to any one of a number of distractions most people ordinarily do not even notice, and their test scores will plummet. In neuropsychological assessment, the difficult task is enabling the patient to perform as well as possible. Eliciting the patient’s maximum output is necessary for a valid behavioral assessment.
Interpretation of test scores and of test behavior is predicated on the assumption that the demonstrated behavior is a representative sample of the patient’s true capacity in that area. Of course, it is unlikely that all of a person’s ability to do something can ever be demonstrated; for this reason many psychologists distinguish between a patient’s level of test performance and an estimated ability level. The practical goal is to help patients do their best so that the difference between what they can do and how they actually perform is negligible.
Optimal versus Standard Conditions

In the ideal testing situation, both optimal and standard conditions prevail. Optimal conditions are those that enable patients to do their best on the tests. They differ from patient to patient, but for most brain injured patients they include freedom from distractions, a nonthreatening emotional climate, and protection from fatigue. Standard conditions are prescribed by the test-maker to ensure that each administration of the test is as much like every other administration as possible so that scores obtained on different test administrations can be compared. To this end, many test-makers give detailed directions on the presentation of their test, including specific instructions on word usage, handling the material, etc. Highly standardized test administration is necessary when using norms of tests that have a fine-graded and statistically well standardized scoring system, such as the Wechsler Intelligence Scale tests. By exposing each patient to nearly identical situations, the standardization of testing procedures also enables the examiner to discover the individual characteristics of each patient’s responses. Normally, there need be no conflict between optimal and standard conditions. When brain impaired patients are tested, however, a number of them will be unable to perform well within the confines of the standard instructions. For some patients, the difficulty may be in understanding the standard instructions. It is particularly important to find out what patients understood or retained when their response is so wide of the mark that it is doubtful they were answering the question the examiner asked. In such cases, subtle attention, memory, or hearing defects may emerge; or if the wrong answer was due to a chance mishearing of the question, the patient has an opportunity to correct the error and gain the credit due. It may be necessary to repeat instructions or even paraphrase them.
“The same words do not necessarily mean the same thing to
different people and it is the meaning of the instructions which should be the same for all people rather than the wording” (M. Williams, 1965, p. xvii). Some tests, such as tests on the Wechsler Intelligence Scale, specifically say not to paraphrase. In those cases, answers can be scored for both the standard and nonstandard instructions. The examination of patients can pose other problems. Should a patient not answer a question for 30 seconds or more, the examiner can ask the patient to repeat it, thus finding out whether lack of response is due to inattention, forgetting, slow thinking, uncertainty, or unwillingness to admit failure. When the patient has demonstrated a serious defect of attention, immediate memory, or capacity to make generalizations, it is necessary to repeat the format each time one of a series of similar questions is asked. For example, if the patient’s vocabulary is being tested, the examiner must ask what the word means with every new word, for the subject may not remember how to respond without prompting at each question. This is the kind of aberrant behavior that should be documented and described in the report, for it affords a valuable insight into the patient’s cognitive dysfunction. Scoring questions arise when the patient gives two or more responses to questions that have only one correct or one best answer. When one of the patient’s answers is correct, the examiner should invite the patient to decide which answer is preferred and then score accordingly, unless the test’s administration instructions indicate otherwise. Timing presents even greater and more common standardization problems than incomprehension in that both brain impaired and elderly patients are likely to do timed tests slowly and lose credit for good performances. Many timing problems can be handled by testing the limits.
With a brain damaged population and with older patients (Storandt, 1977), many timed tests should yield two scores: the score for the response within the time limit and another for the performance regardless of time. Nowhere is the conflict between optimal and standard conditions so pronounced or so unnecessary as in the issue of emotional support and reassurance of the test-taking patient. For many examiners, standard conditions have come to mean that they have to maintain an emotionally impassive, standoffish attitude towards their patients when testing. The stern admonitions of test-makers to adhere to the wording of the test manual and not tell the patient whether any single item was passed have probably contributed to the practice of coldly mechanical test administration. From the viewpoint of any but the most severely regressed or socially insensitive patient, that kind of test experience is very anxiety-provoking.
Almost every patient approaches psychological testing with a great deal of apprehension. Brain injured patients and persons suspected of harboring a brain tumor or some insidious degenerative disease are often frankly frightened. When confronted with an examiner who displays no facial expression and speaks in a flat, monotonic voice, who never smiles, and who responds only briefly and curtly to the patient’s questions or efforts at conversation, patients generally assume that they are doing something wrong—failing or displeasing the examiner. Their anxiety soars. Such a threatening situation can compromise some aspects of the test performance. Undue anxiety certainly will not be conducive to a representative performance (Bennett-Levy, Klein-Boonschate, et al., 1994). Fear of appearing stupid may also prevent impaired patients from showing what they can do. In working with patients who have memory disorders, the examiner needs to be aware that, to save face, many of them say they cannot remember not only when they cannot remember but also when they can make a response but are unsure of its correctness. When the examiner gently and encouragingly pushes them in a way that makes them feel more comfortable, most patients who at first denied any recall of test material demonstrate at least some memory. Although standard conditions do require that the examiner adhere to the instructions in the test manual and give no hint regarding the correctness of a response, these requirements can easily be met without creating a climate of fear and discomfort. A sensitive examination calls for the same techniques the psychologist uses to put a patient at ease in an interview and to establish a good working relationship. Conversational patter is appropriate and can be very anxiety-reducing. The examiner can maintain a relaxed conversational flow with the patient throughout the entire test session without permitting it to interrupt the administration of any single item or task.
The examiner can give continual support and encouragement to the patient without indicating success or failure by smiling and rewarding the patient’s efforts with words such as “fine” or “good,” which do not indicate whether the patient passed or failed an item. If a patient wants to know whether a response is correct, the examiner must explain that it is not possible to give this information and that a general performance summary will be given at the end of the examination. Of course, without being able to score many of the tests at this point, the summary will be “off the cuff,” with limited details, and offered as such.
When Optimal Conditions Are Not Best
Some patients who complain of significant problems attending, learning, and responding efficiently in their homes or at work perform well in the usual protective examination situation. Their complaints, when not supported by examination findings, may become suspect or be interpreted as signs of some emotional disturbance reactive to or exacerbated by a recent head injury or a chronic neurologic disease. Yet the explanation for the discrepancy between their complaints and their performance can lie in the calm and quiet examining situation in which distractions are kept to a minimum. This contrasts with their difficulties concentrating in a noisy machine shop or buzzing busy office, or keeping thoughts and perceptions focused in a shopping mall with its flashing lights, bustling crowds, and piped-in music from many cacophonous sources. Of course an examination cannot be conducted in a mall. However, the examiner can usually find a way to test the effects of piped-in music or distracting street or corridor noises on a patient’s mental efficiency. Those examiners whose work setting does not provide a sound-proofed room with controlled lighting and no interruptions may not always be able to evoke their patients’ best performance, but they are likely to learn more about how the patients perform in real life.
Talking to Patients

With few exceptions, examiners will communicate best by keeping their language simple. Almost all of the concepts that professionals tend to communicate in technical language can be conveyed in everyday words. It may initially take some effort to substitute “find out about your problem” for “differential diagnosis,” or “loss of sight to your left” for “left homonymous hemianopsia,” or “difficulty thinking in terms of ideas” for “abstract conceptualization.” Examiners may find that forcing themselves to word these concepts in everyday speech adds to their own understanding as well. Exceptions to this rule may be those brain damaged patients who were originally well endowed and highly accomplished, for whom complex ideation and an extensive vocabulary came naturally, and who need recognition of their premorbid status and reassurance of residual intellectual competencies. Talking at their educational level conveys this reassurance and acknowledges their intellectual achievements implicitly, even more forcefully than telling them that they are bright. In reviewing results of an examination, most patients will benefit from a short explanation of their strengths and weaknesses. If the entire set of results
is presented, the patient will likely be overwhelmed and not retain the information. A good rule of thumb is to select up to three weaknesses and explain them in simple language. See if the patient can relate the information to their daily experience. To keep up the patient’s spirits, balance the few weaknesses with a similar number of strengths. Finding strengths can be more challenging than finding weaknesses in some cases and may require statements such as, “And you have a supportive family.” Many patients will benefit from having the results of the examination explained to them on a day other than that of the examination, when they may be too fatigued to process the information. If a patient’s spouse or someone close to the patient can be there, all the better for ensuring that what was said is understood and retained by someone. Waiting until a later time also gives the patient a chance to formulate questions. Now for some “don’ts.” Don’t “invite” patients to be examined, to take a particular test or, for that matter, to do anything they need to do. If you invite people to do something or ask if they would care to do it, they can say “no” as well as “yes.” Once a patient has refused, you have no choice but to go along with the decision since you offered the opportunity. Therefore, when patients must do something, tell them what it is they need to do as simply and as directly as you can. I have a personal distaste for using expressions such as “I would like you to …” or “I want you to …” when asking patients to do something [mdl]. I feel it is important for them to undertake for their own sake whatever it is the clinician asks or recommends and that they not do it merely or even additionally to please the clinician.
Thus, I tell patients what they need to do using such expressions as, “I’m going to show you some pictures and your job is to …” or, “When I say ‘Go,’ you are to ….” My last “don’t” also concerns a personal distaste, and that is for the use of the first person plural when asking the patient to do something: “Let’s try these puzzles” or “Let’s take a few minutes’ rest.” The essential model for this plural construction is the kindergarten teacher’s directive, “Let’s go to the bathroom.” The usual reason for it is reluctance to appear bossy or rude. Because it smacks of the kindergarten and is inherently incorrect (the examiner is not going to take the test, nor does the examiner need a rest from the testing), sensitive patients may feel they are being demeaned.

CONSTRUCTIVE ASSESSMENT

Every psychological examination can be a personally useful experience for the patient. Patients should leave the examination feeling that they have gained
something for their efforts, whether it was an increased sense of dignity or self-worth, insight into their behavior, or constructive appreciation of their problems or limitations. When patients feel better at the end of the examination than they did at the beginning, the examiner has probably helped them to perform at their best. When they understand themselves better at the end than at the beginning, the examinations were probably conducted in a spirit of mutual cooperation in which patients were treated as reasoning, responsible individuals. It is a truism that good psychological treatment requires continuing assessment. By the same token, good assessment will also contribute to each patient’s psychological well-being.
1 In the United States, examining clinicians providing health care services are now required by the Health Insurance Portability and Accountability Act (HIPAA) to review items 1–5 above with their patients or patients’ guardians (American Psychological Association, no date).
1 Where possible, tests in the public domain will be identified when presented in this text.
6 The Neuropsychological Examination: Interpretation

THE NATURE OF NEUROPSYCHOLOGICAL EXAMINATION DATA

The basic data of psychological examinations, like any other psychological data, are behavioral observations. In order to get a broad and meaningful sample of the patient’s behavior from which to draw diagnostic inferences or conclusions relevant to patient care and planning, the psychological examiner needs to have made or obtained reports of many different kinds of observations, including historical and demographic information.
Different Kinds of Examination Data

Background data
Background data are essential for providing the context in which current observations can be best understood. In most instances, accurate interpretation of the patient’s examination behavior and test responses requires at least some knowledge of the developmental and medical history, family background, educational and occupational accomplishments (or failures), and the patient’s current living situation and level of social functioning. The examiner must take into account a number of patient variables when evaluating test performances, including sensory and motor status, alertness cycles and fatigability, medication regimen, and the likelihood of drug or alcohol dependency. An appreciation of the patient’s current medical and neurological status can guide the examiner’s search for a pattern of neuropsychological deficits. The importance of background information in interpreting examination observations is obvious when evaluating a test score on school-related skills such as arithmetic and spelling or in the light of a vocational history that implies a particular performance level (e.g., a journeyman millwright must be of at least average ability but is more likely to achieve high average or even better scores on many tests; to succeed as an executive chef requires at least high average ability but, again, many would perform at a superior level on cognitive tests). However, motivation to reach a goal is also important:
professionals can be of average ability while an individual with exceptional ability might be a shoe clerk. The contributions of such background variables as age or education to test performance have not always been appreciated in the interpretation of many different kinds of tests, including those purporting to measure neuropsychological integrity (e.g., not PsychCorp, 2008a; nor Reitan and Wolfson, 1995b; nor Wechsler, 1997a, 1997b provide education data for computed scores or score conversions on any tests).

Behavioral observations
Naturalistic observations can provide very useful information about how the patient functions outside the formalized, usually highly structured, and possibly intimidating examination setting. Psychological examiners rarely study patients in their everyday setting yet reports from nursing personnel or family members may help set the stage for evaluating examination data or at least raise questions about what the examiner observes or should look for. The value of naturalistic observations may be most evident when formal examination findings alone would lead to conclusions that patients are more or less capable than they actually are (Capitani, 1997; Newcombe, 1987). Such an error is most likely to occur when the examiner confounds observed performance with ability. For example, many people who survive even quite severe head trauma in moving vehicle accidents ultimately achieve scores that are within or close to the average ability range on most tests of cognitive function (Crosson, Greene, Roth, et al., 1990; H.S. Levin, Grossman, Rose, and Teasdale, 1979; Ruttan et al., 2008). Yet, by some accounts, as few as one-third of them hold jobs in the competitive market as so many are troubled by problems of attention, temperament, and self-control (Bowman, 1996; Cohadon et al., 2002; Hoofien, Vakil, Cohen, and Sheleff, 1990; Lezak and O’Brien, 1990). The behavioral characteristics that compromise their adequate and sometimes even excellent cognitive skills are not elicited in the usual neuropsychiatric or neuropsychological examination. Mesulam (1986) reviewed several cases of patients with frontal lobe damage who exhibited no cognitive deficits on formal neuropsychological examination (see follow-up by Burgess, Alderman, and colleagues, 2009). However, these deficits become painfully apparent to anyone who is with these patients as they go about their usual activities—or, in many cases, inactivities. 
In contrast, there is the shy, anxious, or suspicious patient who responds only minimally to a white-coated examiner but whose everyday behavior is far superior to anything the examiner sees; and also patients whose coping strategies enable them to function well despite significant cognitive deficits (B.A. Wilson, 2000; R.L.
Wood, Williams, and Kalyani, 2009). How patients conduct themselves in the course of the examination is another source of useful information. Their comportment needs to be documented and evaluated, as their attitudes toward the examination, their conversation or silence, and the appropriateness of their demeanor and social responses can tell a lot about their neuropsychological status as well as enrich the context in which their responses to the examination proper will be evaluated.

Test data

In a very real sense there is virtually no such thing as a neuropsychological test. Only the method of drawing inferences about the tests is neuropsychological.
K.W. Walsh, 1992
Testing differs from these other forms of psychological data gathering in that it elicits behavior samples in a standardized, replicable, and more or less artificial and restrictive situation (S.M. Turner et al., 2001; Urbina, 2004). Its strengths lie in the approximate sameness of the test situation for each subject, for it is the sameness that enables the examiner to compare behavior samples between individuals, over time, or with expected performance levels. Its weaknesses too lie in the sameness, in that psychological test observations are limited to the behaviors prompted by the test situation. To apply examination findings to the problems that trouble the patient, the psychological examiner extrapolates from a limited set of observations to the patient’s behavior in real-life situations. Extrapolation from the data is a common feature of other kinds of psychological data handling as well, since it is rarely possible to observe a human subject in every problem area. Extrapolations are likely to be as accurate as the observations on which they are based are pertinent, precise, and comprehensive, as the situations are similar, and as the generalizations are apt. A 48-year-old advertising manager with originally superior cognitive abilities sustained a right hemisphere stroke with minimal sensory or motor deficits. He was examined at the request of his company when he wanted to return to work. His verbal skills in general were high average to superior, but he was unable to construct two-dimensional geometric designs with colored blocks, put together cut-up picture puzzles, or draw a house or person with proper proportions (see Fig. 6.1). The neuropsychologist did not observe the patient on the job but, generalizing from these samples, she concluded that the visuoperceptual distortions and misjudgments demonstrated on the test would be of a similar kind and would occur to a similar extent with layout and design material. 
The patient was advised against retaining responsibility for the work of the display section of his department. Later conferences with the patient’s employers confirmed that he was no longer able to evaluate or supervise the display operations.
In most instances examiners rely on their common-sense judgments and
practical experiences in making test-based predictions about their patients’ real-life functioning. Studies of the predictive validity and ecological validity of neuropsychological tests show that many of them have a good predictive relationship with a variety of disease characteristics (e.g., pp. 125–126) and practical issues (see p. 126).
FIGURE 6.1 House-Tree-Person drawings of the 48-year-old advertising manager described in the text (size reduced to one-third of original).
Quantitative and Qualitative Data

Every psychological observation can be expressed either numerically as quantitative data or descriptively as qualitative data. Each of these classes of data can constitute a self-sufficient data base as demonstrated by two different approaches to neuropsychological assessment. An actuarial system (Reitan, 1966; Reitan and Wolfson, 1993)—elaborated by others (e.g., Heaton, Grant, and Matthews, 1991; J.A. Moses, Jr., Pritchard, and Adams, 1996, 1999)—exemplifies the quantitative method. It relies on scores, derived indices, and score relationships for diagnostic predictions. Practitioners using this method may have a technician examine the patient so that, except for an introductory or closing interview, their data base is in numerical, often computer-processed,
form. At the other extreme is a clinical approach built upon richly described observations without objective standardization (A.-L. Christensen, 1979; Luria, 1966). These clinicians documented their observations in careful detail, much as neurologists or psychiatrists describe what they observe. Both approaches have contributed significantly to the development of contemporary neuropsychology (Barr, 2008). Together they provide the observational frames of reference and techniques for taking into account, documenting, and communicating the complexity, variability, and subtleties of patient behavior. Although some studies suggest that reliance on actuarial evaluation of scores alone provides the best approach to clinical diagnosis (R.M. Dawes, Faust, and Meehl, 1989), this position has not been consistently supported in neuropsychology (Cimino, 1994; Heaton, Grant, Anthony, and Lehman, 1981; Ogden-Epker and Cullum, 2001). Nor is it appropriate for many—perhaps most—assessment questions in neuropsychology, as only simple diagnostic decision making satisfies the conditions necessary for actuarial predictions to be more accurate than clinical ones: (1) that there be only a small number of probable outcomes (e.g., left cortical lesion, right cortical lesion, diffuse damage, no impairment); (2) that the prediction variables be known (which limits the amount of information that can be processed by an actuarial formula to the information on which the formula was based); and (3) that the data from which the formula was derived be relevant to the questions asked (American Academy of Clinical Neuropsychology, 2007; Pankratz and Taplin, 1982). Proponents of purely actuarial evaluations overlook the realities of neuropsychological practice in an era of advanced neuroimaging and medical technology: most assessments are not undertaken for diagnostic purposes but to describe the patient’s neuropsychological status. 
Even in those instances in which the examination is undertaken for diagnostic purposes the issue is more likely to concern diagnostic discrimination requiring consideration of a broad range of disorders—including the possibility of more than one pathological condition being operative—than making a decision between three or four discrete alternatives. Moreover, not infrequently diagnosis involves variables that are unique to the individual case and not necessarily obvious to a naive observer or revealed by questionnaires, variables for which no actuarial formulas have been developed or are ever likely to be developed (Barth, Ryan, and Hawk, 1992). It is also important to note that the comparisons in most studies purporting to evaluate the efficacy of clinical versus actuarial judgments are not presenting the examiners with real patients with whom the examiner has a live
interaction, but rather with the scores generated in the examination—and just the scores, without even descriptions of the qualitative aspects of the performance (e.g., Faust, Hart, and Guilmette, 1988a; Faust, Hart, Guilmette, and Arkes, 1988b; see also this page). This debate has extended into one concerning “fixed” versus “flexible” approaches (Larrabee, Millis, and Meyers, 2008). Practical judgment and clinical experience support the use of a “flexible” selection of tests to address the referral question(s) and problems/issues raised in neuropsychological consultation (American Academy of Clinical Neuropsychology, 2007).

Quantitative data

The number is not the reality, it is only an abstract symbol of some part or aspect of the reality measured. The number is a reduction of many events into a single symbol. The reality was the complex dynamic performance.
Lloyd Cripe, 1996a, p. 191
Scores are summary statements about observed behavior. Scores may be obtained for any set of behavior samples that can be categorized according to some principle. The scorer evaluates each behavior sample to see how well it fits a predetermined category and then gives it a place on a numerical scale (Urbina, 2004). A commonly used scale for individual test items has two points, one for “good” or “pass” and the other for “poor” or “fail.” Three-point scales, which add a middle grade of “fair” or “barely pass,” are often used for grading ability test items. Few item scales contain more than five to seven scoring levels because the gradations become so fine as to be confusing to the scorer and meaningless for interpretation. Scored tests with more than one item produce a summary score that is usually the simple sum of the scores for all the individual items. Occasionally, test-makers incorporate a correction for guessing into their scoring systems so that the final score is not just a simple summation. Thus, a final test score may misrepresent the behavior under examination on at least two counts: It is based on only one narrowly defined aspect of a set of behavior samples, and it is two or more steps removed from the original behavior. “Global,” “aggregate,” or “full-scale” scores calculated by summing or averaging a set of test scores are three to four steps removed from the behavior they represent. Summary index scores based on item scores that have had their normal range restricted to just two points representing either pass or fail, or “within normal limits” or “brain damaged,” are also many steps removed from the
original observations. Thus “index scores,” which are based on various combinations of scores on two or more—more or less similar—tests suffer the same problems as any other summed score in that they too obscure the data. One might wonder why index scores should exist at all: if the tests entering into an index score are so similar that they can be treated as though they examined the same aspects of cognitive functioning, then two tests would seem unnecessary. On the other hand, if each of two tests produces a different score pattern or normative distribution or sensitivity to particular kinds of brain dysfunction, then the two are different and should be treated individually so that the differences in patient performances on these tests can be evident and available for sensitive test interpretation. The inclusion of test scores in the psychological data base satisfies the need for objective, readily replicable data cast in a form that permits reliable interpretation and meaningful comparisons. Standard scoring systems provide the means for reducing a vast array of different behaviors to a single numerical system (see pp. 165–167). This standardization enables the examiner to compare the score of any one test performance of a patient with all other scores of that patient, or with any group or performance criteria. Completely different behaviors, such as writing skills and visual reaction time, can be compared on a single numerical scale: one person might receive a high score for elegant penmanship but a low one on speed of response to a visual signal; another might be high on both kinds of tasks or low on both. Considering one behavior at a time, a scoring system permits direct comparisons between the handwriting of a 60-year-old stroke patient and that of school children at various grade levels, or between the patient’s visual reaction time and that of other stroke patients of the same age. 
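The two scoring mechanics described above (a sum corrected for guessing, and a standard score that places unlike behaviors on a single numerical scale) can be sketched in a few lines. This is a minimal illustration only; the function names and all norms below are hypothetical, not taken from any test manual:

```python
def corrected_score(rights, wrongs, n_choices):
    """Classic correction for guessing: subtract the number of correct
    answers expected from blind guessing on k-choice items."""
    return rights - wrongs / (n_choices - 1)

def z_score(raw, mean, sd):
    """Standard score: how many standard deviations a raw score lies from
    the normative mean, so that unlike behaviors share one scale."""
    return (raw - mean) / sd

# 40 right, 12 wrong on 5-choice items: 12/4 = 3 answers were likely lucky guesses.
print(corrected_score(40, 12, 5))     # 37.0

# Hypothetical norms: handwriting quality (0-40 scale) vs. visual reaction time (ms).
print(z_score(34, mean=25, sd=5))     # 1.8 standard deviations above the mean
print(z_score(310, mean=250, sd=40))  # 1.5 standard deviations above the mean
```

Note that for reaction time a higher raw score means poorer performance, so the sign of the standard score must be interpreted (or flipped) accordingly; that interpretive step is exactly the kind of judgment the surrounding text argues cannot be delegated to the numbers themselves.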
Problems in the evaluation of quantitative data

To reason—or do research—only in terms of scores and score-patterns is to do violence to the nature of the raw material.
Roy Schafer, 1948
When interpreting test scores it is important to keep in mind their artificial and abstract nature. Some examiners come to equate a score with the behavior it is supposed to represent. Others prize standardized, replicable test scores as “harder,” more “scientific” data at the expense of unquantified observations. Reification of test scores can lead the examiner to overlook or discount direct observations. A test-score approach to psychological assessment that minimizes the importance of qualitative data can result in serious distortions in
the interpretations, conclusions, and recommendations drawn from such a one-sided data base. To be neuropsychologically meaningful, a test score should represent as few kinds of behavior or dimensions of cognitive functions as possible. The simpler the test task, the clearer the meaning of scored evaluations of the behavior elicited by that task. Correspondingly, it is often difficult to know just what functions contribute to a score obtained on a complex, multidimensional test task without appropriate evaluation based on a search for commonalities in the patient’s performances on different tests, hypotheses generated from observations of the qualitative features of the patient’s behavior, and the examiner’s knowledge of brain-behavior relationships and how they are affected by neuropathological conditions (Cipolotti and Warrington, 1995; Darby and Walsh, 2005; Milberg, Hebben, and Kaplan, 1996). If a score is overinclusive, as in the case of summed or averaged test battery scores, it becomes virtually impossible to know just what behavioral or cognitive characteristic it represents. Its usefulness for highlighting differences in ability and skill levels is nullified, for the patient’s behavior is hidden behind a hodgepodge of cognitive functions and statistical manipulations (J.M. Butler et al., 1963; A. Smith, 1966). N. Butters (1984b) illustrated this problem in reporting that the “memory quotient” (MQ) obtained by summing and averaging scores on the Wechsler Memory Scale (WMS) was the same for two groups of patients, each with very different kinds of memory disorders based on very different neuropathological processes. His conclusion that “reliance on a single quantitative measure of memory … for the assessment of amnesic symptoms may have as many limitations as does the utilization of an isolated score … for the full description of aphasia” (p. 33) applies to every other kind of neuropsychological dysfunction as well.
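The point about overinclusive summary scores can be made concrete with a toy example (hypothetical numbers, not drawn from any published test): two patients whose subtest profiles differ sharply can still produce an identical “global” score.

```python
from statistics import mean

# Hypothetical subtest scores for two patients, on a common scale.
flat_profile = [10, 10, 10, 10]    # uniformly average performance
uneven_profile = [16, 12, 8, 4]    # clear strengths alongside marked deficits

# Averaging erases the difference: both summary scores come out the same.
print(mean(flat_profile))                          # 10
print(mean(uneven_profile))                        # 10
print(mean(flat_profile) == mean(uneven_profile))  # True
```

The summary score is identical, yet the second patient’s pattern of strengths and deficits, which is precisely what neuropsychological interpretation needs, has vanished into the average.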
The same principle of multideterminants holds for single test scores too as similar errors lowering scores in similar ways can occur for different reasons (e.g., attentional deficits, language limitations, motor slowing, sensory deficits, slowed processing, etc.). Further, the range of observations an examiner can make is restricted by the test. This is particularly the case with multiple-choice paper-and-pencil tests and those that restrict the patient’s responses to button pushing or another mechanized activity that limits opportunities for self-expression. A busy examiner may not stay to observe the cooperative, comprehending, or docile patient manipulating buttons or levers or taking a paper-and-pencil test. Multiple-choice and automated tests offer no behavior alternatives beyond the prescribed set of responses. Qualitative differences in these test performances are recorded only when there are frank aberrations in test-taking behavior,
such as qualifying statements written on the answer sheet of a personality test or more than one alternative marked on a single-answer multiple-choice test. For most paper-and-pencil or automated tests, how the patient solves the problem or goes about answering the question remains unknown or is, at best, a matter of conjecture based on such relatively insubstantial information as heaviness or neatness of pencil marks, test-taking errors, patterns of nonresponse, erasures, and the occasional pencil-sketched spelling tryouts or arithmetic computations in the margin. In addition, the fine-grained scaling provided by the most sophisticated instruments for measuring cognitive competence is not suited to the assessment of many of the behavioral symptoms of cerebral neuropathology. Defects in behaviors that have what can be considered “species-wide” norms, i.e., that occur at a developmentally early stage and are performed effectively by all but the most severely impaired school-aged children, such as speech and dressing, are usually readily apparent. Quantitative norms generally do not enhance the observer’s sensitivity to these problems nor do any test norms pegged at adult ability levels when applied to persons with severe defects in the tested ability area. Using a finely scaled vocabulary test to examine an aphasic patient, for example, is like trying to discover the shape of a flower with a microscope: the examiner will simply miss the point. Moreover, behavioral aberrations due to brain dysfunction can be so highly individualized and specific to the associated lesion that their distribution in the population at large, or even in the brain impaired population, does not lend itself to actuarial prediction techniques (W.G. Willis, 1984). The evaluation of test scores in the context of direct observations is essential when doing neuropsychological assessment.
For many brain impaired patients, test scores alone give relatively little information about the patient’s functioning. The meat of the matter is often how a patient solves a problem or approaches a task rather than what the score is. “There are many reasons for failing and there are many ways you can go about it. And if you don’t know in fact which way the patient was going about it, failure doesn’t tell you very much” (Darby and Walsh, 2005). There can also be more than one way to pass a test. A 54-year-old sales manager sustained a right frontal lobe injury when he fell as a result of a heart attack with several moments of cardiac arrest. On the Hooper Visual Organization Test, he achieved a score of 26 out of a possible 30, well within the normal range. However, not only did his errors reflect perceptual fragmentation (e.g., he called a cut-up broom a “long candle in holder”), but his correct responses were also fragmented (e.g., “wrist and hand and fingers” instead of the usual response, “hand”; “ball stitched and cut” instead of “baseball”). Another patient, a 40-year-old computer designer with a seven-year history of multiple
sclerosis, made only 13 errors on the Category Test (CT), a number considerably lower than the 27 error mean reported for persons at his very high level of mental ability (Mitrushina, Boone, et al., 2005). (His scores on the Gates-MacGinitie Vocabulary and Comprehension tests were at the 99th percentile; WAIS-R Information and Arithmetic age-graded scaled scores were in the very superior and superior ranges, respectively.) On two of the more difficult CT subtests he figured out the response principle within the first five trials, yet on one subtest he made 4 errors after a run of 14 correct answers and on the other he gave 2 incorrect responses after 15 correct answers. This error pattern suggested difficulty keeping in mind solutions that he had figured out easily enough but lost track of while performing the task. Nine repetitions on the first five trials of the Auditory Verbal Learning Test and two serial subtraction errors unremarked by him, one on subtracting “7s” when he went from “16” to “19,” the other on the easier task of subtracting 3s when he said “23, 21,” further supported the impression that this graduate engineer “has difficulty in monitoring his mental activity … and [it] is probably difficult for him to do more than one thing at a time.” (K. Wild, personal communication, 1991).
This latter case also illustrates the relevance of education and occupation in evaluating test performances since, by themselves, all of these scores are well within normal limits, none suggestive of cognitive dysfunction. Moreover, “Different individuals may obtain the same test score on a particular test for very different reasons” (C. Ryan and Butters, 1980b). Consider two patients who achieve the same score on the WIS-A Arithmetic test but may have very different problems and abilities with respect to arithmetic. One patient performs the easy, single operation problems quickly and correctly but fails the more difficult items requiring two operations or more for solution because of an inability to retain and juggle so much at once in his immediate memory. The other patient has no difficulty remembering item content. She answers many of the simpler items correctly but very slowly, counting aloud on her fingers. She is unable to conceptualize or perform the operations on the more difficult items. The numerical score masks the disparate performances of these patients. As this test exemplifies, what a test actually is measuring may not be what its name suggests or what the test maker has claimed for it: while it is a test of arithmetic ability for some persons with limited education or native learning ability, the WIS-A Arithmetic’s oral format makes it a test of attention and short-term memory for most adults, a feature that is now recognized by the test maker (PsychCorp, 2008a; Wechsler, 1997a; see also p. 657). Walsh (1992) called this long-standing misinterpretation of what Arithmetic was measuring, “The Pitfall of Face Validity.” The potential for error when relying on test scores alone is illustrated in two well-publicized studies on the clinical interpretation of test scores. Almost all of the participating psychologists drew erroneous conclusions from test scores faked by three preadolescents and three adolescents, respectively (Faust et al., 1988a; 1988b).
Although the investigators used these data to question the ability of neuropsychological examiners to detect malingering, their findings are open to two quite different interpretations: (1) Valid interpretations of neuropsychological status cannot be accomplished by reliance on scores alone. Neuropsychological assessment requires knowledge and understanding of how the subject performed the tests, of the circumstances of the examination—why, where, when, what for—and of the subject’s appreciation of and attitudes about these circumstances. The psychologist/subjects of these studies did not have access to this information and apparently did not realize the need for it. (2) Training, experience, and knowledge are prerequisites for neuropsychological competence. Of 226 mailings containing the children’s protocols that were properly addressed, only seventy-seven (34%) “usable ones” were returned; of the adolescent study, again only about one-third of potential judges completed the evaluation task. The authors made much of the 8+ years of practice in neuropsychology claimed by these respondent-judges, but they noted that in the child study only “about 17%” had completed formal postdoctoral training in neuropsychology, and in the adolescent study this number dropped to 12.5%. They did not report how many diplomates of the American Board of Professional Psychology in Neuropsychology participated in each study. (Bigler [1990b] found that only one of 77 respondents to the child study had achieved diplomate status!); nor did they explain that any psychologist can claim to be a neuropsychologist with little training and no supervision. An untrained person can be as neuropsychologically naive in the 8th or even the 16th year of practice as in the first.
Those psychologists who were willing to draw clinical conclusions from this kind of neuropsychological numerology may well have been less well-trained or knowledgeable than the greater number of psychologists who actively declined or simply did not send in the requested judgments. (I was one who actively declined [mdl].)

Qualitative data
Qualitative data are direct observations. In the formal neuropsychological examination these include observations of the patient’s test-taking behavior as well as test behavior per se. Observations of patients’ appearance, verbalizations, gestures, tone of voice, mood and affect, personal concerns, habits, and idiosyncrasies can provide a great deal of information about their life situation and overall adjustment, as well as attitudes toward the examination and the condition that brings them to it. More specific to the test situation are observations of patients’ reactions to the examination itself, their approach to different kinds of test problems, and their expressions of feelings and opinions about how they are performing. Observations of the manner in which they handle test material, the wording of test responses, the nature and consistency of errors and successes, fluctuations in attention and perseverance, emotional state, and the quality of performance from moment to moment as they interact with the examiner and with the different kinds of test material are the qualitative data of the test performance itself (Milberg, Hebben, and Kaplan, 2009). Limitations of qualitative data
Distortion or misinterpretation of information obtained by direct observation
results from different kinds of methodological and examination problems. All of the standardization, reliability, and validity problems inherent in the collection and evaluation of data by a single observer are ever-present threats to objectivity (Spreen and Risser, 2003, p. 46). In neuropsychological assessment, the vagaries of neurological impairment compound these problems. When the patient’s communication skills are questionable, examiners can never be certain that they have understood their transactions with the patient—or that the patient has understood them. Worse yet, the communication disability may be so subtle and well masked by the patient that the examiner is not aware of communication slips. There is also the likelihood that the patient’s actions will be idiosyncratic and therefore unfamiliar and subject to misunderstanding. Some patients may be entirely or variably uncooperative, many times quite unintentionally. Moreover, when the neurological insult does not produce specific defects but rather reduces efficiency in the performance of behaviors that tend to be normally distributed among adults, such as response rate, recall of words or designs, and ability to abstract and generalize, examiners benefit from scaled tests with standardized norms. The early behavioral evidence of a deteriorating disease and much of the behavioral expression of traumatic brain injury or little strokes can occur as a quantifiable diminution in the efficiency of the affected system(s) rather than as a qualitative distortion of the normal response. A pattern of generalized diminished function can follow conditions of rapid onset, such as trauma, stroke, or certain infections, once the acute stages have passed and the first vivid and highly specific symptoms have dissipated. 
In such cases it is often difficult if not impossible to appreciate the nature or extent of cognitive impairment without recourse to quantifiable examination techniques that permit a relatively objective comparison between different functions. By and large, as clinicians gain experience with many patients from different backgrounds, representing a wide range of abilities, and suffering from a variety of cerebral insults, they are increasingly able to estimate or at least anticipate the subtle deficits that show up as lowered scores on tests. This sharpening of observational talents reflects the development of internalized norms based on clinical experience accumulated over the years.

Blurring the line between quantitative and qualitative evaluations
Efforts to systematize and even enhance observation of how subjects go about failing—or succeeding—on tests have produced a potentially clinically valuable hybrid: quantification of the qualitative aspects of test responses
(Poreh, 2000). Glozman (1999) showed how the examination procedures considered to be most qualitative (i.e., some of Luria’s examination techniques) can be quantified and thus adaptable for retest comparisons and research. She developed a 6-point scale ranging from 0 (no symptoms) to 3 (total failure), with half-steps between 0 and 1 and between 1 and 2 to document relatively subtle differences in performance levels. Other neuropsychologists have developed systems for scoring qualitative features. Joy, Fein and colleagues (2001) demonstrated this hybrid technique in their analysis of Block Design (WIS-A) performances into specific components that distinguish good from poor solutions. Based on their observations, they devised a numerical rating scheme and normed it on a large sample of healthy older (50 to 90 years of age) subjects, thus providing criteria for normal ranges of error types for this age group. Joy and his colleagues emphasized that the purely quantitative “pass–fail” scoring system does not do justice to older subjects who may copy most but not quite all of a design correctly. Similarly, Hubbard and colleagues (2008) used a mixture of quantitative and qualitative measures to assess clock drawing performance in cognitively normal elderly persons (55 to 98 years of age). These measures provide a comparison for evaluating a number of neuropsychological functions including visuoconstructive and visuospatial as well as language skills and hemiattention. This type of scoring for qualitative features allows the clinician to make judgments based on the qualitative aspects of a patient’s performance while supporting clinical judgment with quantitative data. Quantified qualitative errors provide information about lateralized deficits that summary scores alone cannot give.
For example, quantifying broken configuration errors on Block Design discriminated seizure patients with left hemisphere foci from those with foci on the right as the latter made more such errors (p = .008) although the raw score means for these two groups were virtually identical (left, 26.6 ± 12.4; right, 26.4 ± 12.8) (Zipf-Williams et al., 2000). Perceptual fragmentation (naming a part rather than the whole pictured puzzle) on the Hooper Visual Organization Test was a problem for more right than left hemisphere stroke patients, while the reverse was true for failures in providing the correct name of the picture (Merten, Volkel, and Dornberg, 2007; Nadler, Grace, et al., 1996, see p. 400). Methods for evaluating strategy and the kinds of error made in copying the Complex Figure have been available for decades (see pp. 582–584). Their score distributions, relationships to recall scores, interindividual variability, and executive function correlates were evaluated by Troyer and Wishart (1997)
who, noting that not all of them had satisfactory statistical properties, suggested that examiners “may wish to select a system appropriate for their needs.”

Integrated data
The integrated use of qualitative and quantitative examination data treats these two different kinds of information as different parts of the whole data base. Test scores that have been interpreted without reference to the context of the examination in which they were obtained may be objective but meaningless in their individual applications. Clinical observations unsupported by standardized and quantifiable testing, although full of import for the individual, lack the comparability necessary for many diagnostic and planning decisions. Descriptive observations flesh out the skeletal structure of numerical test scores. Each is incomplete without the other. The value of taking into account all aspects of a test performance was exemplified in a study comparing the accuracy of purely score-based predictors of lateralization with accuracy based on score profiles plus qualitative aspects of the patient’s performance (Ogden-Epker and Cullum, 2001). Accuracy was greatest when qualitative features entered into performance interpretation. Neuropsychology is rapidly moving into an era in which unprecedented clinical information will be available on every patient, including genetic, neuroimaging, and other neurodiagnostic studies that ultimately need to be integrated with the neuropsychological consultation and test findings. Indeed, the era of neuroinformatics contributing to neuropsychological decision making is upon us (Jagaroo, 2010). These kinds of data call for full integration.
Common Interpretation Errors

1. If this, then that: the problem of overgeneralizing
Kevin Walsh (1985) described a not uncommon kind of interpretation error made when examiners overgeneralize their findings. He gave the example of two diagnostically different groups (patients with right hemisphere damage and those with chronic alcoholism) generating one similar cluster of scores, a parallel that led some investigators to conclude that chronic alcoholism somehow shriveled the right but not the left hemisphere (see p. 306). At the individual case level, dementia patients as well as chronic alcoholics can earn depressed scores on the same WIS tests that are particularly sensitive to right hemisphere damage. If all that the examiner attends to is this cluster of low
scores, then diagnostic confusion can result. The logic of this kind of thinking “is the same as arguing that because a horse meets the test of being a large animal with four legs [then] any newly encountered large animal with four legs must be a horse” (E. Miller, 1983).

2. Failure to demonstrate a reduced performance: the problem of false negatives
The absence of low scores or other evidence of impaired performance is expected in intact persons but will also occur when brain damaged patients have not been given an appropriate examination (Teuber, 1969). If a function or skill is not examined, its status will remain unknown. Moreover, the typical neuropsychological examination is no substitute for reality: it is undertaken in a controlled environment that usually minimizes extraneous stimuli, with assessment done on a one-to-one basis. This does not replicate the real world circumstances that may be particularly challenging for the neurologically impaired individual.

3. Confirmatory bias
This is the common tendency to “seek and value supportive evidence at the expense of contrary evidence” when the outcome is [presumably] known (Wedding and Faust, 1989). A neuropsychologist who specializes in blind analysis of Halstead-Reitan data reviewed the case of a highly educated middle-aged woman who claimed neuropsychological deficits as a result of being stunned when her car was struck from the rear some 21 months before she took the examination in question. In the report based on his analysis of the test scores alone, the neuropsychologist stated that, “The test results would be compatible with some type of traumatic injury (such as a blow to the head), but they could possibly have been due to some other kind of condition, such as viral or bacterial infection of the brain.” He later concluded that, although he had suspected an infectious disorder as an alternative diagnostic possibility, the case history he subsequently reviewed provided no evidence of encephalitis or meningitis, deemed by him to be the most likely types of infection. He thus concluded that the injury sustained in the motor vehicle accident caused the neuropsychological deficits indicated by the test data. Interestingly, the patient’s medical history showed that complaints of sensory alterations and motor weakness dating back almost two decades were considered to be suggestive of multiple sclerosis; a recent MRI scan added support to this diagnostic possibility.

4. Misuse of salient data: over- and underinterpretation
Wedding and Faust (1989) made the important point that a single dramatic finding (which could simply be a normal mistake; see Roy, 1982) may be given much greater weight than a not very interesting history that extends over years (such as steady employment) or base rate data. On the other hand, a cluster of a few abnormal examination findings that correspond with the
patient’s complaints and condition may provide important evidence of a cerebral disorder, even when most scores reflect intact functioning. Gronwall (1991) illustrated this problem using mild head trauma as an example, as many of these patients perform at or near premorbid levels except on tests sensitive to attentional disturbances. If only one or two such tests are given, then a single abnormal finding could seem to be due to chance when it is not.

5. Underutilization or misutilization of base rates
Base rates are particularly relevant when evaluating “diagnostic” signs or symptoms (D. Duncan and Snow, 1987). When a sign occurs more frequently than the condition it indicates (e.g., more people have mild verbal retrieval problems than have early Alzheimer’s disease), relying on that sign as a diagnostic indicator “will always produce more errors than would the practice of completely disregarding the sign(s)” (B.W. Palmer, Boone, Lesser, and Wohl, 1998; Wedding and Faust, 1989). Another way of viewing this issue is to regard any sign that can occur with more than one condition as possibly suggestive but never pathognomonic. Such signs can lead to potentially fruitful hypotheses but not to conclusions. Thus, slurred speech rarely occurs in the intact adult population and so is usually indicative of some problem; but whether that problem is multiple sclerosis, a relatively recent right hemisphere infarct, or acute alcoholism—all conditions in which speech slurring can occur—must be determined by some other means. A major limitation in contemporary neuropsychology is that base rate data for neurobehavioral and neurocognitive symptoms/problems are often lacking for a particular disorder, or the available information is based on inadequate sampling. Proper base rate studies need to be large scale, prospective, and conducted independently, with several types of clinical disorders examined within a population. Such in-depth investigations of a neuropsychological variable are rare but necessary. Compounding the base rate problem is the use of inappropriate base rate data, which can be as distorting as using no base rate data at all. For example, G.E. Smith, Ivnik, and Lucas (2008) note the differences in the ratios for identifying probable Alzheimer patients on the basis of a verbal fluency score depending on whether the base rate was developed on patients coming to a memory clinic or on persons in the general population (see also B.L.
Brooks, Iverson, and White, 2007, for base rate variations and ability levels).

6. Effort effects
Both the American Academy of Clinical Neuropsychology and the National
Academy of Neuropsychology have produced position papers supporting the use of effort testing in neuropsychological assessment as a means to address the validity of an assessment (S.S. Bush, Ruff, et al., 2005; Heilbronner, Sweet, et al., 2009). Underperformance on neuropsychological measures because of insufficient effort results in a patient’s performance appearing impaired when it is not (see Chapter 20).

EVALUATION OF NEUROPSYCHOLOGICAL EXAMINATION DATA
Qualitative Aspects of Examination Behavior

Two kinds of behavior are of special interest to the neuropsychological examiner when evaluating the qualitative aspects of a patient’s behavior during the examination. One, of course, is behavior that differs from normal expectations or customary activity for the circumstances. Responding to Block Design instructions by matter-of-factly setting the blocks on the stimulus cards is obviously an aberrant response that deserves more attention than a score of zero alone would indicate. Satisfaction with a blatantly distorted response or tears and agitation when finding some test items difficult also should elicit the examiner’s interest, as should statements of displeasure with a mistake unaccompanied by any attempt to correct it. Each of these behavioral aberrations may arise for any number of reasons. However, each is most likely to occur in association with certain neurological conditions and thus can also alert the examiner to look for other evidence of the suspected condition. Regardless of their possible diagnostic usefulness, these aberrant responses also afford the examiner samples of behavior that, if characteristic, tell a lot about how patients think and how they perceive themselves, the world, and its expectations. The patient who sets blocks on the card not only has not comprehended the instructions but also is not aware of this failure when proceeding—unselfconsciously?—with this display of very concrete, structure-dependent behavior. Patients who express pleasure over an incorrect response are also unaware of their failures but, along with a distorted perception of the task, the product, or both, they demonstrate self-awareness and some sense of a scheme of things or a state of self-expectations that this performance satisfied. The second kind of qualitatively interesting behavior deserves special attention whether or not it is aberrant.
Gratuitous responses are the comments patients make about their test performance or while they are taking
the test, or the elaborations beyond the necessary requirements of a task that may enrich or distort their drawings, stories, or problem solutions, and usually individualize them. The value of gratuitous responses is well recognized in the interpretation of projective test material, for it is the gratuitously added adjectives, adverbs, or action verbs, flights of fancy whether verbal or graphic, spontaneously introduced characters, objects, or situations, that reflect the patient’s mood and betray his or her preoccupations. Gratuitous responses are of similar value in neuropsychological assessment. The unnecessarily detailed spokes and gears of a bike with no pedals (see Fig. 6.2) tell of the patient’s involvement with details at the expense of practical considerations. Expressions of self-doubt or self-criticism repeatedly voiced during a mental examination may reflect perplexity or depression and raise the possibility that the patient is not performing up to capacity (Lezak, 1978b).
FIGURE 6.2 This bicycle was drawn by a 61-year-old retired millwright with a high school education. Two years prior to the neuropsychological examination he had suffered a stroke involving the right parietal lobe. He displayed no obvious sensory or motor deficits, and was alert, articulate, and cheerful but so garrulous that his talking could be interrupted only with difficulty. His highest WAIS scores, Picture Completion and Picture Arrangement, were in the high average ability range.
In addition, patient responses gained by testing the limits or using the standard test material in an innovative manner to explore one or another working hypothesis have to be evaluated qualitatively. For example, on asking a patient to recall a set of designs ordinarily presented as a copy task (e.g., Wepman’s variations of the Bender-Gestalt Test, see p. 571) the examiner will look for systematically occurring distortions—in size, angulation, simplifications, perseverations—that, if they did not occur on the copy trial,
may shed some light on the patient’s visual memory problems. In looking for systematic deviations in these and other drawing characteristics that may reflect dysfunction of one or more behavioral systems, the examiner also analyzes the patient’s self-reports, stories, and comments for such qualities as disjunctive thinking, appropriateness of vocabulary, simplicity or complexity of grammatical constructions, richness or paucity of descriptions, etc.
Test Scores

Test scores can be expressed in a variety of forms. Rarely does a test-maker use a raw score—the simple sum of correct answers or correct answers minus a portion of the incorrect ones—for in itself a raw score communicates nothing about its relative value. Instead, test-makers generally report scores as values of a scale based on the raw scores made by a standardization population (the group of individuals tested for the purpose of obtaining normative data on the test). Each score then becomes a statement of its value relative to all other scores on that scale. Different kinds of scales provide more or less readily comprehended and statistically well-defined standards for comparing any one score with the scores of the standardization population. B.L. Brooks, Strauss, and their colleagues (2009) review four themes underlying the interpretation and reporting of test scores and neuropsychological findings: (1) the adequacy of the normative data for the test administered; (2) the inherent measurement error of any neuropsychological test instrument, including ceiling and floor effects; (3) what represents normal variability; and (4) what represents a significant change over time with sequential testing. Making clinical sense out of test data is the focus of neuropsychological assessment and depends on the fundamental assumptions discussed below.

Standard scores
The usefulness of standard scores. The treatment of test scores in neuropsychological assessment is often a more complex task than in other kinds of cognitive evaluations because test scores can come from many different sources. In the usual cognitive examination, generally conducted for purposes of academic evaluation or career counseling, the bulk of the testing is done with one test battery, such as one of the WIS-A batteries or the Woodcock-Johnson Tests of Cognitive Ability. Within these batteries the scores for each of the individual tests are on the same scale and standardized
on the same population so that test scores can be compared directly. On the other hand, no single test battery provides all the information needed for adequate assessment of most patients presenting neuropsychological questions. Techniques employed in the assessment of different aspects of cognitive functioning have been developed at different times, in different places, on different populations, for different ability and maturity levels, with different scoring and classification systems, and for different purposes. Taken together, they are an unsystematized aggregate of more or less standardized tests, experimental techniques, and observational aids that have proven useful in demonstrating deficits or disturbances in some cognitive function or activity. These scores are not directly comparable with one another. To make the comparisons necessary for evaluating impairment, the many disparate test scores must be convertible into one scale with identical units. Such a scale can serve as a kind of test users’ lingua franca, permitting direct comparison between many different kinds of measurements. The scale that is most meaningful statistically and that probably serves the intermediary function between different tests best is one derived from the normal probability curve and based on the standard deviation unit (SD) (Urbina, 2004) (see Fig. 6.3). Thus the most widely used scale is based on the standard score. The value of basing a common scale on the standard deviation unit lies primarily in the statistical nature of the standard deviation as a measure of the spread or dispersion of a set of scores (X1, X2, X3, etc.) around their mean (M). Standard deviation units describe known proportions of the normal probability curve (note on Fig. 6.3, “Percent of cases under portions of the normal curve”).
This has very practical applications for comparing and evaluating psychological data in that the position of any test score on a standard deviation unit scale, in itself, defines the proportion of people taking the test who will obtain scores above and below the given score. Virtually all scaled psychological test data can be converted to standard deviation units for intertest comparisons. Furthermore, a score based on the standard deviation, a standard score, can generally be estimated from a percentile, which is the most commonly used nonstandard score in adult testing (Crawford and Garthwaite, 2009). The likelihood that two numerically different scores are significantly different can also be estimated from their relative positions on a standard deviation unit scale. This use of the standard deviation unit scale is of particular importance in neuropsychological testing, for evaluation of test scores depends upon the significance of their distance from one another or
from the comparison standard. Since direct statistical evaluations of the difference between scores obtained on different kinds of tests are rarely possible, the examiner must use estimates of the ranges of significance levels based on score comparisons. In general, differences of two standard deviations or more may be considered significant, whereas differences of one to two standard deviations suggest a trend, although M.J. Taylor and Heaton (2001) accept scores falling at –1 SD as indicating deficit.
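The standard deviation unit comparisons described here can be sketched in a few lines of code. This is an illustrative sketch only: the test names and normative values are hypothetical, not drawn from any published norms.

```python
# Sketch: comparing performances on two differently scaled tests by
# converting each raw score into standard deviation (z-score) units.
# All test names and normative values here are hypothetical examples.

def z_score(raw, norm_mean, norm_sd):
    """Express a raw score as its distance from the normative mean,
    measured in standard deviation units."""
    return (raw - norm_mean) / norm_sd

# Two raw scores on incommensurable scales...
naming_z = z_score(raw=42, norm_mean=50, norm_sd=8)    # -> -1.0
recall_z = z_score(raw=85, norm_mean=100, norm_sd=15)  # -> -1.0

# ...turn out to occupy the same position relative to their norms:
# each falls exactly one standard deviation below the normative mean,
# so there is no intertest discrepancy despite the different raw values.
discrepancy = abs(naming_z - recall_z)  # -> 0.0
```

The same conversion underlies the two-SD rule of thumb above: once both scores are in SD units, their difference can be read directly against that criterion.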
FIGURE 6.3 The relationship of some commonly used test scores to the normal curve and to one another. AGCT, Army General Classification Test; CEEB, College Entrance Examination Board. (Reprinted from the Test Service Bulletin of The Psychological Corporation, 1955).
Kinds of standard scores. Standard scores come in different forms but are
all translations of the same scale, based on the mean and the standard deviation. The z-score is the basic, unelaborated standard score from which all others can be derived. The z-score represents, in standard deviation units, the amount a score deviates from the mean of the population from which it is drawn.
The mean of the normal curve is set at zero and the standard deviation unit has a value of one. Scores are stated in terms of their distance from the mean as measured in standard deviation units. Scores above the mean have a positive value; those below the mean are negative. Elaborations of the z-score are called derived scores. Derived scores provide the same information as do z-scores, but the score value is expressed in scale units that are more familiar to most test users than z-scores. Test-makers can assign any value they wish to the standard deviation and mean of their distribution of test scores. Usually, they follow convention and choose commonly used values. (Note the different means and standard deviations for tests listed in Fig. 6.3.) When the standardization populations are similar, all of the different kinds of standard scores are directly comparable with one another, the standard deviation and its relationship to the normal curve serving as the key to translation. Estimating standard scores from nonstandard scores. Since most published standardized tests today use a standard score format for handling the numerical test data, their scores present little or no problem to the examiner wishing to make intertest comparisons. However, a few test makers still report their standardization data in percentile or IQ score equivalents. In these cases, standard score approximations can be estimated. Unless there is reason to believe that the standardization population is not normally distributed, a standard score equivalent for a percentile score can be estimated from a table of normal curve functions. Table 6.1 gives z-score approximations, taken from a normal curve table, for 21 percentiles ranging from 1 to 99 in five-point steps. The z-score that best approximates a given percentile is the one that corresponds to the percentile closest to the percentile in question.

TABLE 6.1 Standard Score Equivalents for 21 Percentile Scores Ranging from 1 to 99
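Under the normality assumption just described, the percentile-to-z conversion that a normal curve table encodes can also be computed directly from the inverse of the normal cumulative distribution function. A minimal sketch using the Python standard library:

```python
from statistics import NormalDist

def z_from_percentile(percentile):
    """Approximate the standard (z) score for a percentile, assuming
    the standardization sample is normally distributed."""
    return NormalDist().inv_cdf(percentile / 100)

# The 50th percentile sits at the mean (z = 0); the 16th percentile
# falls roughly one standard deviation below it, and the 95th roughly
# 1.6 standard deviations above it.
z50 = z_from_percentile(50)
z16 = z_from_percentile(16)
z95 = z_from_percentile(95)
```

Rounding these values at the 21 percentiles named above reproduces the kind of entries a table such as Table 6.1 contains.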
Exceptions to the use of standard scores
Standardization population differences. In evaluating a patient’s performance on a variety of tests, the examiner can only compare scores from different tests when the standardization populations of each of the tests are identical or at least reasonably similar, with respect to both demographic characteristics and score distribution (Axelrod and Goldman, 1996; Mitrushina, Boone, et al., 2005; Urbina, 2004; see Chapter 2). Otherwise, even though their scales and units are statistically identical, the operational meanings of the different values are as different as the populations from which they are drawn. This restriction becomes obvious should an examiner attempt to compare a vocabulary score obtained on a WIS-A test, which was standardized on cross-sections of the general adult population, with a score on the Graduate Record Examination (GRE), standardized on college graduates. A person who receives an average score on the GRE would probably achieve scores of one to two standard deviations above the mean on WIS-A tests, since the average college graduate typically scores one to two standard deviations above the general population mean on tests of this type (Anastasi, 1965). Although each of these mean scores has the same z-score value, the performance levels they represent are very different. Test-makers usually describe their standardization populations in terms of sex, race, age, and/or education. Intraindividual comparability of scores may differ between the sexes in that women tend to do less well on advanced arithmetic problems and visuospatial items and men are more likely to display a verbal skill disadvantage (see pp. 362–364). Education, too, affects level of performance on different kinds of tests differentially, making its greatest contribution to tasks involving verbal skills, stored information, and other
school-related activities, but it affects test performances in all areas (see p. 360). Age can be a very significant variable when evaluating test scores of older patients (see pp. 356–360 and Chapters 9–16, passim). In patients over 50, the normal changes with age may obscure subtle cognitive changes that could herald an early, correctable stage of a tumor or vascular disease. The use of age-graded scores puts the aging patient’s scoring pattern into sharper focus. Age-graded scores are important aids to differential diagnosis in patients over 50 and are essential to the clinical evaluation of test performances of patients over 65. Although not all tests an examiner may wish to use have age-graded norms or age corrections, enough are available to determine the extent to which a patient might be exceeding the performance decrements expected at a given age. An important exception is in the use of age-graded scores for evaluating older persons’ performances on tasks which require a minimum level of competence, such as driving (Barrash, Stillman, et al., 2010). This research team found that non-age-graded scores predicted driving impairment better than age-graded ones. A major debate continues in neuropsychology as to whether significant differences in neuropsychological performance relate to race (Gasquoine, 2009; Manly, 2005). Significant differences between major racial groups have not been consistently demonstrated in the score patterns of tests of various cognitive abilities or in neuropsychological functioning (A.S. Kaufman, McLean, and Reynolds, 1988; Manly, Jacobs, Touradji, et al., 2002; P.E. Vernon, 1979). Nevertheless, there are racial differences in the expression of various neurological disorders (Brickman, Schupf, et al., 2008). Race norms have been developed for some standardized neuropsychological measures (Lucas, Ivnik, Smith, et al., 2005), but there are limitations as to how they should be used (Gasquoine, 2009; Manly, 2005).
Vocational and regional differences between standardization populations may also contribute to differences between test norms. Clinicians should always keep in mind that vocational differences generally correlate highly with educational differences, and regional differences tend to be relatively insignificant compared with age and variables that are highly correlated with income level, such as education or vocation. Children’s tests. Some children’s tests are applicable to the examination of patients with severe cognitive impairment or profound disability. Additionally, many good tests of academic abilities such as arithmetic, reading, and spelling have been standardized for child or adolescent populations. The best of these invariably have standard score norms that, by and large, cannot be applied to
an adult population because of the significant effect of age and education on performance differences between adults and children. Senior high school norms are the one exception to this rule. On tests of mental ability that provide adult norms extending into the late teens, the population of 18-year-olds does not perform much differently than the adult population at large (e.g., PsychCorp, 2008; Wechsler, 1997a), and four years of high school is a reasonable approximation of the adult educational level. This exception makes a great number of very well-standardized and easily administered paper-and-pencil academic skill tests available for the examination of adults, and no scoring changes are necessary. All other children’s tests are best scored and reported in terms of mental age (MA), which is psychologically the most meaningful score derived from these tests. Most children’s tests provide mental age norms or grade level norms (which readily convert into mental age). Mental age scores allow the examiner to estimate the extent of impairment, or to compare performance on different tests or between two or more tests administered over time, just as is done with test performances in terms of standard scores. When test norms for children’s tests are given in standard scores or percentiles for each age or set of ages the examiner can convert the score to a mental age score by finding the age at which the obtained score is closest to a score at the 50th percentile or the standard score mean. Mental age scores can be useful for planning educational or retraining programs. Small standardization populations. A number of interesting and potentially useful tests of specific skills and abilities have been devised for studies of particular neuropsychological problems in which the standardization groups are relatively small (often under 20) (Dacre et al., 2009; McCarthy and Warrington, 1990, passim). Standard score conversions are inappropriate if not impossible in such cases. 
When there is a clear relationship between the condition under study and a particular kind of performance on a given test, there is frequently a fairly clear-cut separation between patient and control group scores. Any given patient’s score can be evaluated in terms of how closely it compares with the score ranges of either the patient group or the control group reported in the study.

Nonparametric distributions
It is not uncommon for score distributions generated by a neuropsychologically useful test to be markedly skewed—often due to ceiling (e.g., digit span) or floor (e.g., Trail Making Test) effects inherent in the nature of the test and human cognitive capability (Retzlaff and Gibertini, 1994). For
these tests, including many used in neuropsychological assessments, standard scores—which theoretically imply a distribution base that reasonably approximates the parametric ideal of a bell-shaped curve—are of questionable value, as skewing greatly exaggerates the weight of scores at the far end of a distribution. These distorted distributions produce overblown standard deviations (Lezak and Gray, 1984a [1991]). When this occurs, standard deviations can be so large that even performances that seemingly should fall into the abnormal range appear to be within normal limits. The Trail Making Test provides an instructive example of this statistical phenomenon (see Mitrushina, Boone, et al., 2005). R.K. Heaton, Grant, and Matthews (1986) thoughtfully provided score ranges and median scores along with means and standard deviations of a normative population. Their 20–29 age group’s average score on Trails B was 86 ± 39 sec, but the range of 47 to 245 sec with a median score of 76 sec indicates that many more subjects performed below than above the mean, and that the large standard deviation—swollen by a few very slow responders—brings under the umbrella of within normal limits subjects who—taking as much as 124 sec (i.e., just under +1 SD above the mean) to complete Trails B—do not belong there.
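The distorting effect of a skewed distribution on SD-based cutoffs is easy to demonstrate with made-up numbers. The completion times below are invented for illustration and are not the Heaton norms or any other published data.

```python
from statistics import mean, stdev, median, quantiles

# Invented completion times (sec) for a timed test: most subjects finish
# quickly, but a few very slow responders stretch the right tail.
times = [52, 55, 58, 60, 62, 65, 68, 70, 72, 75,
         78, 80, 82, 85, 90, 95, 100, 150, 200, 245]

m, sd = mean(times), stdev(times)        # SD inflated by the slow tail
sd_cutoff = m + sd                       # "+1 SD" limit, dragged far right
pct_cutoff = quantiles(times, n=20)[-1]  # 95th percentile boundary

# A subject slower than 85% of this sample (e.g., 140 sec) still falls
# inside the +1 SD "normal" range, because a few extreme scores have
# swollen the standard deviation; a percentile-based boundary is not
# distorted in this way.
md = median(times)
```

Here the median sits in the 70s while the +1 SD cutoff lands around 140 sec, reproducing in miniature the Trails B pattern described above.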
Benton showed the way to resolve the problem of skewed distributions by identifying the score at the 5th percentile as the boundary for abnormality— i.e., defective performance (see Benton, Sivan, Hamsher, et al., 1994). Benton and his coworkers used percentiles to define degrees of competency on nonparametric test performances which also avoids the pitfalls of trying to fit nonparametric data into a Procrustean parametric bed.
Evaluation Issues

Norms
Most tests of a single cognitive function, ability, or skill do not have separate norms for age, sex, education, etc. A few widely used tests of general mental abilities take into account the geographic distribution of their standardization population; the rest are usually standardized on local people. Tests developed in Minnesota will have Minnesota norms; New York test makers use a big city population; and British tests are standardized on British populations. Although this situation results in less than perfect comparability between the different tests, in most cases the examiner has no choice but to use norms of tests standardized on an undefined mixed or nonrandom adult sample. Experience quickly demonstrates that this is usually not a serious hardship, for these “mixed-bag” norms generally serve their purpose.
“I sometimes determine SD units for a patient’s score on several norms to see if they produce a different category of performance. Most of the time it doesn’t make a significant difference. [If] it does then [one has] to use judgment [H.J. Hannay, 2004, personal communication].” Certainly, one important normative fault of many single-purpose tests is that they lack discriminating norms at the population extremes. Different norms, derived on different samples in different places, and sometimes for different reasons, can produce quite different evaluations for some subjects resulting in false positives or false negatives, depending on the subject’s score, condition, and the norm against which the score is compared (Kalechstein et al., 1998; Lezak, 2002). Thus, finding appropriate norms applicable for each patient is still a challenge for clinicians. Many neuropsychologists collect a variety of norms over the years from the research literature. The situation has improved to some degree in recent years with the publication of collections of norms for many but not all of the most favored tests (Mitrushina, Boone, et al., 2005; E. Strauss, Sherman, and Spreen, 2006). However, there are times when none of these norms really applies to a particular person’s performance on a specific test. In such cases, the procedure involves checking against several norm samples to see if a reasonable degree of consistency across norms can be found. When the data from other tests involving a different normative sample but measuring essentially the same cognitive or motor abilities are not in agreement, this should alert the clinician about a problem with the norms for that test as applied to this individual. This problem with norms is very important in forensic cases when the choice of norms can introduce interpretation bias (van Gorp and McMullen, 1997). The final decision concerning the selection of norms requires clinical judgment (S.S. Bush, 2010). 
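The practice Hannay describes, determining SD units for a score against several norm sets, can be sketched as follows. The normative means and SDs here are hypothetical stand-ins for different published samples, not actual norms.

```python
# Sketch: evaluating one raw score against several normative samples to
# see whether the classification changes. All values are hypothetical.

norms = {
    "sample_A": (50.0, 10.0),   # (normative mean, SD) - hypothetical
    "sample_B": (47.0, 12.0),
    "sample_C": (53.0, 8.0),
}

raw_score = 38.0
classifications = {}
for name, (m, sd) in norms.items():
    z = (raw_score - m) / sd
    classifications[name] = "low" if z <= -1.0 else "within normal limits"

# Here two norm sets classify the score as low and one does not --
# exactly the kind of disagreement that calls for clinical judgment.
```

Most of the time, as the quotation notes, the category will not change across norm sets; when it does, as in this contrived case, the choice of norms must be justified clinically.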
A large body of evidence clearly indicates that demographic variables— especially age and education (and sex and race on some tests)—are related to performance (see data presented in Chapters 9–16, passim). Yet some have argued against the use of demographically based norms and suggest that test score adjustment may invalidate the raw test scores (Reitan and Wolfson, 1995b). This argument is based on findings that test performance was significantly related to age and education for normal subjects but not to age, and only weakly to education, in a brain damaged group. However, a reduction in the association between demographics and performance is to be expected on statistical grounds for brain damaged individuals. Suppose that variable X is significantly related to variable Y in the normal population. If a group of individuals is randomly selected from the population, the relationship between variables X and Y will continue to be present in this group. Add random error to one of the variables, for instance
Y, and the relationship between X and (Y + random error) will be reduced. Now apply this reasoning to an example bearing on the argument against the use of demographic score adjustments. Age is related to performance on a memory test in the normal population. Some individuals, a random sample from the normal population, have a brain disorder and are asked to take the memory test. The effects of their brain dysfunction on memory performance introduce random error, given that brain dysfunction varies in cause, location, severity, and effects on each person’s current physiology, psychiatric status, circumstances, motivation, etc. As a result, the statistical association between age and memory test performance is likely to be reduced.
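The attenuation argument above can be demonstrated with a small simulation. All quantities here (sample size, slope, noise magnitudes) are invented for illustration; the point is only that adding unsystematic error to one variable shrinks its correlation with the other.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(0)
ages = [random.uniform(20, 80) for _ in range(2000)]
# Memory score declines with age plus ordinary individual variation.
memory = [100 - 0.5 * a + random.gauss(0, 5) for a in ages]
# "Brain dysfunction" modeled as an extra, unsystematic score reduction
# that differs from person to person (cause, severity, location, etc.).
impaired = [m - abs(random.gauss(0, 15)) for m in memory]

r_intact = pearson_r(ages, memory)
r_impaired = pearson_r(ages, impaired)
print(f"r(age, memory), intact sample:   {r_intact:+.2f}")
print(f"r(age, memory), with dysfunction: {r_impaired:+.2f}")
```

The age–memory correlation is substantial in the "intact" sample and visibly weaker once the unsystematic "dysfunction" term is added, even though age still matters just as much, which is the statistical point made against the Reitan and Wolfson interpretation.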
If aspects of the brain damage itself had been held constant in the Reitan and Wolfson (1995b) study that prompted questioning about use of demographic variables, perhaps the associations would have been quite significant in the brain damaged group, too (Vanderploeg, Axelrod, Sherer, et al., 1997). If younger individuals had more severe brain damage than older ones or more educated individuals had greater brain damage than less educated ones, the age–education relationships could be small or insignificant. In short, changes in these relationships do not invalidate the use of demographically based norms. Since premorbid neuropsychological test data are rare, demographically based norms aid test interpretation. Without demographically appropriate norms, the false positive rate for older or poorly educated normal individuals tends to increase (Bornstein, 1986a; R.K. Heaton, Ryan, and Grant, 2009; also see pp. 374–375). Some false negative findings can be expected (J.E. Morgan and Caccappolo-van Vliet, 2001). Yet, should a test consistently produce many false negatives or false positives with particular demographic combinations, this problem requires reevaluation of norms or demographic scoring adjustments. Another major demographic issue in contemporary clinical neuropsychology is the use of tests across cultures and different languages and their standardization and normative base (K.B. Boone, Victor, et al., 2007; Gasquoine, 2009; K. Robertson et al., 2009). Neuropsychology had a Western European and North American origin with most standardized tests coming from these countries and languages. Eastern European, Asian, and African countries are just beginning this process and therefore additional demographic factors and normative data will likely become available. At this time, relatively few normative samples include all of the demographic variable combinations that may be pertinent to measurement data on a particular ability or behavior. 
Those few samples in which all relevant demographic variables have been taken into account typically have too few subjects for dependable interpretation in the individual case. Major efforts are underway to correct this limitation for certain neuropsychological measures (Cherner et al., 2007; Gavett et al., 2009; Iverson, Williamson, et al., 2007;
Peña-Casanova et al., 2009). Possibly the most ambitious undertaking along these lines is sponsored by the National Institutes of Health (NIH): the NIH Toolbox (Gershon et al., 2010). When the NIH Toolbox is complete it will provide the clinician with a well-standardized and normed brief assessment battery from which appropriate measures can be selected to assess motor, sensory, emotional, and cognitive functioning for clinical or research purposes. The cognitive module includes assessment of the following domains: executive, episodic memory, working memory, processing speed, language, and attention. All measures will be standardized and normed in both English and Spanish on individuals 3 to 85 years of age.

Impairment criteria
Neuropsychologists generally use a criterion for identifying when performance on a particular test may represent impairment, but it is not necessarily explicitly stated and is unlikely to appear in reports. Once test data have been reliably scored and appropriate norms have been chosen to convert scores to standard scores and percentiles, the clinician needs to determine if performance on individual tests is impaired or not, and whether the pattern of performance is consistent with the patient’s background and relevant neurologic, psychiatric, and/or other medical disorders. Sometimes, when poor performance does not represent an acquired impairment, simple questions about a person’s abilities may elicit information that confirms lifelong difficulty in these areas of cognitive or motor ability. A poor performance may also indicate that the person was not motivated to do well or was anxious, depressed, or hostile to the test-taking endeavor rather than impaired. Estimates of premorbid level of a patient’s functioning become important in determining whether a given test performance represents impairment (see pp. 553–555, 561–563). In some cases such estimates are relatively easy to make because prior test data are available from school, military, medical, psychological, or neuropsychological records. At other times, the current test data are the primary source of the estimate. A change from this estimate, perhaps 1, 1.5, or 2 SDs lower than the premorbid estimate, may be used as the criterion for determining the likelihood that a particular test performance is impaired. A test score that appears to represent a 1 SD change from premorbid functioning may not be a statistically significant change but may indicate an impairment to some examiners and only suggest impaired performance to others. A 2 SD score depression is clear evidence of impairment.
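The change-from-premorbid criterion described above amounts to simple arithmetic in SD units. The following sketch assumes invented scores on a test normed with mean 100 and SD 15; the function names and the specific numbers are illustrative, not part of any published procedure.

```python
def z_score(raw, norm_mean, norm_sd):
    """Convert a raw score to SD (z) units against a chosen norm."""
    return (raw - norm_mean) / norm_sd

def sd_drop(premorbid_z, current_z):
    """How many SDs the current score sits below the premorbid estimate."""
    return premorbid_z - current_z

premorbid = z_score(112, 100, 15)   # prior records put the patient at +0.8 SD
current = z_score(88, 100, 15)      # today's score: -0.8 SD
drop = sd_drop(premorbid, current)  # 1.6 SDs below the premorbid estimate

for criterion in (1.0, 1.5, 2.0):
    flagged = drop >= criterion
    print(f"criterion {criterion} SD: {'impaired' if flagged else 'not flagged'}")
```

Note that this patient's current score of 88 is still within 1 SD of the normative mean, yet it sits 1.6 SDs below the premorbid estimate, so an examiner using a 1 or 1.5 SD change criterion would flag it while a strict 2 SD criterion would not.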
Since approximately 15% of intact individuals obtain scores more than 1 SD below test means, there is concern that too many individuals who are intact with respect to particular functions will be judged impaired when –1 SD is used as the impairment criterion. When the criterion is less stringent (e.g., –1 SD rather than –2 SDs), more intact performances will be called impaired (i.e., false positives), but more “hits” (i.e., impaired performances correctly identified) are also to be expected. On the other hand, when the criterion becomes overly strict (e.g., –2 SDs or lower), misses increase such that truly impaired performances are judged normal (i.e., false negatives). Such misses can be costly to patients with a developing, treatable disease such as some types of brain tumors, which will grow and do much mischief if not identified as soon as possible. Should a positive finding prove to be a false alarm, the patient is no worse off in the long run but may have paid in unnecessary worry and expensive medical tests. In the case of a possible dementia, a miss would be a less costly error since there is no successful treatment at the moment and the disorder will progress and have to be managed until the individual dies. However, neuropsychological conclusions must not rest on a single aberrant score. Regardless of the criterion used, it is the resulting pattern of change in performance that should make diagnostic sense. Some neuropsychologists interpret as “probably impaired” any test score 1 or more SDs lower than the mean of a normative sample that may or may not take into account appropriate demographics (e.g., Golden, Purisch, and Hammeke, 1991; R.K. Heaton, Grant, and Matthews, 1991). The latter group converted scores from the Halstead-Reitan battery plus other tests into T-scores based on age, education, and sex corrections; in this system a T-score below 40 (more than 1 SD below the mean) is considered likely to represent impaired performance.
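The base-rate arithmetic behind the 15% figure, and the related point that intact people taking a battery of tests will often produce a low score somewhere, can be checked directly from the normal curve. The battery size (10 tests) is an invented example, and real test scores are correlated, so the independence assumption below overstates the effect somewhat.

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Proportion of intact (normally distributed) scorers below each cutoff.
for cut in (-1.0, -1.5, -2.0):
    print(f"P(score < {cut} SD) = {norm_cdf(cut):.1%}")

# With ~16% of intact scorers below -1 SD on any single test, an intact
# person taking 10 independent tests will show at least one "impaired"
# score more often than not.
p_low = norm_cdf(-1.0)
n_tests = 10
p_at_least_one = 1 - (1 - p_low) ** n_tests
print(f"P(at least one score < -1 SD across {n_tests} tests) = {p_at_least_one:.0%}")
```

About 15.9% of intact scorers fall below –1 SD, roughly 6.7% below –1.5 SD, and about 2.3% below –2 SD; under the (generous) independence assumption, more than four out of five intact examinees would show at least one score in the "impaired" range across ten tests, which is why a single low score must never carry the diagnosis.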
The pattern of test scores is also important and must make sense in terms of the patient’s history and suspected disorder or disease process (R.K. Heaton, Ryan, and Grant, 2009). In evaluating test performances, it must be kept in mind that intact individuals are likely to vary in their performance on any battery of cognitive tests and it is not unusual for them to score in the impaired range on one or two tests (Jarvis and Barth, 1994; M.J. Taylor and Heaton, 2001). It is important to note that using a criterion for decision making that represents a deviation from the mean of the normative sample rather than change from premorbid level of functioning is likely to miss significant changes in very high functioning individuals while suggesting that low functioning individuals have acquired impairments that they do not have.
For instance, a concert pianist might begin to develop slight difficulties in hand functioning in the early stages of Parkinson’s disease that were noticeable to him but not to an examiner using an impairment criterion linked to the mean of the distribution of scores for persons of his age, sex, and education. In that case another musician might pick up the difference by comparing recordings of an earlier performance with a current performance. Contrast this example with one of several painters who claimed to be brain-damaged after inhaling epoxy paint fumes in a poorly ventilated college locker room. On the basis of his age and education he would be expected to perform at an average level. One psychologist linked his poor performance on many tests to the toxic exposure, which seemed appropriate at first. However, once his grade school through high school records were obtained, it was found that he had always functioned at a borderline to impaired level on group mental ability and achievement tests.
When such evidence of premorbid functioning is available—and often it is not—it far outweighs normative expectations. “If I had reason to believe that the person was not representative of what appears to be the appropriate normative sample, I would compare the individual with a more appropriate sample [e.g., compare an academically skilled high-school dropout to a higher educational normative sample] and be prepared to defend this decision” (R.K. Heaton, personal communication, 2003). This is how competent clinicians tend to decide in the individual case whether to use impairment criteria based on large sample norms or smaller, more demographically suitable norms.

Sensitivity/specificity and diagnostic accuracy
It has become the custom of some investigators in clinical neuropsychology to judge the “goodness” of a test or measure and its efficiency in terms of its diagnostic accuracy, i.e., the percentage of cases it correctly identifies as belonging to either a clinical population or a control group or to either of two clinical populations. This practice is predicated on questionable assumptions, one of which is that the accuracy with which a test makes diagnostic classifications is a major consideration in evaluating its clinical worth. Most tests are not used for this purpose most of the time but rather to provide a description of the individual’s strengths and weaknesses, to monitor the status of a disorder or disease, or for treatment and planning. The criterion of diagnostic accuracy becomes important when evaluating screening tests for particular kinds of deficits (e.g., an aphasia screening test), single tests purporting to be sensitive to brain dysfunction, and sometimes other tests and test batteries as well. The accuracy of diagnostic classification depends to some degree on its sensitivity and specificity (see p. 127). The percentage of cases classified accurately by any given test, however, will depend on the base rate of the condition(s) for which the test is sensitive in the population(s) used to evaluate its goodness. It will also depend on the demographics of the population, for
instance, level of education (Ostrosky-Solis, Lopez-Arango, and Ardila, 2000). With judicious selection of populations, an investigator can virtually predetermine the outcome. If high diagnostic accuracy rates are desired, then the brain damaged population should consist of subjects who are known to suffer the condition(s) measured by the test(s) under consideration (e.g., patients with left hemisphere lesions suffering communication disorders tested with an aphasia screening test); members of the comparison population (e.g., normal control subjects, neurotic patients) should be chosen on the basis that they are unlikely to have the condition(s) measured by the test. Using a population in which the frequency of the condition measured by the test(s) under consideration is much lower (e.g., patients who have had only one stroke, regardless of site) will necessarily lower the sensitivity rate. However, this lower hit rate should not reflect upon the value of the test. The extent to which sensitivity/specificity rates will differ is shown by the large differences reported in studies using the same test(s) with different kinds of clinical (and control) populations (Bornstein, 1986a; Mitrushina, Boone, et al., 2005). Moreover, it will usually be inappropriate to apply sensitivity/specificity data collected on a population with one kind of neurological disorder to patients suspected of having a different condition. Since the “sensitivity/specificity diagnostic accuracy rate” standard can be manipulated by the choice of populations studied and the discrimination rate found for one set of populations or one disorder may not apply to others, it is per se virtually meaningless as a measure of a test’s effectiveness in identifying brain impaired or intact subjects except under similar conditions with similar populations. A particular test’s sensitivity to a specific disorder is, of course, always of interest. 
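The dependence of classification accuracy on the populations chosen can be made concrete with a predictive-value calculation. The function below and its 85% sensitivity/specificity figures are invented for illustration; the point is only that the same test looks very different as the base rate of the condition changes.

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Return (PPV, NPV) given a test's sensitivity, specificity,
    and the base rate of the condition in the population examined."""
    tp = sensitivity * base_rate             # true positives
    fp = (1 - specificity) * (1 - base_rate) # false positives
    fn = (1 - sensitivity) * base_rate       # false negatives
    tn = specificity * (1 - base_rate)       # true negatives
    ppv = tp / (tp + fp)  # P(condition | positive test)
    npv = tn / (tn + fn)  # P(no condition | negative test)
    return ppv, npv

# The same hypothetical test (85% sensitivity, 85% specificity) applied
# to a population where half have the condition versus one where 10% do:
for base_rate in (0.50, 0.10):
    ppv, npv = predictive_values(0.85, 0.85, base_rate)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

When half the examinees have the condition, a positive result is right 85% of the time; when only one in ten has it, the same positive result is right well under half the time. This is the arithmetic behind the warning that "hit rates" obtained with judiciously selected populations say little about a test's value elsewhere.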
The decision-making procedure (or combination of procedures) that best accomplishes the goal of accurate diagnosis has yet to be agreed upon; and there may be none that will be best in all cases. In the end, decisions are made about individuals. Regardless of how clinicians reach their conclusions, they must always be sensitive to those elements involved in each patient’s case that may be unique as well as those similar to cases seen before: qualitative and quantitative data from test performance, behavioral observation, interviews with family members and others as possible, and the history. Disagreements among clinicians are most likely to occur when the symptoms are vague and/or mild; the developmental, academic, medical, psychiatric, psychosocial, and/or occupational histories are complex or not fully available; and the pattern of test performance is not clearly associated with a specific diagnostic entity.
Screening Techniques

Different screening techniques make use of different kinds of behavioral manifestations of brain damage. Some patients suffer only a single highly specific defect or a cluster of related disabilities while, for the most part, cognitive functioning remains intact. Others sustain widespread impairment involving changes in cognitive, self-regulating, and executive functions, in attention and alertness, and in their personality. Still others display aberrations characteristic of brain dysfunction (signs) with more or less subtle evidence of cognitive or emotional deficits. With such a variety of signs, symptoms, and behavioral alterations, it is no more reasonable to expect accurate detection of every instance of brain disorder with one or a few instruments or lists of signs and symptoms than to expect that a handful of laboratory tests would bring to light all gastrointestinal tract diseases. Yet many clinical and social service settings need some practical means for screening when the population under consideration—such as professional boxers, alcoholics seeking treatment, persons tested as HIV positive, or elderly depressed patients, to give just a few instances—is at more than ordinary risk of a brain disorder. The accuracy of screening tests varies in a somewhat direct relationship to the narrowness of range or specificity of the behaviors assessed by them (Sox et al., 1988). Any specific cognitive defect associated with a neurological disorder affects a relatively small proportion of the brain-impaired population as a whole, and virtually no one whose higher brain functions are intact.
For instance, perseveration (the continuation of a response after it is no longer appropriate, as in writing three or four “e’s” in a word such as “deep” or “seen” or in copying a 12-dot line without regard for the number, stopping only when the edge of the page is reached) is so strongly associated with brain damage that the examiner should suspect brain damage on the basis of this defect alone. However, since most patients with brain disorders do not give perseverative responses, it is not a practical criterion for screening purposes. Use of a highly specific sign or symptom such as perseveration as a screening criterion for brain damage results in virtually no one without brain damage being misidentified as brain damaged (false positive errors), but such a narrow test will let many persons who are brain damaged slip through the screen (false negative errors). In contrast, defects that affect cognitive functioning generally, such as distractibility, impaired immediate memory, and concrete thinking, are not only very common symptoms of brain damage but tend to accompany a number of emotional disorders as well. As a result, a sensitive screening test that relies on a defect impairing cognitive functioning generally will identify
many brain damaged patients correctly with few false negative errors, but a large number of people without brain disorders will also be included as a result of false positive errors of identification. Limitations in predictive accuracy do not invalidate either tests for specific signs or tests that are sensitive to conditions of general dysfunction. Each kind of test can be used effectively as a screening device as long as its limitations are known and the information it elicits is interpreted accordingly. When testing is primarily for screening purposes, a combination of tests, including some that are sensitive to specific impairment, some to general impairment, and others that tend to draw out diagnostic signs, will make the best diagnostic discriminations.

Signs
The reliance on signs for identifying persons with a brain disorder has a historical basis in neuropsychology and is based on the assumption that brain disorders have some distinctive behavioral manifestations. In part this assumption reflects early concepts of brain damage as a unitary kind of dysfunction (e.g., Hebb, 1942; Shure and Halstead, 1958) and in part it arises from observations of response characteristics that do distinguish the test performances of many patients with brain disease. Most pathognomonic signs in neuropsychological assessment are specific aberrant test responses or modes of response. These signs may be either positive, indicating the presence of abnormal function, or negative in that the function is lost or significantly diminished. Some signs are isolated response deviations that, in themselves, may indicate the presence of an organic defect. Rotation in copying a block design or a geometric figure has been considered a sign of brain damage. Specific test failures or test score discrepancies have also been treated as signs of brain dysfunction, as for instance, marked difficulty on a serial subtraction task (Ruesch and Moore, 1943) or a wide spread between the number of digits recalled in the order given and the number recalled in reversed order (Wechsler, 1958). The manner in which the patient responds to the task may also be considered a sign indicating brain damage. M. Williams (1979) associated three response characteristics with brain damage: “stereotyping and perseveration”; “concreteness of behavior,” defined by her as “response to all stimuli as if they existed only in the setting in which they are presented”; and “catastrophic reactions” of perplexity, acute anxiety, and despair when the patient is unable to perform the presented task. Another common sign approach relies on not one but on the sum of different signs, i.e., the total number of different kinds of specific test response
aberrations or differentiating test item selections made by the patient. This method is used in some mental status examinations to determine the likelihood of impairment (see p. 127). In practice, a number of behavior changes can serve as signs of brain dysfunction (see Table 6.2). None of them alone is pathognomonic of a specific brain disorder. When a patient presents with more than a few of these changes, the likelihood of a brain disorder runs high.

Cutting scores
The score that separates the “normal” or “not impaired” from the “abnormal” or “impaired” ends of a continuum of test scores is called a cutting score, which marks the cut-off point (Dwyer, 1996). The use of cutting scores is akin to the sign approach, for their purpose is to separate patients in terms of the presence or absence of the condition under study. A statistically derived cutting score is the score that differentiates brain impaired patients from others with the fewest instances of error on either side. A cutting score may also be derived by simple inspection, in which case it is usually the score just below the poorest score attained by any member of the “normal” comparison group or below the lowest score made by 95% of the “normal” comparison group (see Benton, Sivan, Hamsher, et al., 1994, for examples). Cutting scores are a prominent feature of most screening tests. However, many of the cutting scores used for neuropsychological diagnosis may be less efficient than the claims made for them (Meehl and Rosen, 1967). This is most likely to be the case when the determination of a cutting score does not take into account the base rate at which the predicted condition occurs in the sample from which the cutting score was developed (Urbina, 2004; W.G. Willis, 1984). Other problems also tend to vitiate the effectiveness of cutting scores. The criterion groups are often not large enough for optimal cutting scores to be determined (Soper, Cicchetti, et al., 1988). Further, cutting scores developed on one kind of population may not apply to another. R.L. Adams, Boake, and Crain (1982) pointed out the importance of adjusting cutting scores for “age, education, premorbid intelligence, and race–ethnicity” by demonstrating that the likelihood of false positive predictions of brain damage tends to increase for nonwhites and directly with age, and inversely with education and intelligence test scores. 
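A "statistically derived" cutting score of the kind described above can be sketched as a small search over candidate cuts: given scores from an impaired group and a comparison group, choose the cut that produces the fewest misclassifications on either side. The sample scores and the function name below are invented for illustration.

```python
def best_cutting_score(impaired, normal):
    """Return (cut, errors): scores <= cut are classified as impaired,
    and the chosen cut minimizes false negatives + false positives."""
    candidates = sorted(set(impaired) | set(normal))
    best = None
    for cut in candidates:
        false_neg = sum(1 for s in impaired if s > cut)   # impaired called normal
        false_pos = sum(1 for s in normal if s <= cut)    # normal called impaired
        errors = false_neg + false_pos
        if best is None or errors < best[1]:
            best = (cut, errors)
    return best

# Invented, overlapping score distributions for the two groups:
impaired_scores = [4, 5, 6, 6, 7, 8, 9, 11]
normal_scores = [8, 10, 11, 12, 12, 13, 14, 15]
cut, errors = best_cutting_score(impaired_scores, normal_scores)
print(f"cut at <= {cut} with {errors} misclassifications")
```

Because the two distributions overlap, even the optimal cut leaves errors on both sides, and a cut derived from these particular samples need not be optimal for samples drawn elsewhere or with different base rates, which is the core caution of this section.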
Bornstein (1986a) and Bornstein, Paniak, and O’Brien (1987) demonstrated how cutting scores, mostly developed on a small and relatively young normative sample, classified as “impaired” from 57.6% to 100% of normal control subjects in the 60–90 age range.
TABLE 6.2 Behavior Changes that Are Possible Indicators of a Pathological Brain Process
*Many emotionally disturbed persons complain of memory deficits that typically reflect their self-preoccupations, distractibility, or anxiety rather than a dysfunctional brain. Thus memory complaints in themselves are not good indicators of neuropathology. † These changes are most likely to have neuropsychological relevance in the absence of depression, but they can be mistaken for depression. Adapted from Howieson and Lezak, 2002; © 2002, American Psychiatric Association Press.
When the recommended cutting scores are used, these tests generally do identify impaired patients better than chance alone. They all also misdiagnose both intact persons (false positive cases) and persons with known brain impairment (false negative cases) to varying degrees. The nature of the errors of diagnosis depends on where the cut is set: if it is set to minimize misidentification of intact persons, then a greater number of brain impaired patients will be called “normal” by the screening. Conversely, if the test maker’s goal is to identify as many patients with brain damage as possible, more intact persons will be included in the brain damaged group. Only rarely does the cutting score provide a distinct separation between two populations, and then only for tests that are so simple that virtually no cognitively intact adult would fail them. For example, the Token Test, which consists of simple verbal instructions involving basic concepts of size, color, and location, is unlikely to misidentify verbally intact persons as impaired.
Single tests for identifying brain disorders
The use of single tests for identifying brain damaged patients—a popular enterprise several decades ago—was based on the assumption that brain damage, like measles perhaps, can be treated as a single entity. Considering the heterogeneity of brain disorders, it is not surprising that single tests have high misclassification rates (G. Goldstein and Shelly, 1973; Spreen and Benton, 1965). Most single tests, including many that are not well standardized, can be rich sources of information about the functions, attitudes, and habits they elicit. Yet to look to any single test for decisive information about overall cognitive behavior is not merely foolish but can be dangerous as well, since the absence of positive findings does not rule out the presence of a pathological condition.

Usefulness of screening techniques
In the 1940s and 1950s, when the simple “organic” versus “functional” distinction prevailed and brain damage was still thought by many to have some general manifestation that could be demonstrated by psychological tests, screening techniques were popular, particularly for identifying the brain impaired patients in a psychiatric population. As a result of better understanding of the multifaceted nature of brain pathology and of the accelerating development and refinement of other kinds of neurodiagnostic techniques, the usefulness of neuropsychological screening has become much more limited. Screening is unnecessary or inappropriate in most cases referred for neuropsychological evaluation: either the presence of neuropathology is obvious or otherwise documented, or diagnosis requires more than simple screening. Furthermore, the extent to which screening techniques produce false positives and false negatives compromises their reliability for making decisions about individual patients. However, screening may still be useful with populations in which neurological disorders are more frequent than in the general population (e.g., community dwelling elderly people [Cahn, Salmon, et al., 1995]). The most obvious clinical situations in which neuropsychological screening may be called for are examinations of patients entering a psychiatric inpatient service or at-risk groups such as the elderly or alcoholics/substance abusers when they seek medical care. Screening tests are increasingly used in the U.S. and Canada to identify and monitor concussions in sports participants, especially soccer and football (Covassin et al., 2009; Van Kampen et al., 2007). Dichotomizing screening techniques are also useful in research for evaluating tests or treatments, or for comparing specific populations with respect to the presence
or absence of impaired functions. Once a patient has been identified by screening techniques as possibly having a brain disorder, the problem arises of what to do next, for simple screening at best operates only as an early warning system. These patients still need careful neurological and neuropsychological study to determine whether a brain disorder is present and, if so, to help develop treatment and planning for their care as needed.

Evaluating screening techniques
In neuropsychology as in medicine, limitations in predictive accuracy do not invalidate either tests for specific signs or disabilities or tests that are sensitive to conditions of general dysfunction. We have not thrown away thermometers because most sick people have normal temperatures, nor do we reject the electroencephalogram (EEG) just because many patients with brain disorders test normal by that method. Thus, in neuropsychology, each kind of test can be used effectively as a screening device as long as its limitations are known and the information it elicits is interpreted accordingly. For screening purposes, a combination of tests, including some that are sensitive to specific impairment, some to general impairment, and others that tend to draw out diagnostic signs, will make the best diagnostic discriminations. When evaluating tests for screening, it is important to realize that, although neuropsychological testing has proven effective in identifying the presence of brain disorders, it cannot guarantee its absence, i.e., “rule out” brain dysfunction. Not only may cerebral disease occur without behavioral manifestations, but the examiner may also neglect to look for those neuropsychological abnormalities that are present. Inability to prove the negative case in neuropsychological assessment is shared with every other diagnostic tool in medicine and the behavioral sciences. When a neuropsychological examination produces no positive findings, the only tenable conclusion is that the person in question performed within normal limits on the tests taken at that time. While the performance may be adequate for the test conditions at that time of assessment, the neuropsychologist cannot give a “clean bill of health.”
Pattern Analysis

Intraindividual variability
Discrepancy, or variability, in the pattern of successes and failures in a test
performance is called scatter. Variability within a test is intratest scatter; variability between the scores of a set of tests is intertest scatter (Wechsler, 1958). Intratest scatter. Scatter within a test is said to be present when there are marked deviations from the normal pass–fail pattern. On tests in which the items are presented in order of difficulty, it is usual for the subject to pass almost all items up to the most difficult passed item, with perhaps one or two failures on items close to the last passed item. Rarely do cognitively intact persons fail very simple items or fail many items of middling difficulty and pass several difficult ones. On tests in which all items are of similar difficulty level, most subjects tend to do all of them correctly, with perhaps one or two errors of carelessness, or they tend to flounder hopelessly with maybe one or two lucky “hits.” Variations from these two common patterns deserve the examiner’s attention. Certain brain disorders as well as some emotional disturbances may manifest themselves in intratest scatter patterns. Hovey and Kooi (1955) demonstrated that, when taking mental tests, patients with epilepsy who exhibited paroxysmal brain wave patterns (sudden bursts of activity) were significantly more likely to be randomly nonresponsive or forgetful than were psychiatric, brain damaged, or other epileptic patients. Some patients who have sustained severe head injuries respond to questions that draw on prior knowledge as if they had randomly lost chunks of stored information. For example, moderately to severely injured patients as a group displayed more intratest scatter than a comparable control group, although scatter alone did not reliably differentiate brain injured from control subjects on an individual basis (Mittenberg, Hammeke, and Rao, 1989). Variability, both intratest and over time, characterized responses of patients with frontal lobe dementia (Murtha et al., 2002). E.
Strauss, MacDonald, and their colleagues (2002) found a relationship between inconsistency in physical performance and fluctuations on cognitive tests. If scatter is present within test performances, the challenge for the examiner is to assess whether the observed scatter in a given patient is beyond what would occur for the relevant reference group. As few intratest scatter studies for specific diagnostic groups have been undertaken, the examiner can only rely on experience, personal judgment, and what is known about scatter patterns for particular tests (e.g., Crawford, Allan, McGeorge, and Kelly, 1997). Intratest scatter may also be influenced by cultural and language factors (Rivera Mindt et al., 2008).

Intertest scatter. Probably the most common approach to the psychological
evaluation of brain disorders is through comparison of the test score levels obtained by the subject—in other words, through analysis of the intertest score scatter. By this means, the examiner attempts to relate variations between test scores to probable neurological events—or behavioral descriptions in those many cases in which a diagnosis is known. This technique clarifies a seeming confusion of signs and symptoms of behavioral disorder by giving the examiner a frame of reference for organizing and evaluating the data.

Making sense of intraindividual variability
A significant discrepancy between any two or more scores is the basic element of test score analysis (Silverstein, 1982). Any single discrepant score or response error can usually be disregarded as a chance deviation. A number of errors or test score deviations may form a pattern. Marked quantitative discrepancies in a person’s performance—within responses to a test, between scores on different tests, and/or with respect to an expected level of performance—suggest that some abnormal condition is interfering with that person’s overall ability to perform at their characteristic level of cognitive functioning. Brain dysfunction is suspected when a neurological condition best accounts for the patient’s behavioral abnormalities. In order to interpret the pattern of performance in a multivariate examination, the clinician must fully understand the nature of the tests administered, what the various tests have in common and how they differ in terms of input and output modalities, and what cognitive processes are required for successful completion. Appropriate interpretation of the data further requires a thoughtful integration of historical, demographic, and psychosocial data with the examination information.

A 32-year-old doctoral candidate in the biological sciences sustained a head injury with momentary loss of consciousness just weeks before she was to take her qualifying examinations. She was given a small set of neuropsychological tests two months after the accident to determine the nature of her memory complaints and how she might compensate for them. Besides a few tests of verbal, visuospatial, and conceptual functions, the relatively brief examination consisted mainly of tests of attention and memory as they are often most relevant to mild posttraumatic conditions. The patient had little problem with attentional or reasoning tests, whether verbal or visual, although some tendency to concrete thinking was observed.
Both story recall and sentence repetition were excellent; she recalled all of nine symbol–digit pairs immediately after 3 min spent assigning digits to an associated symbol, and seven of the pairs a half hour later (Symbol Digit Modalities Test); and she recognized an almost normal number of words (12) from a list of 15 she had attempted to learn in five trials (Auditory-Verbal Learning Test). However, this very bright woman, whose speaking skills were consistent with her high academic achievement, could not retrieve several words without phonetic cueing (Boston Naming Test); and she gave impaired performances when attempting to learn a series of nine digits (Serial Digit Learning), on
immediate and delayed recall of the 15-word list, and on visual recall on which she reproduced the configuration of the geometric design she had copied but not the details (Complex Figure Test). Thus she clearly demonstrated the ability for verbal learning at a normal level, and her visual recall indicated that she could at least learn the “big picture.” Her successes occurred on all meaningful material and when she had cues; when meaning or cues—hooks she could use to aid retrieval—were absent, she performed at defective levels. Analysis of her successes and failures showed a consistent pattern implicating retrieval problems that compromised her otherwise adequate learning ability. This analysis allowed the examiner to reassure her regarding her learning capacity and to recommend techniques for prodding her sluggish retrieval processes.

Pattern analysis procedures
The question of neuroanatomical or neurophysiological likelihood underlies all analyses of test patterns undertaken for differential diagnosis. As in every other diagnostic effort, the most likely explanation for a behavioral disorder is the one that requires the fewest unlikely events to account for it. Once test data have been reliably scored and appropriate norms have been chosen to convert scores to standard scores or percentiles, the clinician determines whether the pattern of performance is typical of individuals with a particular diagnosis. The many differences in cognitive performance between diagnostic groups and between individuals within these groups can be best appreciated and put to clinical use when the evaluation is based on test score patterns and item analyses taken from tests of many different functions. If the performance fits a diagnostic pattern, the clinician then must consider what would be the behavioral ramifications of this individual’s unique pattern, as even within a diagnostic category, few persons will have an identical presentation. Now that neuroimaging and laboratory technology often provide the definitive neurological diagnosis, how a brain disorder or disease might play out in real life may be the most important issue in the neuropsychological examination. In planning the examination, the examiner will have in mind questions about the patient’s real life functioning, such as potential for training or rehabilitation, return to work or need for assisted living, quality of life, and capacity for interpersonal relationships. These examinations require a fairly broad review of functions. Damage to cortical tissue in an area serving a specific function not only changes or abolishes the expression of that function but changes the character of all activities and functions in which the impaired function was originally involved, depending upon how much the function itself has changed and the extent to which it entered into the activity (see pp.
347–348). A minor or well-circumscribed cognitive deficit may show up on only one or a very few depressed test scores or may not become evident at all if the test battery samples a narrow range of behaviors. Most of the functions that a neuropsychologist examines are complex. In
analyzing test score patterns, the examiner looks for both commonality of dysfunction and evidence of impairment on tests involving functions or skills that are associated neuroanatomically, in their cognitive expression, and with well-described disease entities and neuropathological conditions. First, the examiner estimates a general level of premorbid functioning from the patient’s history, qualitative aspects of performance, and test scores, using the examination or historical indicators that reasonably allow the highest estimate (see Chapter 4). This aids the examiner in identifying impaired test performances. The examiner then follows the procedures for dissociation of dysfunction by comparing test scores with one another to determine whether any factors are consistently associated with high or low scores, and if so, which ones (see p. 131). The functions which contribute consistently to impaired test performances are the possible behavioral correlates of brain dysfunction, and/or represent those areas of function in which the patient can be expected to have the most difficulty. When the pattern of impaired functions or lowered test scores does not appear to be consistently associated with a known or neurologically meaningful pattern of cognitive dysfunction, discrepant scores may well be attributable to psychogenic, developmental, or chance deviations (L.M. Binder, Iverson, and Brooks, 2009). By and large, the use of pattern analysis has been confined to tests in the Wechsler batteries because of their obvious statistical comparability. However, by converting different kinds of test scores into comparable score units, the examiner can compare data from many different tests in a systematic manner, permitting the analysis of patterns formed by the scores of tests from many sources. For example, R.K. Heaton, Grant, and Matthews (1991) converted scores from a large number of tests to a single standard score system. 
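The score-conversion step mentioned above can be sketched in a few lines. This is a hedged illustration only: the test names and normative values below are invented for the example, and the z-score and T-score formulas are the standard psychometric conversions, not the specific system of R.K. Heaton, Grant, and Matthews (1991).

```python
def to_z(raw, norm_mean, norm_sd):
    """Convert a raw score to a z-score using the normative mean and SD."""
    return (raw - norm_mean) / norm_sd

def to_t(z):
    """Convert a z-score to a T-score (mean 50, SD 10)."""
    return 50 + 10 * z

# Hypothetical raw scores with invented normative (mean, SD) values;
# any real use would substitute published norms for each test.
battery = {
    "word_list_recall": (9, 11.5, 2.5),   # (raw, norm mean, norm SD)
    "figure_copy":      (32, 30.0, 4.0),
    "naming":           (48, 54.0, 3.0),
}

for test, (raw, mean, sd) in battery.items():
    z = to_z(raw, mean, sd)
    print(f"{test}: z = {z:+.1f}, T = {to_t(z):.0f}")
```

Once every score sits on the same metric, a performance one or two standard deviations below the others stands out directly, which is the mechanical basis for the cross-test pattern comparisons described in the text.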
INTEGRATED INTERPRETATION

Pattern analysis is insufficient to deal with the numerous exceptions to characteristic patterns, with the many rare or idiosyncratically manifested neurological conditions, and with the effects on test performance of the complex interaction between patients’ cognitive status, their emotional and social adjustment, and their appreciation of their altered functioning. For the examination to supply answers to many of the diagnostic questions and most of the treatment and planning questions requires integration of all the data—from tests, observations made in the course of the examination, and the history of the problem. Some conditions do not lend themselves to pattern analysis beyond the use
of large and consistent test score discrepancies to implicate brain damage. For example, malignant tumors are unlikely to follow a regular pattern of growth and spread (e.g., see Plates x and x). In order to determine which functions are involved and the extent of their involvement, it is usually necessary to evaluate the qualitative aspects of the patient’s performance very carefully for evidence of complex or subtle aberrations that betray damage in some hitherto unsuspected area of the brain. Such painstaking scrutiny may not be as necessary when dealing with a patient whose disease generally follows a well-known and regular course. Test scores alone do not provide much information about the emotional impact of brain damage on the individual patient’s cognitive functioning or how fatigue may alter performance. However, behavior during the examination is likely to reveal a great deal about reactions to the disabilities and how these reactions in turn affect performance efficiency. Emotional reactions of brain damaged patients can affect their cognitive functioning adversely. The most prevalent and most profoundly handicapping of these are anxiety and depression. Euphoria and carelessness, while much less distressing to the patient, can also seriously interfere with expression of a patient’s abilities. Many brain impaired patients have other characteristic problems that generally do not depress test scores but must be taken into account in rehabilitation planning. These are motivational and control (executive function) problems that show up in a reduced ability to organize, to react spontaneously, to initiate goal-directed behavior, or to carry out a course of action independently. They are rarely reflected in test scores since almost all tests are well structured and administered by an examiner who plans, initiates, and conducts the examination (see Chapter 16 for tests that elicit these problems).
Yet, no matter how well patients do on tests, if they cannot develop or carry out their own course of action, they are incompetent for all practical purposes. Such problems become apparent during careful examination, but they usually must be reported descriptively unless the examiner sets up a test situation that can provide a systematic and scorable means of assessing the patient’s capacity for self-direction and planning.
7 Neuropathology for Neuropsychologists

In order to make diagnostic sense out of the behavioral patterns that emerge in neuropsychological assessment, the practitioner must be knowledgeable about the neuropsychological presentation of many kinds of neurological disorders and their underlying pathology (Hannay, Bieliauskas, et al., 1998). This knowledge gives the examiner a diagnostic frame of reference that helps to identify, sort out, appraise, and put into a diagnostically meaningful context the many bits and pieces of observations, scores, family reports, and medical and social history that typically make up the material of a case. Furthermore, such a frame of reference should help the examiner know what additional questions need be asked or what further observations or behavioral measurements need be made to arrive at the formulation of the patient’s problems. This chapter can only sketch broad and mostly behavioral outlines of such a frame of reference. It cannot substitute for knowledge of neuropathology gained from contact with many patients and their many different neuropathological disorders at many different stages in their course and—ideally—in a training setting. However, with its predominantly neuropsychological perspective, this chapter may help to crystallize understandings gained in clinical observations and training, and enhance the clinician’s sensitivity to the behavioral aspects of the conditions discussed here. The major disorders of the nervous system having neuropsychological consequences will be reviewed according to their customary classification by known or suspected etiology or by the system of primary involvement. While this review cannot be comprehensive, it covers the most common neuropathological conditions seen in the usual hospital or clinic practice in western countries.
For more detailed presentations of the medical aspects of these and other less common conditions that have behavioral ramifications see Asbury et al., Diseases of the Nervous System (2002); Brazis et al., Localization in Clinical Neurology, 5th ed. (2007); Gilman, Oxford American Handbook of Neurology (2010); Ropper and Samuels’ Adams and Victor’s Principles of Neurology, 9th ed. (2009). As in every aspect of neuropsychological assessment or any other personalized clinical assessment procedure, the kind of information the
examiner needs to know will vary from patient to patient. For example, hereditary predisposition is not an issue with infectious disorders or a hypoxic episode (a condition of insufficient oxygenation) during surgery, but it becomes a very important consideration when a previously socially appropriate person begins to exhibit uncontrollable movements and poor judgment coupled with impulsivity. Thus, it is not necessary to ask every candidate for neuropsychological assessment for family history going back a generation or two, although family history is important when the diagnostic possibilities include a hereditary disorder such as Huntington’s disease. In certain populations, the incidence of alcohol or drug abuse is so high that every person with complaints suggestive of a cerebral disorder should be carefully questioned about drinking or drug habits; yet for many persons, such questioning becomes clearly unnecessary early in a clinical examination and may even be offensive. Moreover, a number of different kinds of disorders produce similar constellations of symptoms. For example, apathy, affective dulling, and memory impairment occur in Korsakoff’s psychosis, with heavy exposure to certain organic solvents, as an aftermath of severe traumatic brain injury or herpes encephalitis, and with conditions in which the supply of oxygen to the brain has been severely compromised. Many conditions with similar neuropsychological features can be distinguished by differences in other neuropsychological dimensions. Other conditions may be best identified in terms of the patient’s history, associated neurological symptoms, and the nature of the onset and course of the disorder. The presence of one kind of neuropathological disorder does not exclude others, nor does it exclude emotional reactions or psychiatric and personality disorders.
With more than one disease process affecting brain function, the behavioral presentation is potentially complex with a confusing symptom picture: e.g., Chui, Victoroff, and their colleagues (1992) suggested the diagnostic category of “mixed dementia” for those dementing conditions involving more than one neuropathological entity. Also, some conditions may increase the occurrence of other disorders; e.g., traumatic brain injury is a risk factor for Alzheimer’s disease and stroke is associated with Alzheimer’s disease (Hachinski, 2011), and alcoholism increases the likelihood of head injuries from falling off bar stools, motor vehicle accidents, or Saturday night fights. No single rule of thumb will tell the examiner just what information about any particular patient is needed to make the most effective use of the examination data. Whether the purpose of the examination is diagnosis or
delineation of the behavioral expression of a known condition, knowledge about the known or suspected condition(s) provides a frame of reference for the rational conduct of the examination.

TRAUMATIC BRAIN INJURY

Humpty Dumpty sat on a wall.
Humpty Dumpty had a great fall.
And all the king’s horses and all the king’s men
Couldn’t put Humpty together again.
Mother Goose
Traumatic brain injury (TBI) generally refers to injury involving the brain resulting from some type of impact and/or acceleration/deceleration of the brain. The National Institutes of Health and other government agencies in the United States sponsored an international and interagency working group to establish a consensus statement: “TBI is defined as an alteration in brain function, or other evidence of brain pathology, caused by an external force” (D.K. Menon et al., 2010, p. 1637). This brief definition provides a consensus standard but does not address severity, how the effects of TBI are assessed, or neurobehavioral outcome. Some of the terminology related to TBI classification and severity is relevant for these important issues. Head injury is still used synonymously with TBI, but in some cases it refers to injury of other head structures such as the face or jaw. Most TBIs are closed in that the skull remains intact and the brain is not exposed. Closed head injury (CHI) is also referred to as blunt head trauma or blunt injury. The skull can be fractured and the injury may still be a CHI as long as the meningeal covering of the brain, or the brain itself, is not breached by penetration through the skull. Penetrating head injuries (PHI), sometimes called open head injuries, include all injuries from any source in which the skull and dura are penetrated by missiles or other objects. While there are commonalities between CHI and PHI, not only the nature of the injury but also the pathophysiological processes set in motion by damage to the brain may differ in these two types of injuries. For some clinicians, the term TBI can include other acquired etiologies such as stroke and anoxia; the term acquired brain injury (ABI) refers to just about anything that can damage brain tissue and may be applied to TBIs.
Thus the meaning of TBI continues to be somewhat confusing and needs to be clarified in the literature as well as by the clinician when evaluating patients
with such injuries. In this book TBI refers strictly to the effects of CHI and/or PHI. Another term is concussion, considered a mild form of TBI (p. 183). TBI is the most common cause of brain damage in children and young adults (for reviews see Rutland-Brown et al., 2006; Summers et al., 2009; Thurman et al., 1999). Modern medical techniques for the management of acute brain conditions are saving many accident victims who ten or twenty years ago would have succumbed to the metabolic, hemodynamic, and other complications that accompany severe TBI (Diedler et al., 2009; Jagannathan et al., 2007; M.E. Tang and Lobel, 2009). As a result, an ever-increasing number of survivors of severe TBI, mostly children and young adults at the time of injury, are living with this relatively new and usually tragic phenomenon of being physically fit young people whose brains have been significantly damaged for their lifetime. The secondary or delayed injury to the brain from a variety of sources such as ongoing hemorrhage, hypoxia (insufficient oxygen), ischemia (insufficient or absent blood supply), elevated intracranial pressure (ICP) and changes in metabolic function, coagulopathy (impaired blood clotting), and pyrexia (fever) may be as important as, or even more important than, the immediate direct damage to and disruption of brain tissue and neural circuitry (M.W. Greve and Zink, 2009; Maas et al., 2008; Povlishock and Katz, 2005). Better understanding of these conditions has led to the development of specialized clinical monitoring techniques for more serious injuries (Guérit et al., 2009; Helmy et al., 2007) and investigations into the basic mechanisms underlying these clinical changes. Knowledge from these studies stimulates the search for efficacious pharmacological treatments (Narayan, Michel, et al., 2002; Povlishock, 2008; Zitnay et al., 2008) and other interventions such as hypothermia (Marion and Bullock, 2009) and hyperbaric oxygen therapies (Rockswold et al., 2007).
Research findings that seem promising in the laboratory may not prove to be clearly efficacious in clinical trials in which the same rigorous control over a myriad of variables, including genetic and injury characteristics, is not possible. Prevalence estimates and incidence reports in epidemiological studies vary depending on such decisions as whether to include all grades of severity, to count deaths, to limit the study to hospitalized patients, etc. (Berrol, 1989; J.F. Kraus and Chu, 2005). Incidence of TBI also varies with the study site, as urban centers account for a higher incidence of TBI than rural areas (Gabella et al., 1997; F.B. Rogers et al., 1997). In the United States in 2003, based on Centers for Disease Control (CDC) data, there were an estimated 1,565,000 TBIs (see Rutland-Brown et al., 2006). Of these, approximately 1,224,000
were evaluated in an emergency room with 290,000 hospitalized and 51,000 deaths. Also based on CDC data, it was estimated that in 2005 approximately 1.1% of the U.S. civilian population, or 3.17 million individuals, had some form of long-term disability associated with TBI (Corrigan, Selassie, and Orman, 2010; Zaloshnja et al., 2008). The estimated current incidence of all types of TBI varies across studies but averages about 150 per 100,000 (J.F. Kraus and Chu, 2005), considerably lower than a previous estimate of 220 per 100,000 by the same senior author a decade earlier (J.F. Kraus, McArthur, et al., 1996). However, for the most common type of CHI, mild TBI, many of the injured never seek medical care. If their numbers were included in epidemiological studies, the annual incidence rate could be as high as ~500/100,000 population (Bazarian et al., 2005; Ryu et al., 2009). Higher rates have been reported for South Africa (316 per 100,000; Nell and Brown, 1991) and South Australia (322 per 100,000; see Hillier et al., 1997). Whether due to improved driving habits or inclusion of data from all parts of Australia, the estimate for 2004 was 107 per 100,000 (O’Rance and Fortune, 2007). Some countries (e.g., England, Japan, Sweden) have posted half as many fatal injuries as the United States (J.F. Kraus, McArthur, et al., 1996; J.T.E. Richardson, 2000). While different across countries, these data point out the universal nature and high frequency of TBI. Even estimates of mortality rates vary greatly (J.F. Kraus and Chu, 2005), especially by severity of injury (Udekwu et al., 2001). Mortality rates may vary over time for such reasons as changing hospital admission practices and effective preventive programs (Engberg and Teasdale, 2001). In France, almost 8,000 deaths were from motor vehicle accidents (MVAs) in 2001; following a strict system for taxing speeders, the French MVA death rate was below 4,000 for 2010 (J-L Truelle, 2011, personal communication [mdl]).
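The relation between raw case counts and the “per 100,000” incidence figures quoted above is simple arithmetic, sketched below. The case count is the CDC-based 2003 estimate quoted in the text; the population figure (roughly 290 million U.S. residents in 2003) is an assumption not stated in the text.

```python
def rate_per_100k(cases, population):
    """Annual incidence expressed per 100,000 population."""
    return cases / population * 100_000

# ~1,565,000 TBIs (CDC-based estimate, 2003) over an assumed U.S. population
# of about 290 million yields a rate on the order of the ~500/100,000 figure
# cited when all severities, including untreated mild TBI, are counted.
print(f"{rate_per_100k(1_565_000, 290_000_000):.0f} per 100,000")
```

The much lower 150 per 100,000 average cited from Kraus and Chu (2005) reflects narrower case definitions (e.g., hospitalized patients only), which is why choice of denominator and inclusion criteria dominates the variation across epidemiological studies.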
After the initial period of high risk, long-term mortality from TBI is primarily related to the late effects of injury, lack of functional independence, age, and tube feeding (Baguley, Slewa-Younan, et al., 2000; Harrison-Felix et al., 2009; Shavelle et al., 2001). Posttraumatic epilepsy (Annegers, Hauser, et al., 1998), increased lifetime incidence of neuropsychiatric sequelae (Holsinger et al., 2002), and late life dementing illness (Plassman, Havlik, et al., 2000) are significant late sequelae associated with TBI. McGrath (2011) reports studies of retired professional football (USA style) players who are five to 19 times more likely to become demented than the general population. He also notes that 14 players have been diagnosed with amyotrophic lateral sclerosis, a morbidly paralyzing disease popularly called “Lou Gehrig’s disease” after a baseball hero who may actually have had concussion-related
trauma, not the condition that bears his name. The peak ages for TBI are in the 15–24 year range, with high incidence rates also occurring in the first five years and for elderly persons (J.F. Kraus and Chu, 2005; Love et al., 2009; J.T.E. Richardson, 2000). The most common causes of TBI are falls (Helps et al., 2008; Jager et al., 2000; Naugle, 1990) and transportation related injuries (CDC, 1997; J.F. Kraus and Chu, 2005; Masson, Thicoipe, et al., 2001). More than half the injuries incurred by infants and young children and by persons in the 64 and older age range are due to falls (Love et al., 2009). Motor vehicle accidents (MVAs) account for half of all head injuries in the other age groups (Cohadon et al., 2002; Masson, Thicoipe, et al., 2001). Motorcyclists have a higher mortality rate than occupants of motor vehicles, but pedestrians in traffic accidents have the highest rate of all (de Sousa et al., 1999; E. Wong et al., 2002). Helmets have reduced head injuries in sports such as bicycling, hockey, horseback riding, and football, although not all helmets reduce craniofacial injuries effectively (S.W. Marshall et al., 2002; P.S. Moss et al., 2002; D.C. Thompson et al., 2003). In MVAs, helmets reduce mortality and morbidity, but significant brain injury occurs even when helmets are worn (Croce et al., 2009). While helmets may protect the skull and surface of the head, the internal movement dynamics from the trauma still occur, producing shear-strain and mechanical deformation of the brain (Hamberger et al., 2009; Motherway et al., 2009). Some have also argued that wearing a helmet creates a sense of invulnerability, thus encouraging increased risk taking by the wearer, especially in sports. Clearly, research supports the use of helmets. Excepting the over-65 age group in which women outnumber men, men sustain injuries about twice as frequently as women, with this sex differential greatest at the peak trauma years (Cohadon et al., 2002; J.F.
Kraus and Chu, 2005; Naugle, 1990). Lower socioeconomic status, unemployment, and lower educational levels are also risk factors, increasing the likelihood of TBIs due to falls or assaults more than for other groups (Cohadon et al., 2002; Naugle, 1990). “Typically, TBI occurs in young working class males, who may have had limited educational attainment and who may not have had a stable work history prior to injury” (Ponsford, 1995). Violent TBI (e.g., assault with a blunt or penetrating object, gunshot wound), inflicted by oneself or another, is higher for those who have less than a high school degree (48% vs. 39%), are unemployed (44% vs. 21%), are male (86% vs. 72%), and have a higher blood alcohol level at the time of injury (92.9 vs. 67 mg/dl), and also for African Americans (Hanks et al., 2003). Preexisting alcohol and substance abuse are major factors contributing to
the incidence of TBI (Parry-Jones et al., 2006). They are closely associated with risk-taking behavior and being “under the influence” at the time of injury. In one series of patients, at least 29% had some prior central nervous system condition, including history of alcoholism (18%) and head injury (8%) (J.L. Gale, Dikmen, et al., 1983), but higher estimates for heavy drinkers have been reported (Bombardier et al., 2002; Cohadon et al., 2002; Rimel, Giordani, et al., 1981). While transportation accidents and falls are the leading causes of TBI, assaults (whether by blows to the head or a penetrating weapon), sports and recreational activities, and the workplace together account for about 25% to 40% of reported injuries (J.F. Kraus and Chu, 2005; Naugle, 1990; R.S. Parker, 2001). The behavioral effects of all brain lesions hinge upon a variety of factors, such as severity, age, site of lesions, and premorbid personality (see Chapter 8). The neuropsychological consequences of head trauma also vary according to how the injury happened, e.g., whether MVA related, as a result of a blow to the head, or from a missile furrowing through it. With knowledge of the kind of head injury, its severity, and the site of focal versus diffuse damage, experienced examiners can generally predict the broad outlines of their patients’ major behavioral and neuropsychological disabilities and the likely psychosocial prognosis. In contemporary practice, some form of brain imaging is performed on almost all patients presenting with acute TBI when medically indicated, thus providing the clinician with information about the location(s) and extent of neuropathology detectable by neuroimaging.
Careful neuropsychological examination can demonstrate the individual features of the patient’s disabilities, such as whether verbal or visual functions are more depressed, and the extent to which retrieval problems, frontal inertia, or impaired learning ability each contribute to the patient’s poor performance on memory tests. Yet, the similarities in the behavioral patterns of many patients, especially those with CHI, tend to outweigh individual differences. Furthermore, neuropsychological studies serve as a significant link between patients’ experienced neurocognitive and neurobehavioral deficits and the lesions observed in neuroimaging studies.
Severity Classifications and Outcome Prediction

The range of TBI severity begins with impacts so mild as to leave no behavioral traces, resulting in no lasting structural injury to the brain and producing only the briefest of transient and temporary changes in neurological
function (Ommaya et al., 2002). Everyone has had a bruised head from bumping into a protruding shelf or being suddenly jostled while in a car or bus with no lasting ill effects; such injuries do not reach the threshold that would damage the brain and do not represent a TBI. The tough encasing skull and the configuration of the brain within it handle these movements without any damage whatsoever. The internal structure of the skull, as well as the configuration of the brain’s surface, holds it in place for most routine movements (Bigler, 2007b; Cloots et al., 2008; J. Ho and Kleiven, 2009). At the other end of the severity continuum are patients in prolonged coma or a vegetative state from catastrophic brain injury in which most regions of the brain have been damaged (H.S. Levin, Benton, Muizelaar, and Eisenberg, 1996) and where neuroimaging studies expose the most serious neuropathological abnormalities. Neuropsychological assessment is mostly concerned with patients between these two extremes. TBI severity generally relates to behavioral and neuropsychological outcomes (Cohadon et al., 2002; H. S. Levin, 1985; J.T.E. Richardson, 2000). The most far-reaching effects of TBI involve personal and social competence, more so than even the well-studied cognitive impairments. Relatively few patients who have sustained severe head injury return to competitive work similar to what they did prior to injury, and those who do often can hold jobs only in the most supportive settings (Hsiang and Marshall, 1998; Livingston et al., 2009; Shames et al., 2007), even despite relatively normal scores on tests of cognitive functions. Considering all levels of injury, van Velzen and colleagues (2009) observed that only 40% returned to work after two years. Quality of life as reflected in patient and family satisfaction and distress also tends to be increasingly compromised with increased severity of injury (Destaillats et al., 2009; Lezak and O’Brien, 1990; Ponsford, 1995). 
When discussing severity ratings and outcome prediction, it is equally important to note discrepancies from these predictions. Prediction exceptions occur at all points along the severity continuum. Thus patients whose injuries seem mild, as measured by most accepted methods, may have relatively poor outcomes, both cognitively and socially; conversely, some who have been classified as moderately to severely injured have enjoyed surprisingly good outcomes (Foreman et al., 2007; Newcombe, 1987). Moreover, the accuracy of an outcome prediction may depend on when outcome is evaluated. Some patients report more symptoms a year after the accident than after the first month (Dikmen, Machamer, Fann, and Temkin, 2010). While complaints of physical symptoms decreased, more emotion-related symptoms (temper, irritability, and anxiety) were documented at a year post injury.

Behavior-based classification systems for TBI severity
The need to triage patients both for treatment purposes and for outcome prediction has led to the development of a generally accepted classification system based on the presence, degree, and duration of coma, the Glasgow Coma Scale (GCS) (Jennett and Bond, 1975; Matis and Birbilis, 2008; see Table 18.2, p. 784). Measurement of severity by means of the GCS depends upon the evaluation of both depth and duration of altered consciousness. Coma duration alone is a poor predictor of outcome for the many patients who have brief periods of loss of consciousness (LOC) up to 20–30 minutes (Gronwall, 1989), but it is a good predictor for more severe injuries (J.F. Kraus and Chu, 2005; B.[A.] Wilson, Vizor, and Bryant, 1991). Duration of posttraumatic amnesia (PTA) can also help determine the presence and severity of a TBI. Brief or no PTA is associated with mild injury; increasing PTA duration is associated with more severe injury (see p. 185 for methods of measuring PTA; see also E.A. Shores et al., 2008).

At the mildest end of the TBI spectrum is concussion, a term that has been an issue in TBI classification. Because concussion is the mildest form of TBI, definitional statements of concussion represent the minimal standards for the presence of a brain injury, even one with only transient, evident symptoms. Questions concerning the nature and duration of concussion symptoms have created considerable controversy about this condition (R.W. Evans, 1994, 2010; L.K. Lee, 2007). Three consensus-based documents that now define concussion—and therefore mild TBI—are probably most relevant to neuropsychology (there are more, but reviewing them is beyond the scope of this chapter). The oldest is the 1995 American Congress of Rehabilitation Medicine (ACRM) definition (see Table 7.1). This ACRM definition has been endorsed in the National Academy of Neuropsychology’s position paper on “Recommendations for diagnosing mild traumatic brain injury” (R.M. Ruff, Iverson, et al., 2009, p. 184).

TABLE 7.1 Diagnostic Criteria for Mild TBI by the American Congress of Rehabilitation Medicine, Special Interest Group on Mild Traumatic Brain Injury
Note. Developed by the Mild Traumatic Brain Injury Committee of the Head Injury Interdisciplinary Special Interest Group (1993).
Another set of diagnostic criteria for concussion comes from the Third International Conference on Concussion in Sport (ICCS) (P. McCrory, Meeuwisse, et al., 2009):

Concussion is defined as a complex pathophysiological process affecting the brain, induced by traumatic biomechanical forces. Several common features that incorporate clinical, pathologic, and biomechanical injury constructs that may be utilized in defining the nature of a concussive head injury include:
1. Concussion may be caused by a direct blow to the head, face, neck, or elsewhere on the body with an “impulsive” force transmitted to the head.
2. Concussion typically results in the rapid onset of short-lived impairment of neurologic function that resolves spontaneously.
3. Concussion may result in neuropathologic changes, but the acute clinical symptoms largely reflect a functional disturbance rather than a structural injury.
4. Concussion results in a graded set of clinical symptoms that may or may not involve loss of consciousness. Resolution of the clinical and cognitive symptoms typically follows a sequential course; however, it is important to note that in a small percentage of cases, postconcussive symptoms may be prolonged.
5. No abnormality on standard structural neuroimaging studies is seen in concussion.
The ICCS definition was intended for “… care of injured athletes, whether recreational, elite, or professional level” (McCrory, Meeuwisse, et al., 2009, p. 756). However, these authors also note that “there is still not professional unanimity concerning sports concussion” (p. 756). The ICCS document recommends the list of concussion symptoms in the Sport Concussion Assessment Tool (SCAT2) (see Table 7.2) for diagnosis, but limits its application to “… the majority (80%–90%) of concussions [which] resolve in a short (7- to 10-day) period although the time frame may be longer in children and adolescents” (p. 757). These concussion criteria were not intended as emergency department (ED) guidelines for hospital TBI evaluations, as the dynamics of injuries from nonsports sources may be very different from what occurs in sports concussion.

TABLE 7.2 Selected Signs and Symptoms of a Concussion, Adapted from the Sport Concussion Assessment Tool (SCAT2) and Halstead and Walter (2010)
Note. Concussion should be suspected in the presence of any one or more of the above symptoms following some form of head injury.
Many sports concussions, as well as those that occur at home and in other recreational or leisure settings, are never evaluated in the ED and have very brief and transient effects with no detectable sequelae (M. McCrea, Pliskin, et al., 2008). Athletes are susceptible to repeated concussive and other TBIs and thus have their own set of potential pathological consequences and neuropsychological sequelae that may differ from those of nonsport-related TBIs (McKee, Cantu, et al., 2009; McKee, Gavett, et al., 2010) (see pp. 221–223). Once it has been determined that an individual has sustained a brain injury, at whatever severity level, that person should be considered a candidate for neuropsychological assessment of possible cognitive and/or neurobehavioral sequelae.

Three position papers from the National Academy of Neuropsychology discuss the neuropsychological correlates of brain injury that may interfere with real life functioning. One dealing with sports concussion offers assessment recommendations with conclusions similar to those of the ICCS (Moser, Iverson, et al., 2007). The others concern the diagnosis of mild TBI occurring as a result of military/combat-related injuries (McCrea, Pliskin, et al., 2008) and mild TBI in civilian head injury (Ruff, Iverson, et al., 2009). The latter paper provides useful direction for the initial evaluation of mild TBI:

The diagnosis of mild TBI is based on injury characteristics. Neuroimaging is adjunctive, but in the absence of positive findings not conclusive. Neuropsychologic testing examines the consequences of a mild TBI, but cannot be used as the basis for the initial diagnosis, which must be determined on the basis of LOC, PTA, confusion and disorientation, or neurologic signs. It is well established that neuropsychologic test results can also be influenced by numerous demographic, situational, preexisting, co-occurring, and injury-related factors. Therefore, the diagnosis of a mild TBI is primarily based on a clinical interview, collateral interviews, and record review. Records of the day of injury and the first few medical contacts following the date of injury can be most helpful for an accurate diagnosis. However, records that contain an initial GCS of 15 are insufficient to rule out a mild TBI. Additional information is necessary. A thoughtful and deliberate approach should be used that retrospectively assesses the presence of loss or altered consciousness, gaps in memory or amnesia (retrograde and posttraumatic), and focal neurologic signs. One cannot assume that such a deliberate approach was taken by health care providers at the scene or in the emergency department (p. 9).
The most commonly used scale for assessing the presence and initial severity of TBI is the GCS, recorded by paramedics at the scene of an injury or in the ED or hospital. While valuable, it has limitations. Like any other predictor of human behavior, the GCS is not appropriate for many cases (Matis and Birbilis, 2008). A single GCS score without data on when it was determined and the status of other pertinent variables at the time (e.g., clinical signs, blood alcohol level and level of recreational or prescribed drugs, sedation for agitation, amount and timing of drugs administered earlier or currently, swelling and discoloration, intubation, facial injuries, anesthesia for surgery, CT scan findings) can lead to an inaccurate assessment of the severity of injury (see p. 186).

Many different kinds of data, including GCS scores from the first 48–72 hours postinjury, may be required to establish the severity of injury in some patients. For example, persons with a GCS of 15 but abnormalities on the CT scan are properly classified as “complicated mild TBI,” yet they often perform on neuropsychological tests more like individuals with a moderate TBI (Kashluba et al., 2008; R.M. Ruff, Iverson, et al., 2009). Persons who enter the TBI trauma system with little or no loss of consciousness but who suffer significant deterioration in mental status, usually within the first 72 hours postinjury from a delayed hematoma, cerebral edema, or other trauma-related problems, are likely to be misclassified by an early GCS score (Servadei et al., 2001; Styrke et al., 2007). A patient who clearly has a severe head injury but recovers consciousness within the first 24 hours might be misclassified if the best day 1 GCS score (highest GCS score in first 24 hours) is used as a measure of severity.
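For readers who wish to see the conventional GCS severity bands in concrete form, the following sketch applies the widely used cutoffs (severe 3–8, moderate 9–12, mild 13–15) together with the “complicated mild” distinction discussed above. The function name and the single CT flag are illustrative simplifications, not a substitute for the clinical context the text emphasizes: a lone score, without timing and confounding variables, can mislead.

```python
def classify_tbi_by_gcs(gcs_score, ct_abnormal=False):
    """Illustrative TBI severity classification from a Glasgow Coma
    Scale score (valid range 3-15), using the conventional bands:
    severe 3-8, moderate 9-12, mild 13-15. A GCS in the mild range
    with CT abnormalities is labeled "complicated mild," following
    the distinction discussed in the text (Kashluba et al., 2008).
    """
    if not 3 <= gcs_score <= 15:
        raise ValueError("GCS scores range from 3 to 15")
    if gcs_score <= 8:
        return "severe"
    if gcs_score <= 12:
        return "moderate"
    # GCS 13-15: mild, unless imaging shows structural abnormality
    return "complicated mild" if ct_abnormal else "mild"
```

As the text notes, a patient scoring 15 with an abnormal CT would be flagged here as complicated mild even though the raw score alone suggests a trivial injury.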
Moreover, patients with left-lateralized penetrating head injury (PHI) are more likely to suffer loss of consciousness (LOC) and inability to respond verbally than those whose injuries are confined to the right side of the brain; and the duration of coma for those with right-sided lesions tends to be
shorter than when lesions are on the left (Salazar, Martin, and Grafman, 1987). As an additional problem, alcohol intoxication can spuriously lower a GCS score such that the higher the blood alcohol level at time of injury, the more likely it is that the GCS score will improve when reevaluated at least six hours later (Stuke et al., 2007).
Some clinicians rely instead on PTA to measure the severity of the injury (e.g., Bigler, 1990a; M.R. Bond, 1990; W.R. Russell and Nathan, 1946; see Table 7.3). Not surprisingly, duration of PTA correlates well with GCS ratings (H.S. Levin, Benton, and Grossman, 1982), except for some finer scaling at the extremes. N. Brooks (1989) observed that PTA duration (which begins at time of injury and includes the coma period) typically lasts about four times the length of coma. Early difficulties in defining, and therefore determining, the duration of PTA restricted its usefulness (Jennett, 1972; Macartney-Filgate, 1990). Standardized measures such as the Revised Westmead Post-Traumatic Amnesia Scale (Shores, Lammel, et al., 2008) and the Galveston Orientation and Amnesia Test (GOAT, pp. 786–788) provide uniform formats for its measurement. However, some clinical challenges in establishing PTA remain. While it is generally agreed that PTA does not end when the patient begins to register experience again but only when registration is continuous, deciding when continuous registration returns may be difficult with confused or aphasic patients (Gronwall and Wrightson, 1980). Moreover, many patients with relatively mild TBI are discharged home while still in PTA or never seek emergency medical care in the first place. An examiner at some later date can only estimate PTA duration from reports by the patient or family members, who often have less than reliable memories. These considerations have led such knowledgeable clinicians as Jennett (1979) and N. Brooks (1989) to conclude that fine-tuned accuracy of estimation is not necessary; judgments of PTA in the larger time frames of hours, days, or weeks will usually suffice for clinical purposes (see Table 7.3).

TABLE 7.3 Estimates of Injury Severity Based on Posttraumatic Amnesia (PTA) Duration
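The broad PTA time frames just described can be rendered as a simple lookup. The cutoffs below follow the classic Russell-type bands that descend from W.R. Russell and Nathan (1946) and later extensions; the exact bands vary by author, so this is a hedged illustration rather than a reproduction of the authors' Table 7.3, and the function name is hypothetical.

```python
def estimate_severity_from_pta(pta_hours):
    """Map posttraumatic amnesia (PTA) duration, in hours, to an
    injury-severity estimate using classic Russell-type bands.
    Cutoffs are illustrative; clinicians judge PTA in coarse time
    frames (hours, days, weeks), as the text emphasizes.
    """
    bands = [
        (5 / 60,      "very mild"),     # under 5 minutes
        (1,           "mild"),          # 5-60 minutes
        (24,          "moderate"),      # 1-24 hours
        (24 * 7,      "severe"),        # 1-7 days
        (24 * 7 * 4,  "very severe"),   # 1-4 weeks
    ]
    for upper_bound, label in bands:
        if pta_hours < upper_bound:
            return label
    return "extremely severe"           # more than 4 weeks
```

Note that because PTA typically lasts about four times the length of coma (N. Brooks, 1989), even a short coma implies a substantially longer PTA period under such a scheme.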
Hypertension

Hypertension is generally defined as systolic pressure > 140 mm Hg and diastolic (heart beat phase when heart muscle relaxes allowing blood to reenter it) pressure > 90 mm Hg. A major precursor of heart attacks and strokes, hypertension in itself may alter brain substance and affect cerebral functioning (Johansson, 1997). The most usual risk factors for hypertension include obesity, excessive use of salt, excessive alcohol intake, lack of exercise, and tobacco use (N.M. Kaplan, 2001). Cerebrovascular risk factors in midlife appear to increase the likelihood of vascular cognitive impairment in later life (DeCarli et al., 2001; Kilander et al., 2000). Thus, young hypertensive patients may be more at risk for cognitive impairments than their older counterparts as the cumulative effects of elevated blood pressure take their toll later in life (Waldstein, Jennings, et al., 1996). Moreover, even people who have normal blood pressure at age 55 have a 90% lifetime risk of developing hypertension (Chobanian et al., 2003).

A review of studies by Birns and Kalra (2009) shows that the relationship between hypertension and cognitive function is complex. Cross-sectional studies find mixed relationships: many report no correlation between hypertension and cognitive impairment, others find low blood pressure associated with nearly as much cognitive decline as hypertension, and still others report a U-shaped association. Hypertension is more consistently linked with cognitive decline in longitudinal studies. Similar findings have been reported by the Baltimore Longitudinal Aging Study of 829 participants aged 50 and older (Waldstein, Giggey, et al., 2005). Cross-sectional and longitudinal correlations of blood pressure with cognitive function were predominantly nonlinear and moderated by age,
education, and antihypertensive medications. The Framingham Study Group, reporting on their 2,123 participants in the 55 to 89 age range, found no cognitive changes associated with hypertension (M.E. Farmer, White, et al., 1987); but upon reanalysis of tests taken 12 to 14 years later, hypertension of longer duration was associated with poorer cognitive performance (M.E. Farmer, Kittner, et al., 1990).

Modern neuroimaging techniques and higher-power MRI, especially with 3 Tesla magnets, are more likely than older techniques to show microvascular ischemic changes or small vessel ischemic disease in elderly patients undergoing scans. The presence of such abnormalities on MRI often leads to a diagnosis of small vessel ischemic disease and, if cognitive impairment is suspected, vascular (or multi-infarct) dementia (see below). The detection of these changes, however, is at least partly an artifact of the more sensitive diagnostic measures: the “magnifying glass” of high-power MRI shows “lesions” that were not observable before. What matters in the end is whether the patient has cognitive/behavioral manifestations. When cognitive deficits develop, they usually consist of impaired attention, information processing speed, and executive function (J.T. O’Brien et al., 2003). Tests requiring executive control of attention and speed are particularly sensitive (e.g., Digit Symbol or one of its variations, Trail Making Test, and the Stroop Test) (van Swieten et al., 1991; Verdelho et al., 2007). The effects of antihypertensive medications on cognition and quality of life vary. Favorable effects have been reported with ACE (angiotensin-converting enzyme) inhibitors and angiotensin II receptor antagonists (Fogari and Zoppi, 2004).
However, drowsiness and listlessness can occur with methyldopa (Aldomet) (Lishman, 1997; Pottash et al., 1981), and β-blockers such as propranolol have been associated with confusion and impaired cognition, especially in elderly persons (Roy-Byrne and Upadhyaya, 2002; M.A. Taylor, 1999). Other studies suggest no significant cognitive changes with these medications (e.g., G. Goldstein, Materson, et al., 1990; Pérez-Stable et al., 2000). Studies of antihypertensive medication effects on quality of life find varying patterns on such measurement categories as “general well-being,” “sexual dysfunction,” “work performance,” and “life satisfaction” (Croog et al., 1986; Fogari and Zoppi, 2004). In a comparison of overweight women with and without hypertension, more hypertensive women scored in the negative direction than nonhypertensive women on seven (of eight) measures of well-being (e.g., General Health, Vitality, Social Functioning) (Kleinschmidt et al., 2000). They also had significantly higher scores on the Beck Depression Inventory as well as on self-report measures of fatigue, anxiety, and “vision loss.” Hypertensive women were taking more medications than the nonhypertensives, raising the chicken-egg question of whether medications affected the quality of life of these women, or “perhaps the use of many medications relates to the severity of symptoms and concurrent problems associated with [hypertension]” (p. 324).
Prevention of hypertension or keeping it under control is important for preserving wellness. When lifestyle changes are not enough, antihypertensive medicines can be effective (Pedelty and Gorelick, 2008). Two or more antihypertensive medicines may be needed to achieve optimal control. For a list of common classes of oral antihypertensive drugs see “The Seventh Report to the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure”(Chobanian et al., 2003).
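As a concrete illustration of the thresholds discussed in this section, the sketch below assigns a single blood-pressure reading to the categories defined in the JNC7 report (Chobanian et al., 2003): normal, prehypertension, and hypertension stages 1 and 2. The function name is illustrative; per JNC7, when the systolic and diastolic values fall into different categories, the higher category governs.

```python
def jnc7_bp_category(systolic, diastolic):
    """Classify one blood-pressure reading (mm Hg) into the JNC7
    categories (Chobanian et al., 2003). The higher of the two
    component categories determines the overall classification.
    """
    def band_index(value, cutoffs):
        # Return the index of the first cutoff the value falls below;
        # values at or above all cutoffs land in the top category.
        for i, limit in enumerate(cutoffs):
            if value < limit:
                return i
        return len(cutoffs)

    # Index 0 = normal, 1 = prehypertension, 2 = stage 1, 3 = stage 2
    s = band_index(systolic, (120, 140, 160))
    d = band_index(diastolic, (80, 90, 100))
    labels = ("normal", "prehypertension",
              "stage 1 hypertension", "stage 2 hypertension")
    return labels[max(s, d)]
```

A reading of 118/76 is classified as normal, while 150/85 is stage 1 hypertension on the strength of the systolic value alone, reflecting the JNC7 emphasis on systolic pressure noted above.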
Vascular Dementia (VaD)

This is a dementia syndrome with primarily subcortical involvement that has a number of vascular etiologies. Symptoms necessary for this diagnosis are a topic of debate, with different criteria offered by different authors (S.A. Cosentino, Jefferson, et al., 2004). For example, some consider that a strategically placed infarct (e.g., in the left angular gyrus or medial thalamus) can produce dementia (Amar and Wilcock, 1996); others have proposed that evidence of two or more ischemic strokes accompanied by functional impairment is necessary for this diagnosis (Chui, Victoroff, et al., 1992); still others hold that diagnosis of VaD requires a decline in memory functioning (Roman, Tatemichi, et al., 1993). Thus the term “vascular dementia” lacks agreed-upon diagnostic criteria, resulting in significant differences in patient classification (Chui, Mack, et al., 2000; S.A. Cosentino, Jefferson, et al., 2004; Wetterling et al., 1996). As a result, the term vascular cognitive impairment was coined to encompass the various forms of cognitive impairment due to cerebrovascular disease (J.V. Bowler and Hachinski, 1995). VaD is less common than once thought. In a large autopsy series of dementia cases, 12% had dementia on the basis of infarcts alone (J.A. Schneider et al., 2007). Pure subcortical VaD is rare, as vascular disease often co-occurs with AD (S.A. Cosentino, Jefferson, et al., 2004).

Risk factors
White matter lesions may be present in older persons who have normal cognitive function for their age. However, in a longitudinal study of nondemented elderly patients, an increase over time in subcortical white matter hyperintensities was associated with memory decline (Silbert, Nelson, et al., 2008). These authors proposed that white matter changes should not be
considered a benign condition. Lacunar infarcts and subcortical arteriosclerosis share the common risk factors of hypertension, diabetes, abnormally high fatty content of the blood, obesity, and cigarette smoking.

Pathophysiology
The forms of VaD can be divided into large and small vessel disease. Large vessel disease includes emboli, thrombi, and atherosclerosis that can cause multiple infarcts, accounting for about 15% of VaD (Jellinger, 2008). The neuropsychological deficits associated with large vessel disease depend on the site and extent of cerebral lesions. Most large vessel infarcts affect the internal carotid artery blood supply to cortical association areas, but occlusions of the posterior cerebral artery and the anterior cerebral artery also occur (Wetterling, Kanitz, and Borgis, 1996). Symptoms often have an abrupt onset and may follow a step-wise decline in cognition along with increasing numbers and severity of neurological signs. This condition may be referred to as multi-infarct dementia. Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is a rare genetic disease producing extensive subcortical infarctions and leukoencephalopathy (white matter disease).

The types of small vessel disease are subcortical lacunes, strategic infarcts, watershed infarcts, and subcortical arteriosclerosis (Jellinger, 2008). Subcortical lacunes or microinfarcts (< 2 mm areas) primarily involve central white matter and subcortical structures such as the thalamus, basal ganglia, internal capsule, and brainstem. These vulnerable areas underlie parts of frontal lobe circuitry, so it is not surprising that patients typically exhibit signs of frontal system dysfunction, primarily deficits in executive behavior (C.L. Carey et al., 2008). Lacunes that lack obvious stroke-like symptoms—“silent strokes” or “silent brain infarcts,” in that they may not be discovered until autopsy—are surprisingly common. In a longitudinal study of cerebrovascular disease and aging, 33% of normal participants with a mean age of 73 had lacunar infarcts (C.L. Carey et al., 2008).
Silent lacunes in these subjects were associated with poorer performance on a composite measure of executive function that included tests of initiation/perseveration, letter fluency, reversed digit span, and visual memory span. Silent infarcts raise the risk of depression and AD (Vermeer, Longstreth, and Koudstaal, 2007). Lacunes also can produce neurological signs such as visual field defects, arm and leg sensory or motor disturbances, dysarthria, crying, small-stepped gait, and urinary incontinence (Chui, 2007;
Vermeer, Longstreth, and Koudstaal, 2007). Pseudobulbar palsy and pseudobulbar affect—disordered activities involving mouth movements (e.g., drooling, swallowing) and emotional lability—may occur with multiple bilateral lacunes. Strategic bilateral infarction of the anteromedial thalamus, which includes the dorsomedial nuclei, can produce an abrupt onset dementia syndrome of impaired memory, attention, and executive function, sometimes accompanied by marked apathy (Chui, 2007). Infarction of the inferior genu of the internal capsule is another site that can give rise to a strategic-infarct dementia (i.e., dementia resulting from a single lesion in a critical area), presumably because of a disruption of thalamocortical white matter tracts (Tatemichi, Desmond, et al., 1992). Watershed infarcts due to hypoperfusion at the distal ends of vessels in the territories between arteries may produce hippocampal or thalamic infarcts. The hippocampus is especially sensitive to hypoperfusion, which can result in hippocampal sclerosis or lacunes (Menon and Kelley, 2009).

Subcortical arteriosclerosis and Binswanger’s disease differ from lacunar conditions in that the onset is slow and insidious and they involve white matter lesions (Cummings and Mahler, 1991; Stuss and Cummings, 1990). Hypoperfusion and other disturbances of cerebral blood flow produce these chronic ischemia conditions, which can result in demyelination, axonal loss, and lacunar infarcts in the periventricular/deep and subcortical white matter (Filley, 1995, 2001; Jellinger, 2008). White matter hyperintensities show up on MRI scans. Periventricular white matter lesions, sometimes called leukoaraioses, can be quite extensive and may affect as many as 52% of multiinfarct patients, 61% of patients with Alzheimer disease, and more than a third of cognitively healthy individuals over age 50 (Kobari et al., 1990).

Cognitive and behavioral symptoms
The defining cognitive features of VaD are psychomotor slowing and executive dysfunction, often accompanied by depression (J.A. Levy and Chelune, 2007). Research criteria for subcortical VaD include a dysexecutive syndrome, deterioration from a previous higher level of cognitive function, evidence of cerebrovascular disease, and the presence or history of neurological signs consistent with subcortical VaD, such as hemiparesis, lower facial weakness, Babinski’s sign, sensory deficit, dysarthria, gait disorder, or extrapyramidal signs (Erkinjuntti et al., 2000). In one study, radiological evidence of abnormalities in at least 25% of cerebral white matter was needed before patients displayed dementia with deficits in executive function, visuoconstruction, memory, and language (C.C. Price, Jefferson, et al., 2005). VaD patients tend to retain awareness of their disabilities (DeBettignies et
al., 1990). Given this awareness, it is not surprising to find as many as 60% of these patients with depressive symptoms (Apostolova and Cummings, 2008, 2010; Cummings, Miller, et al., 1987). Threatening delusions, such as being robbed or having an unfaithful spouse, are likely to occur in half of these patients at some time in their course.

Treatment
The ideal treatment for many people with vascular risk factors is lifestyle modifications that include weight reduction, regular physical activity, a diet low in salt and saturated fat and rich in fruits and vegetables, moderation of alcohol consumption, and avoidance of cigarettes (Chobanian, Bakris, et al., 2003). Controlling both high blood pressure, especially systolic, and low blood pressure in elderly adults is important for reducing the risk of dementia (Qiu, Winblad, and Fratiglioni, 2005).
Migraine

The second most common neurological disorder, and ranked 19th among all diseases causing disability worldwide by the World Health Organization (International Headache Society, 2004), migraine is a headache condition affecting 10% to 12% of the adult population (Ferrari and Haan, 2002; Lipton, Bigal, et al., 2007). Prevalence is highest in the 30 to 39 age range and lowest in those 60 years or older (R.W. Evans, 2009). The term migraine implies a lateralized headache, although only 60% of migraine headaches occur unilaterally (Derman, 1994). Typically, headaches last four to 72 hours and have at least two of the following pain characteristics: unilateral location, pulsating quality, moderate to severe intensity, and aggravation by routine physical activity; associated nausea and/or photophobia and phonophobia are common. Aura, frequently associated with migraine, refers to the initial or presaging symptoms, which often are unpleasant sensations.

Classification of headaches has always been somewhat ambiguous. Patients can have more than one type of headache, their headaches may change in nature and frequency over their lifetime, and some headaches are not easily classified. In order to standardize the criteria for diagnosis of headaches and to facilitate the comparison of patients in various studies, a hierarchically constructed set of classification and diagnostic criteria was developed by the Headache Classification Committee of the International Headache Society (2004). In this classification system, the term migraine without aura replaces common migraine. Migraine with aura refers to classic migraine, a disorder with focal neurological symptoms clearly localizable to the cerebral cortex and/or brainstem. Variants of this condition include prolonged aura, familial hemiplegic migraine, basilar migraine, migraine aura without headache, and migraine with acute onset aura (Silberstein et al., 2002). More unusual migraine disorders have also been described.

Risk factors
Estimates of prevalence range from 12.9% to 17.6% in women and from 3.4% to 6.1% in men, with the female-to-male ratio peaking at age 42 (Lipton and Stewart, 1997; W.F. Stewart, Shechter, and Rasmussen, 1994). Migraine rates appear to vary with race: 24.4% for Caucasians, 16.2% for African Americans, and 9.2% for Asians, perhaps reflecting a genetic component (W.F. Stewart, Lipton, and Lieberman, 1996). Up to 61% of migraine is heritable (R.W. Evans, 2009). A link to chromosome 19p has been identified in familial hemiplegic migraine (Mathew, 2000). Mood disorders—depression, anxiety, and panic attacks—are amongst the most common comorbidities (Breslau et al., 1994; R.W. Evans, 2009; Silberstein, 2001). Epilepsy (Lipton, Ottman, et al., 1994; Silberstein, 2001; Welch and Lewis, 1997), stroke, and essential tremor (Silberstein, 2001) also tend to occur with migraine. The basis for these associations is not clear (Lipton and Silberstein, 1994; Merikangas and Stevens, 1997). The association may be bidirectional, with depression, epilepsy, stroke, and tremor sharing one or more common etiologies.

The notion of a migraine personality was introduced by H.G. Wolff (1937), but evidence does not seem to support it (Lishman, 1997). Although some studies report that migraine patients have a relatively high incidence of questionnaire responses associated with “neurotic signs” or “neuroticism” (e.g., Silberstein, Lipton, and Breslau, 1995), this research failed to take into account score inflation resulting from honest reporting of migraine symptoms and their everyday repercussions.

Various triggers can induce migraines. Foods such as cheese, chocolate, and alcohol—especially red wine and beer—as well as food additives (nitrates, aspartame, and monosodium glutamate) may precipitate a migraine in some individuals (Peatfield, 1995; Ropper and Samuels, 2009). Lack of sleep or too much, missing a meal, or stress can precipitate an attack (Lishman, 1997).
Other triggers are heat, high humidity, and high altitude (R.W. Evans, 2009). Some research has indicated that patients are more likely to have migraines on the weekend, perhaps due to habit changes such as consuming less caffeine, getting up later and sleeping longer, or reduced work-related stress (Couturier,
Hering, and Steiner, 1992; Couturier, Laman, et al., 1997), but others disagree (T.G. Lee and Solomon, 1996; Torelli et al., 1999). A fall in estrogen levels has been linked to the production of menstruation-related migraines, while sustained high levels of estrogen in the second and third trimesters of pregnancy may lead to their reduction (Silberstein, 1992). Migraines may be better, worse, or unchanged with oral contraceptives, menopause, and postmenopausal hormone replacement therapy (MacGregor, 1997). Some drugs (e.g., nitroglycerine, histamine, reserpine, and hydralazine) can be triggers. Even weather changes, high altitudes, and glaring lights have been implicated (Mathew, 2000). Pharmacologic intervention for migraine and its comorbidities should be individualized for each patient (Silberstein, 2001).

Pathophysiology
A number of theories have attempted to account for vulnerability to migraine, but none is yet fully successful, perhaps owing to the many different antecedents of this condition. The vascular theory of migraine proposed that the aura of a migraine is associated with intracranial vasoconstriction and the headache with a sterile inflammatory reaction around the walls of dilated cephalic vessels (J.R. Graham and Wolff, 1938; Lauritzen, 1994). This theory is supported by the pain’s pulsating aspect, the occurrence of headaches with other vascular disorders, the successful treatment of some headaches with vasoconstrictors, and evidence pointing to the blood vessels as the source of pain. Yet the vascular theory does not explain all aspects of migraine. For instance, in migraine with aura there appears to be a wave of oligemia (reduced blood flow) similar to the “spreading cortical depression of Leao,” which starts in the posterior part of the brain and spreads to the parietal and temporal lobes at the rate of 2 to 3 mm/min for 30 to 60 minutes and to a varying extent (Lauritzen, 1987; Leao, 1944). This spreading oligemia follows the cortical surface rather than vascular distributions (Lauritzen, 1994). Thus arterial vasospasm does not appear to be responsible for the reduced blood flow (Goadsby, 1997; Olesen et al., 1990). Other hypotheses include increased platelet aggregability with microemboli, abnormal cerebrovascular regulation, and repeated attacks of hypoperfusion during the aura (R.W. Evans, 2009).

The neurogenic theory of migraine proposes that the headache is generated centrally and involves the serotonergic and adrenergic pain-modulating systems (J.S. Meyer, 2010). Several lines of evidence implicate serotonin: its symptomatic relief of headaches, its drop in blood levels during migraine, and the production of migraines by serotonin antagonists (Sakai et al., 2008). Enhanced serotonin release increases the release of neuropeptides, including
substance P, which results in a neurogenic inflammation of intracranial blood vessels and migraine pain (Derman, 1994). Pain appears to arise from vasodilation, primarily of the intracranial blood vessels, and from activation of the central trigeminal system as well (Mathew, 2000). Cerebral atrophy rates of 4% to 58% have been reported in migraineurs, but many of these CT and MRI interpretations may have been based on subjective criteria (R.W. Evans, 1996). Some imaging studies found an incidence of MRI abnormality no higher than in control subjects (deBenedittis et al., 1995; Ziegler et al., 1991). Gray matter shrinkage in areas associated with pain transmission has been reported (Schmidt-Wilcke et al., 2008) as have other subtle gray matter abnormalities (Rocca, Ceccarelli, et al., 2006). White matter abnormalities on MRI are seen in 12% to 46% of migraine patients, particularly involving the frontal region, while occurring in 2% to 14% of headache-free controls (R.W. Evans, 1996; Filley, 2001). Rates of these abnormalities are relatively high even for migraineurs under age 50 having no other risk factors (Fazekas et al., 1992; Igarashi et al., 1991). Various explanations for the presence of white matter abnormalities in migraine patients include increased water content due to demyelination or interstitial edema, multiple microemboli with lacunar infarcts, chronic low-level vascular insufficiency resulting from vascular instability, and release of vasoconstrictive substances such as serotonin (deBenedittis et al., 1995; Igarashi et al., 1991). Recent imaging studies report deep as well as periventricular white matter lesions (appearing as hyperintensities) in many migraine patients with a subset of them accumulating more lesions over time; cognitive decline was not associated with these lesions: “Migraine is certainly not a risk factor for dementia” (Paemeleire, 2009, p. 134).
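As a back-of-envelope illustration of the propagation figures cited above (2 to 3 mm/min over 30 to 60 minutes), the implied distance the oligemic wavefront travels across the cortical surface can be computed directly. This sketch is ours, not from the cited literature, and the function name is purely illustrative.

```python
# Hedged arithmetic sketch (not from any cited study): distance covered by
# the spreading oligemia described above, at 2-3 mm/min over 30-60 minutes.
def spread_distance_mm(rate_mm_per_min: float, duration_min: float) -> float:
    """Distance = rate x time, measured along the cortical surface."""
    return rate_mm_per_min * duration_min

lower = spread_distance_mm(2, 30)   # slowest rate, shortest duration
upper = spread_distance_mm(3, 60)   # fastest rate, longest duration
print(f"{lower:.0f}-{upper:.0f} mm")  # 60-180 mm of cortical surface
```

That is, the reported parameters imply a wavefront traversing roughly 6 to 18 cm of cortical surface, consistent with a disturbance that begins posteriorly and reaches the parietal and temporal lobes to a varying extent.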
Transient global amnesia (TGA) is associated with an increased rate of migraine, but this disorder differs from common migraine in age of onset and in having fewer symptoms such as nausea and headache. TGA tends to occur in middle-aged to elderly individuals; it usually lasts for a few hours but generally less than a day. Patients typically have total (rarely partial) amnesia for events during the attack, during which many ask the same questions repetitively and are disoriented for time and place (D. Owen et al., 2007). Complex routine tasks may be carried out during the episode. Whether stressful events and activities are precipitants is unclear. Focal neurologic signs are absent. The suggestion has been made that TGA and migraine are independent conditions involving a similar mechanism of paroxysmal dysregulation (Nichelli and Menabue, 1988; Schmidtke and Ehmsen, 1998). Etiologies for TGA other than migraine have been proposed, such as epilepsy and paradoxical emboli (D. Owen et al., 2007;
Marin-Garcia and Ruiz-Vargas, 2008).
The migraine condition
Hours and even days before headache onset, migraineurs may experience a prodrome that involves one or more symptoms such as depression, euphoria, irritability, restlessness, fatigue, drowsiness, frequent yawning, mental slowness, sluggishness, increased urination, fluid retention, diarrhea, constipation, food craving, anorexia, stiff neck, a cold feeling, photophobia, phonophobia, and hyperosmia (Derman, 1994; Schoonman et al., 2006; Silberstein and Lipton, 1994). An aura of neurological symptoms localizable to the cerebral cortex or brainstem occurs around 5 to 30 minutes before the headache in about 20% to 25% of migraine episodes (J.K. Campbell, 1990; Derman, 1994; Silberstein and Lipton, 1994). Homonymous visual auras are most common and include scintillating lights forming a zig-zag pattern (teichopsia), scotomas due to bright geometric lights or loss of vision, or blurred or cloudy vision (Rossor, 1993). Objects may even change in shape or size (micropsia or macropsia) or zoom in and out. Unilateral sensory disturbances such as paresthesias and dysesthesias are less common, as are motor disturbances that include weakness of one limb or half the body (monoplegia, hemiplegia) and language deficits (Derman, 1994; J.S. Saper et al., 1993). Diplopia, vertigo, dysphagia, and ataxia provide evidence of brainstem involvement. Usually the aura lasts less than an hour, but it can continue for several days. It is possible to have the aura without a headache. The more common unilateral pain during the headache phase typically involves one periorbital region, cheek, or ear, although any part of the head and neck can be affected (Derman, 1994). Pain is generally associated with nausea, less often with vomiting. Facial pallor, congestion of face and conjunctiva, nasal stuffiness, light-headedness, painful sensations, impaired concentration, memory impairment, scalp tenderness, or any of the prodromal phase symptoms may occur.
Orthostatic hypotension and dizziness have been reported (Mathew, 2000). Pain can be more or less severe and frequently has a pulsating quality. It may be aggravated by exercise or simple head movement (Derman, 1994; Lishman, 1997; Rossor, 1993; Silberstein and Lipton, 1994). The headache lasts a few hours to several days. Migraineurs often feel tired, listless, and depressed during the succeeding hours to days though the converse—feeling refreshed and euphoric—sometimes occurs (Derman, 1994). Migraines can develop at any time but begin most frequently on arising in
the morning. In a large study, 31% of migraineurs reported an attack frequency of three or more per month and 54% reported severe impairment or the need for bed rest (Lipton, Bigal, et al., 2007). Migraines often compromise functioning for hours to days (J.S. Meyer, 2010) and, in rare instances, are life-threatening (Ferguson and Robinson, 1982). Very occasionally they may be associated with permanent neurological sequelae from ischemic and hemorrhagic stroke (Estol, 2001; Kolb, 1990; Olesen et al., 1993). Migraine does appear to be a small risk factor for stroke (Buring et al., 1995; Etminan et al., 2005; Merikangas et al., 1997) although the relationship between stroke and migraine is not fully understood (Broderick, 1997; Milhaud et al., 2001; K.M.A. Welch, 1994). Concern has been raised about an increased risk for ischemic stroke in women of child-bearing age who have migraine with aura (Donaghy et al., 2002; Milhaud et al., 2001; Tzourio et al., 1995).
Cognition
Findings from neuropsychological studies have been inconsistent. The performance of college students with classic and common migraines was similar to that of nonmigrainous students on the Halstead-Reitan Battery (HRB) as well as on memory tests (Burker et al., 1989). Sinforiani and his colleagues (1987) also reported no impairment on any of a set of tests that assessed a wide range of cognitive functions. These patients had normal CT scans, EEG findings, and neurological examinations, and had not used any prophylactic treatment in the last month. Leijdekkers and coworkers (1990) studied women who had migraine with and without aura, comparing their performances on the Neurobehavioral Evaluation System (NES) to those of healthy controls, and found no group differences on measures of attention, learning and memory, and motor tasks. A population-based study of Danish twins found no difference between the affected and nonaffected twin pairs on tests of verbal fluency, digit span, symbol digit substitution, and delayed word recall (Gaist et al., 2005). Similar findings have been reported for older migraine patients compared with matched controls using digit symbol, arithmetic problem solving, and spatial tests (Pearson et al., 2006). Also encouraging are data from a prospective longitudinal community-based study in which persons with migraine showed a slight increase in delayed recall scores while participants without migraine showed a slight decline when reexamined with a modified version of the Rey Auditory Verbal Learning Test over 12 years of follow-up (Kalaydjian et al., 2007). The group differences were small and likely clinically insignificant. The authors do not state the nature of the modification of the test but the mean
delayed recall scores for the groups (5.41 for migraineurs and 4.58 for nonmigraineurs) would be unusually low for the mean ages (47 and 52, respectively) on the standard administration. Using a small “mini” test battery (Mini-Mental State Examination + Cognitive Capacity Screening Examination), J.S. Meyer (2010) also found no evidence of cognitive decline in his migraineurs, but—not surprisingly—documented poorer performances for subjects examined when having a migraine than when pain free. In contrast, Hooker and Raskin (1986) found significantly higher Average Impairment Ratings on the HRB in patients with classic and common migraines compared to normal controls. Performance was particularly poor on several tests of motor speed, dexterity, tactile perception, delayed verbal recall, and aphasia screening. On many of the tests, mean scores of the migraine patients were worse than the control group’s means, but the large variances—most notably on tests with skewed distributions (e.g., Trail Making Test-B)—obliterated possible group differences (see Lezak and Gray, 1991, for a discussion of this statistical problem). Slower performance was obtained by migraineurs compared to controls on a computer set-shifting task although there was no difference between the groups on the Stroop test (Schmitz et al., 2008). Subject selection seems to be the factor that most clearly distinguishes the studies reporting cognitive deficits from those that do not. In the Hooker and Raskin (1986) and Zeitlin and Oddy (1984) studies, some of the patients were using prophylactic or symptomatic treatments but this did not appear to account for the group differences. Yet these patients were receiving medical attention for their migraines, raising the possibility that they were experiencing more serious migraine-related symptoms and side effects. However, B.D.
Bell and his colleagues (1999) recruited mostly patients with common migraines from specialty pain clinics and found that only about 10% of them showed mild cognitive impairment on five or more of 12 test variables. The migraineurs in the studies that found no differences between them and control subjects were mostly mildly affected individuals (e.g., not seeking medical attention, normal EEG records).
Treatment
Common analgesics are effective for many if they are taken at the earliest onset of headache. Serotonin agonists have proven useful for treating some migraines. Prophylactic pharmacotherapy involving β-adrenergic blocking agents, tricyclic antidepressants, calcium channel blockers, 5-hydroxytryptamine-2 antagonists, nonsteroidal anti-inflammatory medications,
antiepileptics, and magnesium replacement is indicated for other migraines (Ferrari and Haan, 2002; Mathew, 2000). Although botulinum toxin is used as a prophylaxis, research shows that it is no more effective than placebo (Shuhendler et al., 2009). Optimal treatment requires a differential diagnosis of migraine from tension-type headaches and cluster headaches. Other disorders such as aneurysms, subarachnoid hemorrhage, subdural hematoma, brain tumor, or idiopathic intracranial hypertension need to be ruled out as well (Mathew, 2000).
EPILEPSY
Etiology and diagnostic classifications
Epilepsy is not a single disease or condition but, more precisely, an episodic disturbance of behavior or perception arising from hyperexcitability and hypersynchronous discharge of nerve cells in the brain that can be associated with a variety of etiologies. The different syndromes associated with epilepsy are often collectively referred to as “epilepsies” to reflect this heterogeneity. The underlying causes are many, such as scarring or brain injury from birth trauma, traumatic brain injury, tumor, the consequences of infection or illness (e.g., complex febrile seizures), metabolic disorder, stroke, progressive brain disease, and a host of other conditions, including genetic factors. Many forms are simply idiopathic as no known source can be established. Epilepsy is among the most prevalent of the chronic neurological disorders, affecting approximately 1% of the U.S. population or some 2.5 million Americans (St. Louis and Granner, 2010); its incidence reaches 3% by age 75 (G.P. Lee, 2010). It is estimated that some 30 to 50 million persons worldwide have this condition (Wendling, 2008). Epilepsy is about equally prevalent for the sexes until older age, when elderly men have a somewhat higher incidence, making epilepsy the third most common disease affecting the brain in the elderly (Werhahn, 2009). Approximately 30% of new cases are younger than 18 at diagnosis (G.L. Holmes and Engel, 2001). The annual total cost for the roughly 2.5 million Americans with epilepsy is on the order of tens of billions of dollars. Indirect costs due to the psychosocial morbidity of epilepsy account for roughly 85% of this total with direct costs concentrated among patients with intractable epilepsies (Begley et al., 2000): it has been estimated that about 30% of patients are pharmacoresistant, even with newer-generation antiepileptic medications. The public health implications of epilepsy are substantial and have been documented through targeted initiatives
and conferences sponsored by the National Institute of Neurological Disorders and Stroke (2002), the Centers for Disease Control and Prevention (1997; see also computer search for: epilepsy + CDC), and the Agency for Healthcare Research and Quality (2001). An epileptic seizure is a sudden, transient alteration in behavior caused by an abnormal, excessive electrical discharge in the brain due to a temporary synchronization of neuronal activity occurring for reasons which are not clearly understood (St. Louis and Granner, 2010). The lifetime prevalence of experiencing a single seizure is approximately 10%. Seizures can arise from any condition that heightens the excitability of brain tissue. They are most often provoked by either extrinsic (systemic) or intrinsic (brain) factors. Provoked seizures may occur with high fever, alcohol or drug use, alcohol or drug withdrawal, metabolic disorders, or brain infections (e.g., brain abscess, cerebritis, encephalitis, acute meningitis). Epilepsy, in contrast, is characterized by recurrent, unprovoked seizures. The diagnosis of epilepsy requires the presence of at least two unprovoked seizures (i.e., occurring in the absence of acute systemic illness or brain insult). The main clinical signs and symptoms of epilepsy include ictal (during a seizure), postictal (immediately following a seizure), and interictal (between seizures) manifestations. The nature of ictal behavioral disturbances depends on the location of seizure onset in the brain and its pattern of propagation (St. Louis and Granner, 2010). Unfortunately, the diagnosis of “epilepsy” continues to carry with it a certain amount of psychosocial stigma; consequently, the term seizure disorder is often used to avoid the negative social connotation.
The stigma dates back to antiquity—the term “epilepsy” stems from the Greek “epilepsia,” which refers to the notion of “being seized or taken hold of,” reflecting the erroneous and all too persistent belief that epileptic seizures have supernatural or spiritual causes. Epilepsies are generally classified along two dimensions—whether they are focal or generalized, and whether their etiology is known, suspected, or unknown (International League against Epilepsy, 1989). Seizures that have a localized area of onset (i.e., begin with symptoms of a localizable brain disturbance) are called partial or focal; seizures that appear to involve large regions of both hemispheres simultaneously are referred to as generalized. They may be characterized in three major etiologic categories: Idiopathic epilepsies have no known etiology and usually are not associated with any other neurological disorders; many of these patients do not have neuropsychological deficits (Perrine, Gershengorm, and Brown, 1991). Etiologies of cryptogenic epilepsy are also unknown, but neurological and
neuropsychological functions are usually not normal. Seizures from a known etiology are called symptomatic. In clinical practice, however, a syndrome diagnosis is often given (e.g., temporal lobe epilepsy [TLE]), which more narrowly characterizes individual patients with respect to prognosis and treatment options (Wyllie and Lüders, 1997). A classification system that attempted to combine EEG, etiology, and syndrome presentation (Hamer and Lüders, 2001) has not gained wide acceptance. A new classification system and terminology has been proposed by A.T. Berg and colleagues (2010). St. Louis and Granner (2010) emphasize that the seizure type and epilepsy syndrome diagnoses are crucial for patients with epilepsy because this information guides therapy (e.g., drugs, surgery) and determines prognosis. Neuroimaging is now commonly used to assist in diagnosing seizure type and epilepsy syndrome (la Fougère et al., 2009; M. Richardson, 2010). The two principal types of epileptic seizures are partial and generalized seizures. Partial seizures—also called “focal” or “localization-related”—arise from a specific area of the brain, may be simple (i.e., without alteration of consciousness), and may involve only one mode of expression (motor, somatosensory, autonomic, or psychic). Complex partial seizures, by definition, involve altered consciousness. In addition, it is not uncommon for a partial seizure to progress. For example, a simple partial seizure may be preceded by an aura (premonitory sensations common in true epilepsy) and then develop into a complex partial seizure. This may subsequently progress to involve the entire brain, a process called secondary generalization (e.g., producing a secondary generalized tonic-clonic—successive phases of muscle spasms—seizure). Complex partial seizures most commonly originate from the temporal lobes, and second most commonly from the frontal lobes.
In practice, however, it is sometimes difficult to distinguish frontal lobe from temporal lobe seizures due to the direct bidirectional projections between these areas. Primary generalized seizures involve all or large portions of both hemispheres beginning at seizure onset. They may be nonconvulsive, appearing as absence [pronounced “ahb-sawnce”] spells (or petit mal [pronounced “peh-tee mahl”] attacks) in which consciousness is briefly lost while eyes blink or roll up; or convulsive, which involves major motor manifestations (generalized tonic-clonic seizures, also called grand mal seizures). The term “absence” is reserved for nonconvulsive primary generalized seizures and is not used when loss of awareness occurs with complex partial seizures. The distinction between partial (focal) and generalized seizures has practical
implications since different seizure types often respond to different anticonvulsant medications (antiepileptic drugs: AEDs). Specific EEG patterns are associated with many epilepsy syndromes and assist in formal diagnosis (e.g., 3 Hz spike and wave complexes in absence seizures; see Klass and Westmoreland, 2002), although some seizure patients may at times have normal EEG recordings (Muniz and Benbadis, 2010). EEG monitoring is also important for determining if a patient’s spells may be “psychogenic” (see p. 249) or due to a non-neurological condition such as fainting (syncope). EEG characteristics are also very important in evaluations of a patient’s candidacy for epilepsy surgery (Cascino, 2002), and for inferring the anatomical localization of seizure origins (Rossetti and Kaplan, 2010).
Risk factors and vulnerabilities
Genetic predisposition. Epilepsy may run in families, appearing either in conjunction with an inheritable condition which makes the patient seizure-prone or simply as an inherited predisposition to seizures (Lopes-Cendes, 2008). Different seizure types can occur in family members who have epilepsy (Berkovic et al., 1998; Ottman et al., 1998). Genetic factors appear to be more important in the generalized epilepsies but also play a role in some partial epilepsies (Berkovic et al., 1998). Studies of twins have shown a higher concordance rate among monozygotic compared to dizygotic twins. However, the mode of inheritance is complex and varies with seizure types and epilepsy syndromes: it has been estimated that there are at least 11 human “epilepsy” genes, and many more are known from animal models (M.P. Jacobs et al., 2009). In pointing out that the importance of genetic heterogeneity has been relatively neglected, Pal and colleagues (2008) noted that very few genetic associations for idiopathic epilepsy have been replicated. Evidence is accumulating that pathogenesis of many forms of epilepsy reflects a channel pathology at the microphysiologic level, with K+, Na+, or Ca2+ channels being affected in different types of epilepsies (Kaneko et al., 2002). Developmental considerations. Seizure incidence over the human lifespan is highest during infancy and childhood. Each year, about 150,000 children and adolescents in the United States have a single, unprovoked seizure; about one-fifth of these eventually develop epilepsy (Zupanc, 2010). Many studies have sought to determine what factors influence the development of seizures and the phenomenon of epileptogenesis in the developing brain (Rakhade and Jensen, 2009). Epidemiological studies have linked prolonged febrile seizures—which
are most common in early life—to the development of temporal lobe epilepsy, but whether long or recurrent febrile seizures cause temporal lobe epilepsy has remained unresolved (Dube et al., 2009). Seizures induce different molecular, cellular, and physiological consequences in the immature brain, compared to the mature brain; e.g., age-dependent differences in how seizures alter cell birth occur in the dentate gyrus (B.E. Porter, 2008). Children also respond differently to AEDs than do adults, and treatment of pharmacoresistant epilepsy in children can be especially complicated (Rheims et al., 2008; Wheless et al., 2007). Recent reviews suggest that newer-generation AEDs have about the same effectiveness over seizure control in children as the older-generation drugs but are tolerated better and may have fewer side effects than the older drugs (Connock et al., 2006). Post-traumatic epilepsy. Traumatic brain injury is a major risk factor for epilepsy, and posttraumatic epilepsy represents a major societal problem (see pp. 192, 246–247). Posttraumatic epilepsy likely involves numerous pathogenic factors, but two factors termed “prime movers” have been identified—disinhibition and development of new functional excitatory connectivity (Prince et al., 2009). Thus, at the network level, epilepsy may be understood as a neural system’s abnormal learned response to repeated provocations (D. Hsu et al., 2008). However, the mechanisms by which a brain injury can lead to epilepsy are still poorly understood (Aroniadou-Anderjaska et al., 2008). The risk of developing epilepsy following penetrating head wounds is especially high (see p. 192). Interestingly, World War II survivors of missile wounds to the brain had a notably lower incidence of epilepsy (25% to 30%) than Vietnam War survivors (53%) (Newcombe, 1969; Salazar, Jabbari, and Vance, 1985; A.E. Walker and Jablon, 1961).
This could reflect a lower survival rate for more severely injured patients as TBI in itself increases the risk of developing epilepsy, and severity contributes significantly to that risk (Jennett, 1990). Brain contusion, subdural hematoma, skull fracture, loss of consciousness or amnesia for more than one day, and an age of at least 65 years increased the risk of developing post-traumatic seizures in a civilian TBI patient study (Annegers, Hauser, et al., 1998). In general, the presence of any focal lesion, such as intracerebral hemorrhage and hematomas, increases the likelihood of post-traumatic epilepsy (Aroniadou-Anderjaska et al., 2008; D’Alessandro et al., 1988; Jennett, 1990). A slight seizure risk for patients following mild TBI does persist after five years (Annegers, Hauser, et al., 1998). In contrast, severe TBI is associated with a much higher posttraumatic seizure risk that is much
more long-standing; the chance of a first unprovoked seizure more than 10 years after the injury also increases with TBI severity (J. Christensen et al., 2009). Although a seizure in the first week after a penetrating head injury is not necessarily predictive of eventual post-traumatic epilepsy, 25% of TBI patients who have a seizure in the first week will have seizures later. Only 3% of patients who do not have an early seizure will develop late-onset seizures. The cognitive impairment seen in post-traumatic seizure patients probably reflects the effects of the brain injuries that give rise to seizures, rather than effects of the seizures per se (Haltiner et al., 1996; Pincus and Tucker, 2003). Other symptomatic epilepsies. Nearly any kind of insult to the brain can increase susceptibility to seizures (Aroniadou-Anderjaska et al., 2008; Lishman, 1997). Approximately 10% of all stroke patients experience seizures (T.S. Olsen, 2001; I.E. Silverman et al., 2002), with roughly half of these occurring during the first day and the other half peaking between 6 and 12 months post-stroke. Seizures occur three times more often following hemorrhagic stroke than ischemic stroke and are usually associated with cortical involvement. Few stroke patients (3%–4%) develop epilepsy; those with late-onset seizures are at greater risk (Bladin et al., 2000). Epilepsy can also occur with CNS infections, brain tumors, and degenerative dementia (Annegers, 1996), including Alzheimer’s disease (Palop and Mucke, 2009). Brain inflammation can contribute to epileptogenesis and cause neuronal injury in epilepsy (Choi and Koh, 2008). The challenges of “growing old with epilepsy” are significant, as persons with chronic epilepsy are exposed to numerous risk factors for cognitive and behavioral impairment (Hermann, Seidenberg, et al., 2008).
Precipitating conditions.
Although most seizures happen without apparent provocation, some conditions and stimuli are associated with increased seizure likelihood. The disinhibiting effects of alcohol can provoke a seizure, as can the physiological alterations that occur with alcohol withdrawal during the “hangover” period and with alcohol interactions with medications (M. Hillbom et al., 2003; Kreutzer, Doherty et al., 1990). Alcohol withdrawal seizures usually develop after prolonged alcohol abuse; the alcoholic patient suddenly stops drinking and generalized convulsions typically occur 48 to 72 hours later. Physical debilitation, whether from illness, lack of sleep, or physical exhaustion, increases the likelihood of seizures. In some women with epilepsy, seizure frequency varies with the menstrual cycle (i.e., catamenial epilepsy) (Reddy, 2009; Tauboll et al., 1991). This phenomenon appears to be related to
the ratio of estrogen to progesterone. Emotional stress, too, has been implicated as a provocative factor, and voluntary and spontaneous changes in behavior and thinking may also bring on seizures (Fenwick and Brown, 1989). Reflex epilepsy refers to epilepsies characterized by a specific mode of seizure precipitation, the most common of which is photosensitivity (Ferlazzo et al., 2005; Zifkin and Kasteleijn-Nolst Trenite, 2000). Video games and television, too, have been purported to trigger seizures (Badinand-Hubert et al., 1998; Ricci et al., 1998).
Cognitive functioning
Behavior and cognition in epilepsy patients can be affected by multiple factors, including: seizure etiology, type, frequency, duration, and severity; cerebral lesions acquired prior to seizure onset; age at seizure onset; ictal and interictal physiological dysfunction due to the seizures; structural cerebral damage due to repetitive or prolonged seizures; hereditary factors; psychosocial conditions; and antiepileptic drug effects (Elger et al., 2004). As a very general characterization, patients with epilepsy tend to have impaired cognition compared to matched nonepileptic comparison participants (Dodrill, 2004; Vingerhoets, 2006), although there are many exceptions. Seizure etiology is an important determinant of cognitive status (Perrine et al., 1991). Patients with seizures due to progressive cerebral degeneration typically have generalized cognitive impairment, patients with mental retardation have an increased incidence of epilepsy which is likely to be refractory (i.e., medication resistant) (Dodrill, 1992; Huttenlocher and Hapke, 1990), and patients with seizures due to a focal brain lesion may exhibit a specific neuropsychological pattern of deficits. In contrast, patients with idiopathic epilepsy are more likely to have normal mental abilities. Similarly, seizure type is strongly associated with cognitive performance (Huttenlocher and Hapke, 1990). Patients with juvenile myoclonic epilepsy (JME) showing classic 3 Hz spike and wave absence usually have normal cognitive abilities interictally; children with infantile spasms have generally depressed neuropsychological profiles. Earlier seizure onset age is associated with greater cognitive impairment (Hermann, Seidenberg, and Bell, 2002). 
However, on the flip side of this coin, early onset has been identified as a protective factor for cognitive side effects from anterior temporal lobectomy surgery, perhaps due to neural reorganization prompted by early onset seizures, or by the neural insult that gave rise to the seizures in the first place (e.g., Yucus and Tranel, 2007). Many of the epilepsies of childhood are fairly benign, especially in regard to cognitive functioning (Panayiotopoulos et al.,
2008). Focal seizures and cognitive dysfunction. Focal seizures originate from one side of the brain, although seizure activity may subsequently spread to other brain areas. In some cases, patients with focal seizure onset display a pattern of test performance like that of patients with nonepileptogenic lesions in similar locations. Thus, seizure onset from the left hemisphere may be associated with impaired verbal functions, such as verbal memory deficits and compromise in verbal abstract reasoning. In contrast, patients with right hemisphere seizure onset are more likely to display visuoperceptual, visual memory, and constructional disabilities. However, the magnitude of the deficits is often less than with comparable nonepileptic lesions. Atypical cerebral language reorganization resulting from early seizure onset may affect the lateralizing and localizing patterns on neuropsychological tests (S. Griffin and Tranel, 2007; Loring, Strauss, et al., 1999; Seidenberg, Hermann, Schoenfeld, et al., 1997). In addition, many AEDs depress neuropsychological test performance, particularly for measures that are timed or have a prominent motor component (Dodrill and Temkin, 1989; Loring, Marino, and Meador, 2007; Meador, 1998a,b). The magnitude of lateralized behavioral deficits may be more pronounced when testing occurs during the immediate postictal period (Andrewes, Puce, and Bladin, 1990; Meador and Moser, 2000; Privitera et al., 1991). A review of relevant literature can be found in Loring (2010). Memory. Memory and learning disorders are common among epilepsy patients (Helmstaedter and Kurthen, 2001; G.P. Lee and Clason, 2008; Milner, 1975). They become most pronounced with temporal lobe epilepsy, reflecting the degree of medial temporal lobe pathology (Helmstaedter, Grunwald, et al., 1997; Rausch and Babb, 1993; Trenerry, Westerveld, and Meador, 1995). 
Material-specific memory deficits occur primarily for verbal memory in association with left TLE; the association between right TLE and visuospatial, nonverbal memory deficits is less consistent (Barr, Chelune, et al., 1997; Hermann, Seidenberg, Schoenfeld, and Davies, 1997; T.M. Lee, Yip, and Jones-Gotman, 2002). As with other neuropsychological functions, a risk to memory with some AEDs increases with multiple medications (polypharmacy) (Meador, Gilliam, et al., 2001). Yet, many memory complaints by p