We've been talking about the number one question professionals ask about report-writing: "How do I write shorter, more reader-friendly reports?" Now let's talk about the second most commonly-asked question:
How do I write better recommendations?
Let's talk about a 10-step process for interpreting assessment data that leads directly to evocative case conceptualization and practical recommendations. Each step will get its own post in this series. For now, we're just going to preview the 10 steps.
These 10 steps are tips, techniques, and suggestions I've found helpful in my clinical work. They are the steps that those I consult with or supervise have found most useful in honing their ability to conceptualize cases and write meaningful recommendations for families.
I know. Ten steps sounds like a lot of steps.
But we’ll find that going through the 10 steps makes everything else so much easier. Once we do, the other pieces fall into place. We'll write shorter reports, and we'll write them faster. Our feedback sessions will hold more power.
And our recommendations will be easier to think of, and more useful for families.
Here's a little more about each step.
Prior to testing:
Step 1. Re-design your referral questions
Generic, static questions lead to generic, static answers. When you partner with families, children, and referral sources to develop meaningful, dynamic questions, you’ll naturally provide meaningful, dynamic answers.
This isn’t the same old “just answer the referral question in your reports” advice. This is about asking more interesting, useful questions. Questions which will change the quality of the answers you uncover.
Step 2. Go through your pre-flight checklist
Every textbook on child assessment emphasizes taking a wider view. One that interprets the child within their full context. The context that includes their family, school, and community. A matrix that includes physical health and emotional safety, and has room for individual differences in personality and temperament. A delightfully tangled mix of strengths, interests, fears, hopes, dreams, and child-environment match.
Yet, when staring at a lot of “hard data” in the form of a summary sheet of test scores, it’s easy to lose sight of that harder-to-quantify context. It’s easy to reduce the child to their pattern of neurocognitive strengths and weaknesses. Having a [literal] checklist helps bring all that background to the foreground, enriching your evaluation.
While You’re Testing:
Step 3. Measure everything you’re measuring
Like every professional who works with children, you’re skilled at combining data from many different sources. You already – and probably automatically – integrate a vast array of data from interviews, record review, collateral sources, and questionnaires to reach conclusions. What child assessment specifically adds to this pool of info is actual time spent testing and interacting in a structured way with the child. So make the most of that unique source of data.
This step is about measuring the 5 streams of information you get while you’re testing: content, process, behavior, interaction, and “white space.” In other words, what they said and did in response to the tests. How they approached the tests. How they behaved during the testing situation. How they interacted with you and in response to your actions. And what I call the “white space” – what was missing or what you didn’t see. It's about using these streams of info to inform how you're thinking about the child and what you'll recommend.
Step 4. Evaluate while you're evaluating
We know assessment is the process of generating and testing hypotheses. But what does that mean in the context of a real evaluation? Say a child gets a 127 on the Visual Spatial Index of the WISC-V and a 92 on the Fluid Reasoning Index. Is that 35-point gap important? Can it tell you something about her or guide your recommendations? We know that, by itself, the gap is just a singular data point and does not “mean” anything. But can these scores propel you towards testable hypotheses about what will help this child thrive?
This step is about using hypothesis testing as an effective and pragmatic assessment tool. It involves conducting 2 types of task analysis to generate hypotheses. And then integrating the data you obtained in Step 3 to refine, refute, or re-contextualize your hypotheses. Followed by testing your hypotheses in a systematic way. For this step to work well, you must discard many more hypotheses than you keep.
Step 5. Attend to the “conditions under which”
When you assess a child and find she has a cognitive, academic, or interpersonal weakness, it’s rare to discover a complete lack of skill in that area. A child who has trouble paying attention is not fundamentally unable to focus. Instead, there are conditions under which she has a harder time concentrating. There are also conditions under which she finds herself riveted.
When you help the parents, teachers, or child see the “conditions under which” she shines, and the conditions under which she struggles, you provide them with so much more than a problem list. You give them a roadmap. They can see the dynamic range of behavior that you see. They can predict when she'll need more support. And they’ll know intuitively how to build the conditions under which she’ll thrive. This step is about methodically assessing those “conditions under which.”
Note: I’ve borrowed the idea of “conditions under which” from Bram & Peebles. We’ll also be talking about Finn’s related concept of “assessment interventions” here (see sources at end).
Step 6. Checks and Balances
Hypothesis testing and inference-making are essential steps in the assessment process. They are also fertile ground for cognitive biases and projection. Your intuition and clinical wisdom are of paramount importance in the process, so long as they are reined in by the rigor of science.
This step is about checking yourself before you wreck yourself. It's about using your knowledge of base rates, pathognomonic signs, normal variability, and error variance as "guard rails" around your intuition. It involves subjecting your "internal norms" to disciplined data collection. It's about searching for the 4 factors that help us feel confident in our inferences: repetition, convergence, representativeness, and singularity. It's about reviewing your pre-flight checklist.
When you ground your clinical wisdom in empirical methods, families get the best of your art and your science.
Step 7. Themes, Contradictions, and White Space
This step is where the magic happens. This is when you search for themes and patterns in the data. It's where you study the contradictions so you can find a hypothesis that better fits all the data. It's where you see what's missing -- the "white space" -- that further refines your theories.
In Psychological Testing That Matters, Bram & Peebles urge us to "develop a theory for that particular person, which uniquely explains how he or she works," rather than fit a child's complexity into a pre-existing box. This step is where you'll combine all your previous steps in deductive reasoning. You'll vigorously mix them with Erikson's concept of "disciplined subjectivity." Out of this mixture you'll create something unique and uniquely helpful.
Step 8. Make a Rectangle
You’re familiar with the Rey-Osterrieth Complex Figure. It’s a bunch of details, connected into a hard-to-describe whole. The Rey-O has so many details it can be overwhelming at first. At least, until you see that big rectangle. Once you get that gestalt, you can easily add on, complexify, and fill in. You can see how the rectangle divides into smaller sections. You can see where there are large, essential additions to the rectangle, and where there are only small, tacked-on details.
In case it’s not obvious, this is a metaphor for case conceptualization. The data is the details, and the rectangle is how you arrange them into a "big picture." Get the rectangle first, and then augment, complexify, and fill in. This step is all about finding the right rectangle. We'll discuss some common and advanced rectangles that other psychologists have found helpful for their cases, as well as how to develop your own.
When Writing the Report or Giving Feedback:
Step 9. Zoom
No, not the online meeting platform. This step is about zooming in and out between various levels of analysis. The big picture and the details. The concrete and the abstract. The information the family already knows (what Therapeutic Assessment calls "Level 1" information), and the ideas that will be completely new to them ("Level 3" information).
Zooming in and out between different levels of analysis helps you contextualize the test results. It helps you generalize the results to the child's real life. It helps you "match the specificity of your recommendations to the specificity of the situation," as the authors of Essentials of Assessment Report Writing, Second Edition suggest. It helps you prioritize the test results, so you can progress from Level 1 to Level 3 in feedback sessions.
Zooming in and out also allows you to model a new approach to the child's problems right in your recommendations. Need parents to be more concrete and specific with their child? Write concrete and specific recommendations.
Step 10. Ask Yourself Some “Simple” 2-Word Questions
The authors of Essentials of Assessment Report Writing, Second Edition, note that recommendations must follow the 4 Ps: they must be prescriptive, positive, practical, and possible to implement. Families -- quite sensibly -- won't follow recommendations that are too vague, too numerous, or too complex. They also won't follow recommendations that are too time-consuming, too hard, or which seem unnecessary.
This step is about writing recommendations families can, will, and want to follow. We'll talk about some 2-word questions you can ask yourself (like "What's next?") that will help your recommendations write themselves. You'll be able to provide the best next step for families, now that they truly understand their child's needs and the conditions under which he excels.
So... what's next?
Stay tuned for next week's post on Step 1! In the meantime, comment below with your best tip for conceptualizing cases or writing recommendations.
A note on sources:
Surprisingly, many texts on assessment, child evaluations, and/or school or pediatric neuropsychology do not include any content on case conceptualization or recommendations that are specific to the child (rather than specific to the diagnosis – most texts do include bullet lists of, for example, recommendations for dyslexia).
Other texts cover these topics in a few pages, or at most, one or two chapters. Sometimes those few pages or chapters, while brief, are exceptional and provide key insights. Where appropriate, I will source those references directly.
Occasionally and wonderfully, some texts devote much more coverage to these topics. I would like to highlight those references here.
The following is a list of some of the best and most comprehensive sources I’ve found on these topics, all of which have shaped my thinking and contributed to the model described in this blog post series:
Psychological Testing That Matters, by Anthony Bram and Mary Jo Peebles
In Our Clients’ Shoes, by Steven Finn
Revealing Minds, by Craig Pohlman
School Neuropsychology, by James Hale and Catherine Fiorello
Conducting Psychological Assessment, by Jordan Wright