Special thanks to Sara Kyle, co-author of this piece. You can read more about her experience in senior living here.
Over the last several months, the senior living industry has seen more published data on wellness offerings. A few examples include this report from Senior Housing News (SHN) and the 2017 ICAA/Promatura Wellness Benchmarks report. I'm thrilled that organizations are taking a stronger and more consistent approach to measuring the impact of wellness for older adults in senior living. We can all benefit by being more informed, but I want to offer a few cautionary notes about the data.
As you read the reports, articles, and posts, it's easy to get swept up in the headlines and colorful images. Instant validation seems logical when the numbers back up our own experiences. But just beneath those captivating soundbites are sample-size issues, inconsistently defined terms, and problematic comparisons between a study population and real-world groups. We've seen these research challenges for years in corporate wellness (check out this blog for a consistent digest of how the corporate wellness industry has routinely gotten it wrong). I'd hate to see senior living go down that same path.
Here are a few examples from the reports noted above that spark additional questions when you dig a little more deeply into the numbers:
Who makes up the sample and how many of them are there?
In the SHN report, the authors note that 308 adults age 65 and older were polled using a Google survey. We lack key information about these 308 respondents. For example, we don't know whether those surveyed are employed, whether they're community-dwelling, whether they have health issues, whether they're living with government assistance, what their faith background is, and so on. And while 308 respondents may sound like a substantial sample, it may or may not be large enough to support statistically significant conclusions about the broader population. These missing elements don't mean the survey findings are unimportant, but they do mean we need to take a measured approach to digesting what's offered.
We also need to ensure that study limitations (like sample size) are included in the write-up, because those limitations affect how we assess the information for validity, reliability, and transferability to other populations. Limitations don't necessarily render the research incorrect or useless, but they do provide important context for the findings, as well as for how we might move forward to study a similar topic.
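To put the sample-size caution in perspective, here's a minimal back-of-the-envelope sketch (mine, not from either report) of the margin of error for a survey of 308 respondents. The 50% proportion, the 95% confidence level, and the simple-random-sample assumption are all illustrative.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a survey proportion
    (normal approximation, simple random sample assumed)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: 308 respondents, worst-case proportion of 50%
n = 308
moe = margin_of_error(n)
print(f"n = {n}: roughly +/- {moe * 100:.1f} percentage points at 95% confidence")
# Prints roughly +/- 5.6 percentage points -- and that's before any
# subgroup breakdowns, which shrink n and widen the interval.
```

In other words, 308 respondents can be perfectly reasonable for a top-line estimate, but the moment results are sliced by employment, housing situation, or health status, each slice carries a much wider margin of error than the headline number suggests.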
What do we mean by engagement?
It's common to see terms like engagement and participation when reviewing data related to wellness in senior living, but those terms often aren't clearly defined. In one case, I found (after some digging and discussion with the publishing organization) that participation was defined as residents choosing at least one activity per month. When NIFS staff report to communities about participation rates in the fitness program, we're providing data on residents who visit 1x, 5x, and 8x per month. It's easy to see how the lack of a standard definition for participation could skew a comparison between those two data sets.
You might think participation is fairly cut and dried. And I suppose if our single focus is measuring the number of behinds in the seats, then participation is clear. But we also know that headcounts don't always mean the individuals are involved in the activity. It takes a very generous view of participation to assume that a resident who sleeps through a stretching class received the intended benefit from it. And that's where engagement comes in; it's definitely a moving target, highly subjective and very individual. But the individual who is engaged in the stretching class is moving his body, making eye contact with the instructor, and responding to feedback or changes in the activity. While some people use engagement and participation synonymously, they are not the same thing.
Is selection bias an issue?
It might be. Here are a few ways I saw it play out in the two reports I've mentioned:
- The ICAA notes that 89% of older adults living in Life Plan communities who are tracked through its benchmarking tool self-rate their health as good or excellent, while only 68% of age-matched older adults living outside such communities rate themselves the same. That's a huge boon for housing operators, but this data suffers from self-selection bias: a variety of factors well beyond the community's control may contribute to the higher scores for residents and the lower scores for non-residents.
- The SHN report profiles a fall prevention program where the program operators note that baseline data showed 38% of residents in the community had suffered one or more falls. One year after implementing their initiative targeted at reducing falls, they noted that the incidence rate had gone down 10%. What wasn't noted in the report was a list of potential reasons for the decreased rate of falls that are completely unrelated to the initiative, such as variation between the pre- and post-samples and the increased likelihood that residents will under-report falls (particularly when they know they're being watched for falls). The program providers indicate that they've saved the community $500,000 with this fall prevention initiative, but that savings figure assumes we can assign a dollar value to the falls that never happened. I'm not aware of a concrete way to value prevention; it's one of the great shortcomings of preventive health strategies. (For a sense of how much the missing details matter here, see the sketch after this list.)
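To make that point concrete, here's a minimal, entirely hypothetical sketch; the report gives neither the community size nor whether "down 10%" means a relative or an absolute drop, so both are assumed below along with a community of 200 residents. It's meant only to show how much those missing details matter, not to re-analyze the program.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical community of 200 residents at baseline and follow-up
# (the report does not state the community size).
n = 200
baseline_fallers = round(0.38 * n)       # 38% reported at baseline

# "Down 10%" read as a relative drop: 38% -> ~34.2%
relative_fallers = round(0.342 * n)
# "Down 10%" read as an absolute drop: 38% -> 28%
absolute_fallers = round(0.28 * n)

for label, x2 in [("relative", relative_fallers), ("absolute", absolute_fallers)]:
    z = two_proportion_z(baseline_fallers, n, x2, n)
    verdict = "beyond what chance alone explains" if abs(z) > 1.96 else "not distinguishable from chance"
    print(f"{label} interpretation: z = {z:.2f} ({verdict} at the 5% level)")
```

Under one reading of "down 10%," a community of this size could see that change from year-to-year noise alone; under the other, the change looks real. That's exactly why definitions and sample sizes belong in the write-up.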
How can we do better?
While there are some holes in the data coming out on wellness in senior living, I think the research should continue. Below are a few areas where we could all improve the quality of what we're releasing, for the greater benefit of the residents we're serving.
- Let's get industry clarity about how we define wellness, because right now we see it as the “wellness gym”, the “wellness nurse”, the “wellness staff” who are really fitness center staff, and the “resident wellness committee” that plans activities that may or may not be tied to purposeful living. Gaining a clearer, shared definition of what we mean when we say resident wellness gets us all started on the same page.
- Let’s get clarity about how we define engagement and participation. To me, defining participation as 1x per month seems kind of low, but if we’re going to agree to that baseline, then at least it's a starting point.
- Let's find value beyond hard numbers. The ICAA does a great job of profiling and recognizing fantastic programming provided by third-party providers as well as directly by housing operators. There are similarly interesting initiatives throughout the SHN report. Continuing to share meaningful lifestyle offerings is a win for everyone.
- Let’s use data where it’s significant and less subjective. For example, one of the programs outlined in the SHN report showed an operator demonstrating a 50% average improvement for residents who completed baseline fitness testing and repeat testing. Between their testing periods, participants engaged in exercise prescribed for them by a trained fitness professional. This isn't a complicated initiative; our staff offer something similar in our client communities, and the data is hard to dispute.
When you're paying to download a report that promises reliable numbers and meaningful information, it's okay to ask questions about what's being offered and whether it will translate to your environment. It's also okay to question the study design to better understand definitions inherent to the outcomes.
We have a long way to go as an industry to tighten up research so that our evidence-based practices are better. Do you have other areas in senior living research or in wellness specifically where you think we can all do a little better? Comment below to keep the discussion going.