8 thoughts on “Hyaluronic acid injection for arthritis of the knee.”
Pat Walsh
Would you consider this a flawed study? It sounds like the study was tilted slightly in favor of finding the hyaluronic acid treatment beneficial. What's the approval process for studies?
All studies are flawed, because doing good science is hard. But the biggest problems with studies are: 1) The outcome is hard to measure. All science must start with accurate outcome measures. This study used pain as the outcome, and pain is highly subjective and difficult to measure accurately. The scale used was the Visual Analogue Scale (VAS): a patient marks a point on a scale from 0 (no pain) to 100 (worst pain). Patients often change their point on the scale hourly. In addition, this sort of outcome measure is subject to bias, such as social expectation. If patients know they are getting something, they expect it to work, so they rate their pain differently even if there is no real difference at all (the placebo effect). The other issue with outcome measurement is when it should be measured. In this study, the only difference between the study treatment and the placebo appeared at 26 weeks. Why 26 weeks, and not 12 or 8 or 3, like other studies? The authors give no reason for the timing of the outcome measure.
2) Small numbers of patients in the trial. This is a common problem, and 250 people per study group is a small number. Small numbers often lead to unequal, unbalanced numbers of patients with important confounding prognostic conditions. For example, in this study, people with more severe arthritis were concentrated in the placebo group, and fewer people in the placebo group had fluid removed from their knees. These sorts of imbalances limit the interpretation that the treatment (hyaluronic acid) is better.
3) Failure to "mask" people to the treatment. If a doc or patient knows what they are giving or getting, the study is unblinded and therefore of no value when the outcome measure is subjective. A good trial will assess how well the docs and patients were blinded (kept from knowing which treatment they were getting). This study did not do that, and since the outcome was better only at 26 weeks and not before, it makes us wonder whether the patients and docs figured out which treatment was given.
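To see why unblinding plus a subjective scale is so dangerous, here is a toy simulation (not from the study; the 8-point expectation shift and the 15-point hour-to-hour swing are made-up illustrative numbers). Both simulated arms have identical true pain; only expectation differs:

```python
import random

random.seed(0)

def mean_reported_vas(n_patients, expectation_shift):
    """Average reported VAS pain (0-100) for n_patients.

    Every simulated patient has the same true pain (about 50) plus a
    large hour-to-hour subjective swing. expectation_shift models an
    unblinded patient who believes the injection is "the real thing"
    and therefore rates the pain lower, with no biological change."""
    total = 0.0
    for _ in range(n_patients):
        reported = 50 + random.gauss(0, 15) - expectation_shift
        total += min(100.0, max(0.0, reported))  # VAS is bounded at 0 and 100
    return total / n_patients

blinded_placebo = mean_reported_vas(250, expectation_shift=0)
unblinded_active = mean_reported_vas(250, expectation_shift=8)

print(f"placebo arm mean VAS:  {blinded_placebo:.1f}")
print(f"'active' arm mean VAS: {unblinded_active:.1f}")
# The 'active' arm scores several points lower (better) purely from
# expectation -- no real treatment effect was simulated at all.
```

The point: when the outcome lives inside the patient's head, expectation alone can manufacture a "benefit" of the size trials report.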
So, for a person making a decision, you must know whether the study is good enough to use. 1) If the study uses subjective, hard-to-measure outcomes with little justification of how and when the outcome was measured, and 2) if there are imbalances in the compared groups, there has to be concern that the study is too flawed to inform a patient's choice.
There are other ways studies go wrong, and I will post a more thorough discussion in the coming weeks.
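The chance-imbalance problem in point 2 is easy to demonstrate for yourself. A sketch, with a made-up 30% prevalence of "severe arthritis" standing in for any important prognostic factor:

```python
import random

random.seed(1)

def arm_imbalance(n_per_arm, prevalence=0.30):
    """Randomize two arms of n_per_arm patients, each with a 30% chance
    of carrying a 'severe arthritis' marker, and return the gap between
    the arms' marker rates, in percentage points. Randomization balances
    confounders only on average; any single trial can come up lopsided."""
    a = sum(random.random() < prevalence for _ in range(n_per_arm))
    b = sum(random.random() < prevalence for _ in range(n_per_arm))
    return abs(a - b) / n_per_arm * 100

# Worst imbalance seen across 20 simulated trials at each size.
small = max(arm_imbalance(250) for _ in range(20))
large = max(arm_imbalance(5000) for _ in range(20))

print(f"worst gap with 250 per arm:  {small:.1f} percentage points")
print(f"worst gap with 5000 per arm: {large:.1f} percentage points")
```

With 250 per arm, gaps of several percentage points in a prognostic factor are routine; with thousands per arm they nearly vanish. (All numbers here are illustrative, not from the trial.)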
Who decides whether a study is too flawed to be accurate? Would it be a governing body like the FDA, or a group like the AMA? For example, if a doc tells a patient whether they are receiving the real drug or the placebo, I would expect the results of the study to be thrown out entirely. Is that what happens?
Without being a researcher myself, how can I discern which studies are worth my time?
First, I am not sure that a patient can, but I am willing to believe they can. I have been doing research and editing for over 27 years. I look at a paper and go straight to the flaws. But I have no special intellectual gifts, just expertise: I am a practiced evaluator of evidence. That is why I have this site. I hope to be the voice of the data for people who must decide.
However, anyone can probably learn to evaluate science better. I hope to teach medical data evaluation to middle- and high-school students. Your question will spur me to write and make videos on how to read the medical literature. But for now, here are some hints.
Basically, if a study is not a randomized trial, don't read it. Period. Observational studies are useless for you. When you hear things like "exercise is good for you," "coffee is bad for you," or "two glasses of wine a day is good for you," you are reading junk. These untruths come from observing people who do or don't do these things and then following them to see what happens (observation). These sorts of studies abound, and they are unhelpful. An outline:
1) Ask: is this a randomized trial comparing clearly defined alternative options for my care? If no, stop. (Observational studies are unhelpful for decision makers.)
2) Ask: what clinical outcome is being measured? Is it measurable in a reasonably accurate way, such that others would measure it the same? If no to either, stop reading.
3) If the patients being studied were gathered by expediency (volunteers, or via the Internet, for example), cast a critical eye.
4) Read only articles indexed in PubMed, at the National Library of Medicine. (To find them, go to Google, type in your disease or question, and add site:pubmed.gov.)
5) For cancer care, read only the National Cancer Institute's PDQ, which gives up-to-date information on all cancers.
Another way to figure out whether a study is worthwhile is to ask: how would I study this? If you can't imagine getting an answer, the study won't get one either.
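The "exercise is good for you / coffee is bad for you" trap can be shown in a few lines. In this toy cohort (every number is invented), exercise has zero true effect on the outcome; healthier people are simply more likely to exercise, yet a naive observational comparison still "finds" a benefit:

```python
import math
import random

random.seed(2)

people = []
for _ in range(10_000):
    health = random.gauss(0, 1)                        # hidden confounder
    # Healthier people are more likely to exercise (logistic selection).
    exercises = random.random() < 1 / (1 + math.exp(-health))
    outcome = health + random.gauss(0, 1)              # depends on health ONLY
    people.append((exercises, outcome))

def mean_outcome(group_flag):
    vals = [o for e, o in people if e == group_flag]
    return sum(vals) / len(vals)

gap = mean_outcome(True) - mean_outcome(False)
print(f"apparent 'benefit' of exercise: {gap:.2f}")
# Clearly positive, even though exercise does nothing in this model.
```

Randomization breaks exactly this link between who chooses a treatment and how they would have fared anyway, which is why the observational "finding" above is junk.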
I really liked that you provided feedback about the two patients. Does it make a difference that each is in his 80s? Do you need to say something about that? Or about their activity levels or their pain scores? Just asking.
Do you think you need to explain terms like “saline”, “placebo”, or “randomized”? You did, a little, on ‘randomized’, when you mentioned flipping a coin; I’m a believer in the power of coherent narratives, which is how we humans apparently process information; so, something like, “I said ‘randomized’, but what does that mean? Well, imagine I’m trying to decide where to take my wife for our 50th anniversary and it is really important… there are 2 restaurants we could go to, and I need to choose… well, would you tell each restaurant you were coming in? or would you just eat at each restaurant maybe 3 times, at different times on different days, and choose different items each day? well, yes, because that would be a more careful test, or experiment, of the 2 restaurants… you might ‘randomize’ the dish and the time and the day because that would give you a better chance of pleasing your sweetie on this very special day… well, doctors want to please their patients, so they do tests just like this to see which medicine or injection or surgery works best”; this kind of explanation through narrative also worked for Socrates, prophets, messiahs, and Ronald Reagan, and we doctors fit right in there. Oh, also, you used the term “mean differences” which also might need a definition.
Thank you for your comments. I have decided to post on the language of science: random versus randomized, benefit and harm. In my book and my videos, I must get better at helping patients understand the concept of trade-offs. That is the problem with your restaurant example: subjective outcomes, driven by expectation, are not good-quality measures. You do not need to randomize to find the food you like, and the trade-off is that if you don't like it, you can try something else. Not so easy in a medical decision. In medical decisions we are not in charge of the outcomes, just the choice. After the choice, we may get good food or bad, and we won't be able to revisit it. Thanks for your wonderful comments and continued teaching.
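For readers who want to see what "flipping a coin" means concretely, here is a minimal sketch of trial allocation (the patient names and arm labels are made up for illustration):

```python
import random

random.seed(3)

def randomize(patients):
    """Assign each patient to an arm by the computer equivalent of a
    coin flip. Neither the doctor's judgment nor the patient's severity
    can influence the assignment -- that independence is what lets a
    trial attribute any later difference to the treatment itself."""
    return {p: random.choice(["hyaluronic acid", "saline placebo"])
            for p in patients}

patients = [f"patient_{i}" for i in range(10)]
for name, arm in randomize(patients).items():
    print(name, "->", arm)
```

Real trials use more elaborate schemes (blocking, stratification), but the heart of it is this: chance, not anyone's opinion, decides who gets what.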
Wisdom: knee pain is a common complaint. It limits mobility, and it is no fun for a patient. The problem for a patient is that what works best is elusive. The literature seems to follow a pattern: drug versus placebo, drug better. Then drug versus drug, the same. Then steroid injection versus drug, the same; then steroid injection versus hyaluronic acid, the same; then, finally, hyaluronic acid versus placebo, and nearly the same. The big problem seems to be that they are all about equal to placebo. Do we have to learn to use cheap placebos that do no harm? A patient has a tough decision in knowing what is best. Thanks for your insights. The literature of subjective outcomes for orthopedic procedures seems problematic and unclear for a patient. Patients must discuss the limits of the literature with their docs to better understand what might be best for them.
A reader commented:
“Cognitive physicians” will starve talking to elderly patients about coping with knee pain. Synvisc is a profitable placebo.
Orthopods like it too. For them, it's small change. But they inject it with the understanding that if this doesn't work, there's always a TKR (total knee replacement).