The digital age has brought with it the rapid growth of fake news. Misinformation can quickly travel to every web browser for consumption and reaction by those who wish to believe it. We would expect the bar to be set higher for research; unfortunately, it can be riddled with bias, personal opinion, and at times a lack of diligence.
Quantitative research may appear to have the advantage simply by the type of data it generates. The cold, hard bottom line appears trustworthy, or at least solid enough to make a case for a particular position. Of course, we need to know how the researcher's bias was inserted into the collection and analysis of that data, which still requires additional effort to determine trustworthiness.
Regardless of whether the research is quantitative or qualitative, a few questions need to be answered to determine whether it has any potential for truth.
Does this research sound logical?
A little common sense goes a long way. An empirical study details its methodologies and tools, as well as the handling and analysis of all collected data. If this information isn’t noted, it is difficult to trust any points the researcher makes.
The context should be logical. The research challenge and approach should be in the same context as the findings. We can’t cut the puzzle pieces to fit; instead, the puzzle should be assembled and viewed as it exists.
Correlations should also be logical. For example, we could say there was a worldwide increase in jellybean sales when the coronavirus became a pandemic. Can we draw the conclusion that jellybeans create pandemics? Not really; there might be another reason for high jellybean sales, like Easter. Any research that makes erroneous or spurious correlations to support its point lacks truth.
Did the researcher note credible resources?
Most journals post how many times a paper has been cited and downloaded. This can add some level of credibility; however, it should be put into a realistic perspective. Newer research might be credible but obviously won’t be cited as often as older research, since it hasn’t been available as long.
Being new to academic research, I wouldn’t read a name or study and know whether a true expert is being cited. On my first literature review, I found papers that cited other studies I had found in my own search. In my limited experience, this lent a level of credibility to both the citing paper and the cited paper.
What evidence is there of validity?
In looking at empirical evidence, there should be an actual measurement, with statistical significance, to back up any findings. For example, I have found a lot of gamification research noting the need to measure how working memory is influenced by game play. Many researchers even allude to successfully finding how working memory is influenced, even though no measurement was taken. First, such a finding can’t be trusted, since there is no data to support it. Second, the study loses credibility, since its findings were based on opinions and wishes instead of measurement.
It is easy to imagine how this could happen: a researcher would not put time into a particular topic without an opinion or an outcome they wanted to find. Add to this passion corporate partnerships. When a business entity monetizes research, it wants to draw conclusions that promote its stance. The bottom line is that findings should reflect the actual data that was gathered, within the context it was intended.
Can the research be repeated?
The research should detail the measurement tools and methodologies needed to repeat the study. Being able to repeat the research with the same or similar findings shows validity. However, different findings shouldn’t completely discredit the initial research. Any replication study requires scrutiny to determine where it aligns with and differs from the original study, as well as its overall validity.
This post is more of a summation of what I have observed, experienced, and learned so far. I believe I will revisit this topic and edit over time.