
Information Hierarchies in the Hard Sciences


When conducting research and writing learning objects for a new course curriculum, an instructional designer sometimes learns something about inherent information hierarchies from how the subject matter experts (SMEs) respond to various sources of information. In other words, some sources are considered much more reputable and desirable than others.

It may help to begin with minimal assumptions. It is assumed that whatever sources are used are generally reputable and factual. The data must be mainstream, not political. The authors should apply sound logical analysis, be transparent, and define all terms and methods with clarity.

Empirical Scientific Research

The gold standard for research has to be the laboratory experiment with solid practices and clear recounting of the findings. As a colleague of mine who is an engineer says, most scientists are guaranteed publication as long as the research is new (and done correctly). In laboratories, various types of work may be achieved with a fairly high level of control. While such work may be expensive to execute, “in vitro” research sets the standard.

For other types of learning, “in vivo” research is often the only way to observe a phenomenon in the real world. Further, “in vitro” work may not be a possibility given the ethical standards governing certain types of research. Or real-world data may offer statistically stronger findings than the limited research possible in laboratories.

That said, the assumption is that quantitative methods will be used, supplemented by some qualitative insights (mixed methods). The hard sciences deal with issues of such precision that the proofs have to be quantitative and nuanced. The subjectivity of qualitative insights is not generally valued.

Analyses

Then, the next level down would be meta-analyses (quantitative, qualitative, or mixed methods), which synthesize research collected over many years (often a decade or more). Quantitative meta-analyses involve in-depth collection of research and the coding of findings into various schemas in order to draw conclusions about a particular swath of research in a field. Reviews are a lighter version of meta-analyses, in the sense that they contain analyses of multiple research articles. These summarize the current state of the art (almost like “executive summaries”), and they seem to be used to point the way to new research.

These may have interpretive value, but they don’t offer new research per se; they build on existing research. Any researcher or academic in the field should already be up on the latest work. It may be that such synopses are read by administrators who do not make the time to delve into the research and could do well with executive summaries of sorts. Or busy researchers and academics may dip into such analyses just to stay abreast of areas that are not their central concern (but are still peripherally relevant).

Analytical whitepapers that evaluate particular issues for policy implications or other purposes can be very solid sources. Here are professionals in the field offering their best take on its current state. There are also books, such as those from the National Academies Press, that summarize professionals’ insights from conferences, and those are edited deftly enough to be cited without much anxiety.

Local Cases

Next best would seem to be unique local cases, or the direct experiences of researchers who have observed particular phenomena in their lines of work. Cases are not very generalizable, so the scope of this information is narrow. Reader interest may be high, but in the context of research, cases are considered fairly limited.

Theoreticals, Models, and Simulations

Another category of publications includes theoretical papers. These may propose new models, offer simulations, or present a game-theory depiction of particular science-based phenomena. These are written with varying levels of rigor. Because they are hypotheticals and projections, they are not as respected. Certainly, there is a fundamental weakness in that these are not facts that map to the world per se. The ideal is that they do map to the world, but that is an assumption.

Position Papers

Position papers are those in which a notable individual in the field presents and defends an educated (one hopes) opinion. These are like the op-eds or editorials in newspapers. Many of them come across as show pieces. They take advocacy stances, which rankle purer scientists, who prefer facts over opinions. (There is an understanding that such works are needed to move a field forward, but they have little place in a learning object that is meant to be objective and science-based.)

Delphi Studies and Expert Interviews

Then, there are expert interviews, packaged as Delphi studies or panel discussions. These, too, are show pieces. They may capture intriguing insights, and as such, they are popular with readers. Given our celebrity culture, there is good curiosity value in the “star” researchers, writers, thinkers, and professors of a field.

And Crowd-Sourced

Finally, at the lowest level would be open-source information. An example could be Wikipedia, which is deeply appreciated for its open-source imagery, convenience, and accessibility, but the crowd-sourcing is problematic. Its contents are not vetted by an authority in the field, and the writing is often ambiguous enough that a novice approaching a topic may come away with some pretty major misconceptions.

While it may seem that resorting to “crowd-sourced” material would be hard to justify given the large amounts of information available, I will own up to having at least a dozen citations that went directly to Wikipedia in a recent course build. Sometimes, that’s the only source that has the level of generality I need…and is accessible. (From the Wikipedia citations, I will drill down and pursue original research sources whenever possible. If I have any other choices, I will use those ahead of Wikipedia, but this source is an excellent place to get an initial sense of a topic and to scrape a few research leads.)

Another example would be public blogs and wikis—without an overarching editorial hand. These are often collections of information only.

All the other sources above (except for the crowd-sourced data) are generally understood to be double-blind peer-reviewed, or at least to have professional editorial oversight.

What This Means for the Applied Research Work

All this means is that one should go for the best research available and work down the hierarchy as the sources thin out. This means starting with the most reputable journals and then following leads wherever they take the researcher. If this means going back in time for an older source (which may still be validly citable), then one should. If this means going to defunct journals and using interlibrary loan, that’s totally doable, too. Of course, all development work happens within a strict time frame, so I should clarify that this limitation also shapes what is practical.
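For what it’s worth, when triaging a long list of candidate citations during a course build, I find it helps to think of the hierarchy above as a simple ordered list. The little Python sketch below is only an illustration: the tier labels, the Source class, and the rank_sources() helper are hypothetical names of my own, not part of any tool or standard.

# A minimal sketch (assumed names throughout): the hierarchy as an ordered list,
# used to sort candidate citations best-tier first during a course build.
from dataclasses import dataclass

HIERARCHY = [                     # best first, mirroring the ranking above
    "empirical (in vitro / in vivo)",
    "meta-analysis / review / whitepaper",
    "local case",
    "theoretical / model / simulation",
    "position paper",
    "Delphi study / expert interview",
    "crowd-sourced (e.g., Wikipedia)",
]

@dataclass
class Source:          # hypothetical container for a candidate citation
    citation: str
    tier: str          # must be one of the HIERARCHY labels

def rank_sources(sources):
    """Order candidate citations from most to least preferred tier."""
    return sorted(sources, key=lambda s: HIERARCHY.index(s.tier))

candidates = [
    Source("Wikipedia overview of the topic", "crowd-sourced (e.g., Wikipedia)"),
    Source("Peer-reviewed laboratory study", "empirical (in vitro / in vivo)"),
    Source("Decadal meta-analysis of the field", "meta-analysis / review / whitepaper"),
]
for s in rank_sources(candidates):
    print(s.tier, "->", s.citation)

Obviously no script can judge the quality of an individual article; this only encodes the preference order so that the better tiers get attention (and development time) first.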

Finally, the research literature in any field has areas that are “silent,” where issues haven’t yet been addressed, or where there is wide debate, speculation, and tangled disagreement. To be fair and accurate, any learning objects addressing those issues have to reflect the holes in the literature. Given the fact that most articles are a half-year to a year old before they make it into publication, anything cited will almost necessarily be dated.

Comments

erwin 1 year, 10 months ago

I admit it's not easy to create a new course curriculum. There are many things that must be studied and tested for truth and benefit before being applied to real life. But whatever the case, for me the most important part is the learning process.
