Monday, August 13, 2012

AEJMC Chicago Day 1

After the CDC conference I hopped on a plane to Chicago for the Association for Education in Journalism and Mass Communication's annual conference. It was the centennial conference for the organization and an amazing week. I am a student member of the organization and a graduate student member of three of its divisions: Mass Communication and Society, Communication Technology, and Communication Theory and Methodology. I have never attended the full conference, although I have taken part in the Southeast Colloquium. To add to the excitement of the week, my husband was able to join me in Chicago and attend some of the events.

I decided to write a separate post for the first day of the conference because of a very interesting panel I attended, and because this was also the night of the keynote speaker.

The first panel I attended at the conference was "Experimental Methodology in Mass Communication: How to Improve as Scholars and Reviewers," led by Rob Wicks (Arizona), Esther Thorson (Missouri) and Glenn Leshner (Missouri). This was a great decision on my part: the panel resonated with every session and presentation I attended for the rest of the conference, and its advice can serve as a practical checklist as I continue my own research. The panelists outlined 7 attributes that should be included in any experimental research paper, which they also describe in their December 2011 article in Journalism & Mass Communication Quarterly:

  1. Clear explication of the theory being tested, and explanation of how the posited relations among independent, dependent, moderator, mediator and control variables relate to that theory.
  2. Causal relationships (with a very clear explication of how the experimental design will demonstrate causal relationships between a and b)
  3. Clarity in conceptualizing media stimuli
     - According to Dr. Thorson, we “...need to move toward understanding the physical structures in messages that create psychological processes”
  4. Clear identification of hypotheses and research questions (they also note that if there are too many RQs then the project needs re-thinking)
  5. Clear specification of the sample and acknowledgement of its limitations
  6. Correct specification of effect size, power, number of participants and alpha levels (see the sketch after this list)
  7. Eliminating alternative explanations
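Point 6 is one spot where it pays to run the numbers before collecting any data. Here is a minimal a priori power-analysis sketch using Python's statsmodels package; the effect size, alpha, target power and sample size are all illustrative assumptions, not numbers from the panel:

```python
# A priori power analysis for a simple two-condition experiment.
# All values (effect size, alpha, target power, n) are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per condition to detect d = 0.5
# at alpha = .05 with 80% power?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                    alternative='two-sided')
print(f"Participants needed per condition: {n_per_group:.0f}")   # roughly 64

# Flipped around: if we can only recruit 30 per condition, how much
# power do we actually have for that same effect?
achieved_power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30,
                                       alternative='two-sided')
print(f"Power with 30 per condition: {achieved_power:.2f}")      # roughly 0.48
```

Specifying these numbers up front, rather than reporting them after the fact, seems to be the spirit of point 6.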
There were a lot of components to think about during this panel session. While some of these seven attributes seem obvious, the panelists brought up different ways of thinking about experiments. I'll touch on a few of the main things that stood out to me from this session.

First, the panelists reminded the audience several times that our main goal in experimental research is to understand causality. This should always be at the forefront as we think through the theories, planning, execution, and results of our experiments, and it permeates all areas of our studies, including how we think about our sample and generalizations. This is interesting to me. We often think about the samples we use, which are many times college students in a convenience sample. Even when we randomize conditions, the sample itself is not truly random, and this is usually listed as a limitation in studies. However, by focusing on causality, as well as our theory and constructs, we become more concerned with generalizing the constructs and causes than with generalizing to a population; in fact, generalizing the construct, cause, or theory is usually the goal. Perhaps we shouldn't be so hard on ourselves and so quick to point out population limitations? We need to think about the overall goal of the experiment AND we need to clearly point all of this out in our paper.
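To make that distinction concrete: random assignment of participants to conditions (which supports causal claims) is separate from random sampling from a population (which supports generalizing to that population). A tiny, made-up sketch in Python, with hypothetical participant IDs:

```python
import random

# A convenience sample -- say, students who signed up for course credit.
# IDs are made up for illustration.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Random assignment: shuffle the (non-random) sample and split it
# evenly across the two experimental conditions.
random.seed(2012)          # fixed seed only so the example is reproducible
random.shuffle(participants)
half = len(participants) // 2
conditions = {
    "treatment": participants[:half],
    "control": participants[half:],
}
print(conditions)

# The sample is still a convenience sample (a population limitation),
# but assignment to conditions is random, which is what lets the
# design speak to causality.
```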

Second, the sixth point in the outline above (effect size, power, participants and alpha levels) speaks to a couple of things I've often wondered about. The panelists really alluded to the need to understand what our numbers mean. With the use of computer programs, it's easy enough to produce results and plug in our Fs and p values. If the p is greater than .05, we say we found no effect, and generally we don't budge from that guideline. However, there are cases in which a p of .06 shouldn't be ignored. As one panelist said, "surely God loves the .06 as much as He loves the .05." The problem is that many researchers do not really know what all of the numbers tell us, so they won't argue for the use of a .06 in a paper. What have we missed because of this? I'm definitely glad I took the stats course I did last semester, as it may assist me in this process at some point!
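As a purely illustrative way to see why a hard .05 cutoff can mislead, here is a small simulation sketch in Python (the means, sample sizes and seed are all made-up values): repeated experiments drawn from the same underlying effect can land on either side of .05, so reporting the exact p along with an effect size like Cohen's d says much more than "significant or not."

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def run_experiment(n=40):
    """Simulate one two-condition experiment with a true effect of d = 0.5."""
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treatment = rng.normal(loc=0.5, scale=1.0, size=n)
    t_stat, p_value = stats.ttest_ind(treatment, control)
    # Cohen's d from the pooled standard deviation
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    return p_value, d

for i in range(5):
    p_value, d = run_experiment()
    print(f"Run {i + 1}: p = {p_value:.3f}, d = {d:.2f}")

# With this design (true d = 0.5, n = 40 per group) power is only about
# .60, so the estimated effects hover around 0.5 while the p values
# scatter above and below .05 from run to run -- the ".06 vs .05" point.
```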

Four pages of notes later, it's safe to say this panel was very helpful to me. I will be hanging onto this list and my notes for the foreseeable future! I'm also anxious to get back into experimental studies with renewed passion! Bring on the semester!

End of Day
Finally, the end of the day brought the celebration of AEJMC's centennial (check out Twitter with #AEJMC12 for more highlights) and the keynote speaker, Richard Gingras of Google News Products. He was quite a captivating speaker who explored the future of journalism and journalism education and the changes both need to make. A wonderful recap of his talk can be found here! My husband and I both recommend reading through it!

After stuffing ourselves with Chicago treats at the reception, the first full day of the conference came to a close with excitement in the air. 

Stay tuned for a post on the rest of the conference, including my own research, coming shortly!


