Jakob Ohme

Jakob Ohme is a Postdoc at the Amsterdam School of Communication Research (ASCoR), University of Amsterdam. As part of the Digital Communication Method Lab, he develops and tests new methodological approaches to study media exposure, political behavior and civic attitudes.

Cornelia Mothes

Cornelia Mothes is a Professor of Media Management and Journalism at Macromedia University of Applied Sciences, Leipzig. Her main research interests lie in the field of political psychology, with a focus on experimental research testing communication strategies to increase political tolerance and participation in digital democracies.

During the Covid-19 outbreak, the spread of false information on social media has often been linked to citizens’ motivation to participate in protests against government lockdown measures or to attend ‘corona parties’. Although such relationships may exist, we should be aware that the spread of (dis-)information does not equal exposure, and hypothesized media effects can only arise through exposure.

Yet, in year 14 of Facebook’s existence, knowledge about exposure mechanisms on social media is still sparse. Only recently have insights, such as the finding that people click on only 4% of the posts they see in a newsfeed, advanced our understanding of how people attend to information on social media. One reason for this sparsity is the lack of experimental studies that take the natural behavior of social media users into account. To remedy this shortcoming, it can be helpful to move social media experiments closer to real-world exposure settings by increasing their ecological validity, meaning that findings can be generalized to users’ everyday behaviors outside the experimental context.

Realistic stimuli, natural exposure situations, lifelike responses

Ecological validity refers to the extent to which the experimental environment that research subjects are exposed to relates to participants’ real-world settings, so that observed behavior within the experimental context can be generalized to non-observed behavior outside this context (see Bronfenbrenner, 1977). Ecological validity can be increased on three levels: the stimuli used, the experimental setting, and the responses recorded.

Regarding stimuli, the materials employed in experiments should resemble the natural appearance of the content users ordinarily encounter. In a recently conducted study, we aimed for this natural appearance by developing the Newsfeed Exposure Observer (NEO) Framework, which allowed us to record viewing time, click decisions, and reading/watching time of social media posts.
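To make the three recorded measures concrete, the following is a minimal sketch of the kind of per-post exposure log such a framework might keep. The class and field names are our illustration for this post, not the NEO Framework’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical per-post record: names are illustrative, not NEO's API.
@dataclass
class PostExposure:
    post_id: str
    viewing_time: float = 0.0   # seconds the post was visible while scrolling
    clicked: bool = False
    reading_time: float = 0.0   # seconds spent with the linked content

class ExposureLog:
    """Accumulates viewing time, click decisions, and reading time per post."""

    def __init__(self):
        self.posts = {}

    def _get(self, post_id):
        return self.posts.setdefault(post_id, PostExposure(post_id))

    def view(self, post_id, seconds):
        # Called whenever a post is (partly) on screen while the user scrolls.
        self._get(post_id).viewing_time += seconds

    def click(self, post_id, reading_seconds):
        # Called when the user clicks a post and spends time with its content.
        record = self._get(post_id)
        record.clicked = True
        record.reading_time += reading_seconds
```

A session would then reduce to a stream of `view` and `click` calls, from which attention and engagement can be analyzed separately per post.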

Example of newsfeed created by NEO Framework

The original look and feel of the Facebook newsfeed was used to increase the realism of the exposure platform. It also allowed us to randomly select and display 100 items that had been posted on Facebook on the very day of the experiment, thereby upholding the stimuli’s actual news value. To account for the original ‘social’ meaning of each post, we additionally stored the genuine type and order of emojis that each post had received on social media.
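The sampling step described above can be sketched as follows. This is an illustrative reconstruction under assumed field names (`posted_on`, `emojis`), not the framework’s actual code; the key points are the same-day filter, the random draw of 100 items, and the preservation of each post’s original reaction order.

```python
import random

def sample_stimuli(candidate_posts, today, k=100, seed=None):
    """Draw k posts published on `today`, keeping reactions in original order.

    Illustrative sketch: `candidate_posts` is assumed to be a list of dicts
    with a `posted_on` date string and an ordered `emojis` list.
    """
    rng = random.Random(seed)
    # Keep only posts from the day of the experiment (actual news value).
    same_day = [p for p in candidate_posts if p["posted_on"] == today]
    selected = rng.sample(same_day, k)
    for post in selected:
        # Copy the reaction list as-is: genuine type AND order are preserved.
        post["emojis"] = list(post["emojis"])
    return selected
```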

Beyond stimuli, subjects should experience study settings as close to their everyday environment as possible to achieve high ecological validity. Naturally, forced exposure studies—and even forced selective exposure studies—can never score fully on this scale. However, it may be a valuable first step to re-create complex online exposure environments by having participants take surveys at a location and on a device they usually use for the studied type of behavior. Therefore, in a second study, we investigated attention to posts in a dynamic newsfeed on smartphones by using mobile eye tracking. In addition to the lab setting, we also conducted the study in the university cafeteria, where participants encountered a crowded and noisy environment and thereby more realistic exposure conditions.

Finally, in creating realistic stimuli and testing them in more natural exposure situations with the help of the numerous opportunities that digital technology provides researchers nowadays, it should also be possible to observe more natural responses. Give people a newsfeed with an unforeseeable number of posts and they will start scrolling, and thereby select content in a more realistic manner. Applying this degree of realism in our studies led us to observe two different levels of selective exposure: before users choose information they want to engage with, they are selective at a first level by giving different amounts of attention to posts as they browse the newsfeed. First-level exposure could thus be identified as an independent user pattern that does not necessarily lead to second-level exposure, that is, clicking on posts and spending time with linked content.
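Analytically, the two-level distinction can be read straight off logged exposure records. The sketch below is purely illustrative (the record format is our assumption): it counts posts that received any browsing attention (first level), posts that were additionally clicked (second level), and posts where attention did not lead to a click.

```python
def exposure_summary(records):
    """Summarize two-level selective exposure from per-post records.

    Illustrative only: each record is assumed to be a dict with
    `viewing_time` (seconds of attention while scrolling) and `clicked`.
    """
    # First level: any attention while browsing the newsfeed.
    first = [r for r in records if r["viewing_time"] > 0]
    # Second level: a click, i.e. engaging with the linked content.
    second = [r for r in first if r["clicked"]]
    return {
        "first_level": len(first),
        "second_level": len(second),
        # Attention that did NOT translate into a click.
        "first_only": len(first) - len(second),
    }
```

A non-zero `first_only` count is exactly the pattern described above: first-level exposure as an independent behavior that need not result in second-level exposure.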

In our mobile eye-tracking setting, we continue to put these mechanisms to the test: Does smartphone use lead to further changes in online browsing patterns? For instance, does news exposure on smartphones lead to different styles of cognitive processing as compared to using a desktop PC? And what does that imply for learning from the news and for political knowledge acquisition?

Can you have it all?

No. Improvements in ecological validity bring struggles, have limits, and complicate goals of internal validity. For example, conducting an eye-tracking experiment on smartphones with more than 100 participants over several weeks is almost incompatible with the aim of relying on steadily updated, current news posts. Hence, it is necessary to weigh high control of the (more natural) study setting against the realism of stimuli. From an analytical point of view, a greater number of stimuli, as included in the NEO framework, comes at the expense of statistical power in subsequent analyses. Additionally, from a practical angle, scholars’ attempts to increase ecological validity are prone to error. One day before our first study using the NEO framework, for instance, we needed to adapt the whole framework to Facebook’s sudden change of profile picture shapes from square to circle. And despite pretesting the framework extensively, we underestimated the IT processing capacity required to track several hundred participants within the more natural NEO environment, so we had to move the whole study to a different server before we were able to continue data collection. Moreover, giving up control leads to greater uncertainty.

Running an eye-tracking study in a cafeteria filled with hundreds of students who pass close by the expensive equipment with coffee and food makes you feel a certain unease. And you are almost bound to encounter that one student who unplugs the whole experimental setup because she wanted to charge her phone at this specific power outlet.

Is it worth it?

Yes. Increasing ecological validity is an important avenue for social media studies, and communication research in general, to explore further. Media of all kinds have become a constant part of most citizens’ lives in a variety of environments and situations. Assuming that outcomes of media use are unaffected by these quickly changing circumstances falls short of communication science’s task to provide evidence that helps society understand the consequences of living in a new, digital media environment.

In our studies, we found that political interest determines only first-level, but not second-level, selective exposure. The number of likes, views, or shares drives news selection on social media, and we find that these social endorsements may even mitigate partisan bias. The high ecological validity of these studies makes such novel findings more likely to generalize to the everyday lives of users. Combining the best of both worlds by increasing the realism of stimuli, study settings, and observed responses, while still keeping a high level of control over the setting, is a fruitful path towards more realistic assumptions about user behaviors and effects in digital news environments. Not each and every study needs to strive for high ecological validity, but if findings are to prevail in the literature, they must also be tested under more realistic conditions. To achieve this goal, ecological validity should become an important and systematically reviewed assessment criterion in communication research, and journal editors and reviewers should give value and credit to scholars who take their research ‘into the wild’.