Overview
At Lyngo Lab, we are committed to scientific rigor and evidence-based practices. In an era of misinformation and unsubstantiated claims, we take pride in providing reliable and trustworthy insights to help individuals improve their communication skills.
Our research methodology is grounded in state-of-the-art scientific principles, employing randomized controlled trials with large and diverse sample sizes. Unlike other sources that rely on anecdotal evidence or personal experience, we utilize the power of empirical experimentation to uncover the most effective strategies for enhancing communication.
By conducting experiments that systematically test different linguistic choices, writing styles, and communication formats, we are able to provide our readers with evidence-backed recommendations to achieve real and measurable improvements in their writing and speaking.
Study Design
To ensure our research is as accurate and reliable as possible, we use between-subjects randomized controlled trials (i.e., randomized experiments) to investigate our communication questions of interest. Randomized experiments have long been considered the “gold standard” of research, as evidenced by their pervasive use in fields like medicine and psychology.
Specifically, we use survey experiments in which we randomly assign participants to view a particular block of text, an image, or a task, then measure our audience outcomes of interest via survey questions. Each study takes a few minutes to complete, depending on the tasks, and participants are paid approximately $0.15 per minute. We strive to recruit at least 400 participants per experiment, which gives us roughly 80% statistical power to detect small effect sizes.
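As a rough illustration of that sample-size target, a calculation like the one below can be run with Stata's built-in power command; the effect size of 0.28 standard deviations is an assumption chosen for illustration, not a threshold we apply to every study.

    * Illustrative power analysis (assumed effect size of 0.28 SD, standardized outcome)
    * Minimum group sizes for a two-sample comparison of means,
    * two-sided alpha = 0.05, 80% power
    power twomeans 0 0.28, sd(1) power(0.8)
    * Reports roughly 200 participants per condition (about 400 in total)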
We use two primary platforms when recruiting participants for our online surveys and experiments. These platforms are Amazon Mechanical Turk (MTurk) and Prolific.
MTurk is a well-known platform where everyday people perform simple Human Intelligence Tasks (HITs) in return for payment. It is commonly used by university professors for academic research, as well as by market researchers and others with short, straightforward online tasks (e.g., image tagging). The MTurk population has historically been fairly representative of the U.S. population (Paolacci et al., 2010) and reliable for establishing internal validity (Berinsky et al., 2012).
Similarly, Prolific is an online research platform focused specifically on providing participants for rigorous academic and practitioner research. Prolific includes additional features such as demographic targeting and balanced samples (e.g., 50% male and 50% female).
Once the allotted number of participants has completed the study, we download the data from our survey platform as an Excel file and clean each variable so the data is ready for analysis.
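A minimal sketch of that import-and-cleaning step in Stata is below; the file name, variable names, and cleaning rules are hypothetical and vary from study to study.

    * Hypothetical import and cleaning step (file and variable names vary by study)
    import excel using "study_data.xlsx", firstrow clear   // first row holds variable names
    drop if missing(condition) | missing(outcome)          // drop incomplete responses
    encode condition, gen(cond)                             // assuming condition is stored as text, convert it to a numeric factor
    label variable outcome "Primary survey outcome"
    save "study_data_clean.dta", replace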
Analysis and Reporting
We use Stata statistical software to analyze our data. Each analysis is chosen based on the specific research design and question being asked. Most of our questions are straightforward comparisons of two conditions in a between-subjects experiment. In these cases, we use independent (i.e., two-sample) t-tests or ordinary least squares (OLS) regression analysis. For interactions (e.g., where the results are influenced by another variable such as gender or age), we use multiple regression analysis with an interaction term.
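For a simple two-condition study, those analyses look roughly like the Stata sketch below (the variable names are hypothetical and carry over from the cleaning step above).

    * Two-condition, between-subjects comparison (hypothetical variable names)
    ttest outcome, by(cond)                 // independent two-sample t-test
    regress outcome i.cond                  // equivalent OLS regression on the condition indicator
    * Interaction with a moderator such as gender
    regress outcome i.cond##i.gender        // multiple regression with an interaction term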
In the event we use a within-subjects experimental design (i.e., when each participant sees both experimental conditions, but the order is randomized), we use a paired-samples t-test. For interactions of a within-subjects variable and a between-subjects variable, we use mixed effects regression models.
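In Stata, those within-subjects analyses look roughly like this, again with hypothetical variable names; the paired t-test assumes one row per participant with a column per condition, while the mixed model assumes the data have been reshaped to one row per participant per condition.

    * Within-subjects comparison: one row per participant, one column per condition
    ttest outcome_a == outcome_b            // paired-samples t-test
    * Mixed design: long format, one row per participant per condition,
    * with a random intercept for each participant
    mixed outcome i.within##i.between || participant_id: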
The primary results of interest, which we report for all of our studies, are the outcome averages for each experimental condition and the "p-value," which tells us the probability of observing a difference at least as large as the one in our sample if the true effect were actually zero. Lower p-values generally mean we can be more confident that the results are not just due to chance in sampling. We use a 95% confidence level for our statistical tests and treat p-values below 0.05 as statistically significant, though we caveat results with small effect sizes and p-values between 0.01 and 0.05.
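As a worked illustration with made-up numbers, a t statistic of 2.10 on 398 degrees of freedom (two groups of roughly 200 participants) corresponds to a two-sided p-value of about 0.036, just under the 0.05 threshold.

    * Two-sided p-value for a hypothetical t statistic of 2.10 with 398 degrees of freedom
    display 2 * ttail(398, 2.10)            // prints roughly 0.036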
To graphically present the results, we use Stata's graphing program to produce a bar chart with one bar for each experimental condition, along with standard error bars to show the level of uncertainty, as is the norm in psychology research.
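A sketch of how such a chart can be built in Stata is below, assuming the hypothetical cleaned variables from the earlier steps; the condition labels are placeholders.

    * Bar chart of condition means with standard error bars (hypothetical variable names)
    preserve
    collapse (mean) mean_y=outcome (semean) se_y=outcome, by(cond)
    gen hi = mean_y + se_y                  // upper end of the error bar
    gen lo = mean_y - se_y                  // lower end of the error bar
    twoway (bar mean_y cond, barwidth(0.6)) (rcap hi lo cond), ///
        xlabel(1 "Condition A" 2 "Condition B") ytitle("Mean outcome") legend(off)
    restore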
Best Practices
In addition to rigorous research designs, we follow several best practices recently recommended by the American Psychological Association and other top academic institutions. These best practices include the following:
- Report the results of all studies, regardless of the outcomes
- Make available all research materials
- Make available all data
- Use large samples to help detect small effect sizes
- Replicate our own studies
- Be clear about the limitations of our research
References
Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Political Analysis, 20(3), 351-368.
Bohannon, J. (2015). Many psychology papers fail replication test. Science, 349(6251), 910-911.
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637-644.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.