What do researchers manipulate in an experiment?

However, researchers must prioritize, and often it is not possible to have high validity in all four areas. Morling points out that most psychology studies have high internal and construct validity but sometimes sacrifice external validity.

Again, to manipulate an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or so that the same group of participants is exposed to different levels at different times.

As discussed earlier in this chapter, the different levels of the independent variable are referred to as conditions, and researchers often give the conditions short descriptive names to make it easy to talk and write about them. Notice that the manipulation of an independent variable must involve the active intervention of the researcher.

Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore has not conducted an experiment.

This distinction is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals.

Thus the active manipulation of the independent variable is crucial for eliminating the third-variable problem. Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible.

For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to conduct an experiment on the effect of early illness experiences on the development of hypochondriasis. This caveat does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches.

We will discuss this type of methodology in detail later in the book. In many experiments, the independent variable is a construct that can only be manipulated indirectly. In such situations, researchers often include a manipulation check in their procedure. A manipulation check is a separate measure of the construct the researcher is trying to manipulate.

As we have seen previously in the chapter, an extraneous variable is anything that varies in the context of a study other than the independent and dependent variables.

In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their shoe size.

They would also include situational or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. They can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to control extraneous variables by holding them constant.

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. Imagine a simple experiment on the effect of mood (happy vs. sad) on memory for happy childhood events.

Participants are put into a positive or negative mood by showing them a happy or sad video clip and then asked to recall as many happy childhood events as they can. In an idealized version of the data, every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would be far messier.

Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective recall strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated.

Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data.
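
To make the role of variability concrete, here is a minimal simulation sketch (the group sizes, distributions, and numbers are hypothetical, not taken from the idealized data described above). It generates an idealized dataset in which every participant in a condition recalls the same number of events, and a noisier dataset with the same one-memory average difference, then reports how large that difference is relative to the spread within groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # hypothetical participants per mood condition

# Idealized data: no extraneous variability at all.
happy_ideal = np.full(n, 4.0)  # every happy-mood participant recalls 4 events
sad_ideal = np.full(n, 3.0)    # every sad-mood participant recalls 3 events

# More realistic data: the same 1-memory mean difference, but participants also
# differ in how many happy memories they have, recall strategies, motivation...
happy_real = np.clip(np.round(rng.normal(4.0, 2.0, n)), 0, None)
sad_real = np.clip(np.round(rng.normal(3.0, 2.0, n)), 0, None)

def summarize(label, happy, sad):
    diff = happy.mean() - sad.mean()
    spread = np.sqrt((happy.var(ddof=1) + sad.var(ddof=1)) / 2)  # pooled SD
    print(f"{label}: mean difference = {diff:.2f}, within-group SD = {spread:.2f}")

summarize("Idealized", happy_ideal, sad_ideal)   # difference 1.00, SD 0.00
summarize("Realistic", happy_real, sad_real)     # similar difference, much more spread
```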

One way to control extraneous variables is to hold them constant. This technique can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant.

For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres. Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as heterosexual, female, right-handed psychology majors of a single age. The obvious downside to this approach is that it would lower the external validity of the study, in particular the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger heterosexual women would apply to older homosexual men.

In many situations, the advantages of a diverse sample outweigh the reduction in noise achieved by a homogeneous one. The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable is an extraneous variable that differs on average across levels of the independent variable.

Imagine, for example, that participants' IQs vary in the mood experiment. As long as there are participants with lower and higher IQs at each level of the independent variable, so that the average IQ is roughly equal across levels, this variation is probably acceptable and may even be desirable.

What would be bad, however, would be for participants at one level of the independent variable to have substantially lower IQs on average and participants at another level to have substantially higher IQs on average. In this case, IQ would be a confounding variable. To confound means to confuse, and this is exactly why confounding variables are undesirable. Because they differ across conditions, just like the independent variable does, they provide an alternative explanation for any observed difference in the dependent variable.

If IQ is a confounding variable, with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition, then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher.
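
The logic of a confound can also be illustrated with a small, purely hypothetical simulation (the effect sizes, IQ values, and group sizes below are invented for illustration). When average IQ is balanced across mood conditions, the observed group difference tracks the mood effect; when IQ differs across conditions and also influences recall, the observed difference mixes the two effects and overstates the mood effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # hypothetical participants per condition

def simulate(mean_iq_positive, mean_iq_negative, mood_effect=1.0):
    """Recall = baseline + mood effect + a small contribution of IQ + noise."""
    iq_pos = rng.normal(mean_iq_positive, 10, n)
    iq_neg = rng.normal(mean_iq_negative, 10, n)
    recall_pos = 3 + mood_effect + 0.05 * (iq_pos - 100) + rng.normal(0, 1, n)
    recall_neg = 3 + 0.05 * (iq_neg - 100) + rng.normal(0, 1, n)
    return recall_pos.mean() - recall_neg.mean()

# No confound: both conditions have the same average IQ (100), so the
# observed difference is close to the true mood effect of 1.0.
print("IQ balanced:  observed difference =", round(simulate(100, 100), 2))

# Confound: the positive-mood group happens to have higher IQs on average,
# so the observed difference overstates the true mood effect.
print("IQ confounded: observed difference =", round(simulate(110, 90), 2))
```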

In a factorial design, two or more independent variables (factors) are manipulated simultaneously, for example instructional type and instructional time as factors affecting learning outcomes, and each combination of factor levels is a cell of the design. Sometimes, due to resource constraints, some cells may not receive any treatment at all; such designs are called incomplete factorial designs. Incomplete designs hurt our ability to draw inferences about the incomplete factors. In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors.

No change in the dependent variable across factor levels is the null case (baseline) against which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor.
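
As a concrete illustration, the following sketch works through a hypothetical 2 x 2 version of the instructional-type by instructional-time example (the cell means and factor levels are invented). Marginal means reveal the main effects, and the difference of differences across cells reveals the interaction.

```python
import numpy as np

# Hypothetical cell means for a 2 x 2 factorial design:
# rows = instructional type (e.g., traditional vs. online),
# columns = instructional time (e.g., shorter vs. longer sessions).
cell_means = np.array([
    [60.0, 70.0],   # traditional: shorter, longer
    [65.0, 85.0],   # online:      shorter, longer
])

# Main effects: compare the marginal means of one factor,
# averaging over the levels of the other factor.
type_means = cell_means.mean(axis=1)   # [65.0, 75.0] -> main effect of type
time_means = cell_means.mean(axis=0)   # [62.5, 77.5] -> main effect of time
print("Instructional type marginal means:", type_means)
print("Instructional time marginal means:", time_means)

# Interaction: does the benefit of longer sessions depend on the type?
time_benefit = cell_means[:, 1] - cell_means[:, 0]   # [10.0, 20.0]
interaction = time_benefit[1] - time_benefit[0]      # 10.0 -> nonzero, so yes
print("Benefit of longer sessions by type:", time_benefit)
print("Difference of differences (interaction):", interaction)
```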

Note that interaction effects, when present, dominate: it is not meaningful to interpret main effects if interaction effects are significant. Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomized block design, the Solomon four-group design, and the switched replication design.

Randomized block design. This is a variation of the posttest-only or pretest-posttest control group design in which the subject population can be grouped into relatively homogeneous subgroups, called blocks, within which the experiment is replicated. For instance, to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in each block are randomly split between a treatment group receiving the same treatment and a control group.
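
A minimal sketch of how assignment in a randomized block design might be carried out is shown below (the block labels and participant identifiers are hypothetical): within each homogeneous block, participants are shuffled and split between treatment and control, replicating the experiment in every block.

```python
import random

random.seed(42)

# Hypothetical participant pool, grouped into two homogeneous blocks.
blocks = {
    "university_students": [f"S{i:02d}" for i in range(1, 11)],
    "working_professionals": [f"P{i:02d}" for i in range(1, 11)],
}

assignment = {}
for block_name, members in blocks.items():
    shuffled = members[:]        # copy so the original roster is untouched
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    # The experiment is replicated within each block: half treatment, half control.
    assignment[block_name] = {
        "treatment": sorted(shuffled[:half]),
        "control": sorted(shuffled[half:]),
    }

for block_name, groups in assignment.items():
    print(block_name, groups)
```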

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not.

This design represents a combination of the posttest-only and pretest-posttest control group designs, and is intended to test for the potential biasing effect of pretest measurement on posttest measures, which tends to occur in pretest-posttest designs but not in posttest-only designs.
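
The structure of the four groups can be laid out explicitly, as in the brief sketch below (group labels are arbitrary). Comparing the posttest scores of pretested and unpretested groups that received the same treatment reveals any biasing effect of the pretest itself.

```python
# The Solomon four-group design crosses pretesting with treatment:
# two groups get the treatment, two do not, and within each pair
# one group is pretested and the other is not.
groups = {
    "Group 1": {"pretest": True,  "treatment": True,  "posttest": True},
    "Group 2": {"pretest": True,  "treatment": False, "posttest": True},
    "Group 3": {"pretest": False, "treatment": True,  "posttest": True},
    "Group 4": {"pretest": False, "treatment": False, "posttest": True},
}

for name, plan in groups.items():
    steps = [step for step, included in plan.items() if included]
    print(f"{name}: {' -> '.join(steps)}")

# Comparing posttest scores of Group 1 vs. Group 3 (both treated) and of
# Group 2 vs. Group 4 (both untreated) shows whether merely taking the
# pretest altered posttest scores, the bias this design is meant to detect.
```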

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase. By the end of the study, all participants will have received the treatment either during the first or the second phase.

This design is most feasible in organizational contexts where organizational programs are implemented in phases across different groups.

Quasi-experimental designs are almost identical to true experimental designs but lack one key ingredient: random assignment.

For instance, one entire class section or one organization is used as the treatment group, while another section of the same class or a different organization in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group (say, by virtue of having had a better teacher in a previous semester), which introduces the possibility of selection bias.

Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to a variety of selection-related threats: selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the groups being differentially impacted by extraneous or historical events), selection-regression threat (the groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the groups responding differently to the measurement), selection-testing threat (the groups responding differently to the pretest), and selection-mortality threat (the groups demonstrating differential dropout rates).

Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible. Many true experimental designs can be converted to quasi-experimental designs by omitting random assignment. For instance, the quasi-experimental version of the pretest-posttest control group design is called the nonequivalent groups design (NEGD). Likewise, the quasi-experimental version of the switched replication design is called the non-equivalent switched replication design. In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins.

Some of the more useful of these designs are discussed next.

Regression-discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cutoff score on a preprogram measure. For instance, patients who are severely ill may be assigned to the treatment group to test the efficacy of a new drug or treatment protocol, while those who are mildly ill are assigned to the control group.

In another example, students who are lagging behind on standardized test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

In design notation, the RD design is similar to the pretest-posttest control group design, except that a cutoff score (C) on the preprogram measure, rather than random assignment, determines which group each subject enters. Because of the use of a cutoff score, it is possible that the observed results are a function of the cutoff score rather than the treatment, which introduces a new threat to internal validity. However, using the cutoff score also ensures that limited or costly resources are distributed to the people who need them the most rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment.
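
The assignment rule can be sketched as follows (the scores, cutoff value, and effect size are hypothetical): subjects falling below the cutoff on the preprogram measure receive the treatment, everyone else serves as the comparison group, and a jump in outcomes near the cutoff is what the RD analysis looks for.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Hypothetical preprogram measure, e.g., a standardized pretest score.
pretest = rng.normal(70, 10, n)
cutoff = 60.0

# Assignment is deterministic rather than random: scoring below the cutoff
# places a subject in the remedial (treatment) group.
treated = pretest < cutoff

# Hypothetical posttest: everyone gains a little, and the program adds an
# extra boost only for treated subjects.
posttest = pretest + 5 + 8.0 * treated + rng.normal(0, 3, n)

# Compare average gains just below vs. just above the cutoff; a jump right
# at the cutoff is the kind of discontinuity an RD analysis looks for.
gain = posttest - pretest
near_below = (pretest >= cutoff - 5) & (pretest < cutoff)
near_above = (pretest >= cutoff) & (pretest < cutoff + 5)
print("Mean gain just below cutoff (treated):  ", round(gain[near_below].mean(), 1))
print("Mean gain just above cutoff (untreated):", round(gain[near_above].mean(), 1))
```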

The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups.

Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity appears in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.

Proxy pretest design. In this design, no true pretest is collected before the treatment; instead, a prerecorded or retrospective measure serves as a proxy for the pretest. A typical application of this design is when a researcher is brought in to test the efficacy of a program after it has already started, so that pretest data cannot be collected directly.

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason.

For instance, suppose you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group.

If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction with a different set of customers after the program is implemented.

Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.

Nonequivalent dependent variable (NEDV) design. This is a single-group pre-post quasi-experimental design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other measure is not.

For example, a new calculus curriculum would be expected to improve students' calculus scores but not their algebra scores. However, the posttest algebra scores may still vary due to extraneous factors such as history or maturation. Hence, the pre-post algebra scores can be used as a control measure, while the pre-post calculus scores are treated as the treatment measure. This design is weak in internal validity, but its advantage lies in not having to use a separate control group. Next, we turn to manipulation checks, an important step in ensuring that the manipulation of the independent variable has been successful.
