# Control versus Contrast Group -- Selected Paper Sample

2016-01-30 Source: 51due tutor group Category: Paper sample

Selected paper sample from 51Due: "Control versus Contrast Group". A control group and an experimental group are identical in terms of all control variables, and it is reasonable to assume that the two groups are comparable on extraneous variables when an appropriate random-assignment procedure is carried out successfully. This sample discusses the experimental group in statistics: if male and female students were expected to perform the task differently, gender could serve as an additional control variable; gender might also be used as another independent variable, in which case its relevance could be tested. The choice of any variable is informed by the theoretical foundation of the experiment.

Abstract
That no contrast group can replace the control group may also be seen from Table 1. The control group and the experimental group are identical in terms of all the control variables. It is reasonable to assume that the two groups are comparable in terms of the extraneous variables to the extent that the completely randomized design is appropriate and that the random-assignment procedure is carried out successfully. Unlike the control group, the contrast group envisaged in the report has to differ from the experimental group in some respect in addition to the independent variable. The additional variable involved cannot be excluded as an alternative explanation. That is, there is bound to be a confounding variable in the contrast group; otherwise it would be a control group. The subject's gender is treated as an extraneous variable in Table 1.
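The random-assignment procedure underlying the completely randomized design can be sketched as follows; the subject labels and group sizes are hypothetical, chosen only to illustrate how random assignment makes the two groups comparable on extraneous variables.

```python
import random

def randomly_assign(subjects, seed=None):
    """Randomly split subjects into experimental and control groups.

    With random assignment, extraneous subject variables are expected
    to be comparable across the two groups (in the long run), which is
    what licenses treating the control group as a baseline.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 20 hypothetical subjects, S01..S20
subjects = [f"S{i:02d}" for i in range(1, 21)]
experimental, control = randomly_assign(subjects, seed=42)
```

Each subject ends up in exactly one group; no systematic rule decides who goes where, so no extraneous variable is built into the group difference.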

However, if there is a theoretical reason to expect that male and female students would perform differently on the task, gender would be controlled in one of several ways. First, gender may be used as an additional control variable (e.g., only male or only female students would be used). Second, gender may be used as another independent variable, in which case the relevance of gender may be tested by examining the interaction between acoustic similarity and gender. The third alternative is to use gender as a blocking variable, such that equal numbers of males and females are used in the two groups; which male (or female) is used in the experimental or control condition is determined randomly. In other words, the choice of any variable (be it the independent, control or dependent variable) is informed by the theoretical foundation of the experiment. This gives the lie to the report's treating matching or blocking variables as 'nuisance' variables.
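The third alternative (blocking) can be sketched concretely: within each gender block, members are assigned at random to the two conditions, so the blocking variable is balanced across conditions by construction. The block names and sizes below are hypothetical.

```python
import random

def block_and_assign(subjects_by_block, seed=None):
    """Randomly assign subjects to experimental/control within each block.

    Because each block (e.g., males, females) is split evenly between the
    two conditions, the blocking variable cannot confound the comparison.
    """
    rng = random.Random(seed)
    experimental, control = [], []
    for members in subjects_by_block.values():
        shuffled = members[:]
        rng.shuffle(shuffled)       # randomness decides who goes where...
        half = len(shuffled) // 2
        experimental += shuffled[:half]   # ...but the split is always even
        control += shuffled[half:]
    return experimental, control

blocks = {
    "male":   [f"M{i}" for i in range(1, 9)],   # 8 hypothetical males
    "female": [f"F{i}" for i in range(1, 9)],   # 8 hypothetical females
}
exp_group, ctrl_group = block_and_assign(blocks, seed=1)
```

Each group receives exactly four males and four females, while chance alone determines which individuals land in which condition.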

Giving the impossible meaning of "total control of variables" (Wilkinson & Task Force, 1999, p. 545) to 'control' exemplifies a striking feature of the report, namely, its indifference to theoretical relevancy. It is objectionable that the confusing and misleading treatment of the control group is used in the report as the pretext to "forgo the term ['control group'] and use 'contrast group' instead" (Wilkinson & Task Force, 1999, p. 595, explication in square brackets added). As was made explicit by Boring (1954, 1969), the control group serves to exclude artifacts or alternative explanations. The Task Force's recommendation to replace the control group with the contrast group is an invitation to weaken the inductive principle that underlies experimental control. Such a measure invites ambiguity by allowing confounds in the research. The ensuing damage to the internal validity of the research cannot be ameliorated by explaining 'the logic behind covariates included in their designs' (Wilkinson & Task Force, 1999, p. 600) or by describing how the contrast group is selected (pp. 594-597). Explaining or describing a confound is not excluding it.

Experimenter's Expectancy Effects Revisited
Despite the long-established findings of the effects of experimenter bias (Rosenthal, 1966), many published studies appear to ignore or discount these problems. For example, some authors or their assistants with knowledge of hypotheses or study goals screen participants (through personal interviews or telephone conversations) for inclusion in their studies. Some authors administer questionnaires. Some authors give instructions to participants. Some authors perform experimental manipulations. Some tally or code responses. Some rate videotapes. An author's self-awareness, experience, or resolve does not eliminate experimenter bias.

In short, there are no valid excuses, financial or otherwise, for avoiding an opportunity to double-blind. (Wilkinson & Task Force, 1999, p. 596)

As may be seen from the quote above, the report bemoans that psychologists do not heed Rosenthal's (1976) admonition about the insidious effects of the experimenter's expectancy effect (henceforth EEE). Psychologists are faulted for not describing how they avoid behaving in such a way that they would obtain the data they want. Given the report's faith in EEE, it helps to examine the evidential support for EEE by considering Table 2 with reference to the following comment:

But much, perhaps most, psychological research is not of this sort [the researcher collects data in one condition only, as represented by A, B, C, M, P or Q in Panel 1 of Table 2]. Most psychological research is likely to involve the assessment of the effects of two or more experimental conditions on the responses of the subjects [as represented by D, E, H or K in Panel 2 of Table 2]. If a certain type of experimenter tends to obtain slower learning from his subjects, the "results of his experiments" are affected not at all so long as his effect is constant over the different conditions of the experiment. *Experimenter effects on means do not necessarily imply effects on mean differences.* (Rosenthal, 1976, p. 110, explication in square brackets and emphasis in italics added)
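Rosenthal's italicized point can be illustrated numerically: if an experimenter's bias shifts every score by a constant amount in both conditions, the condition means change, but the mean difference (the experimental effect) does not. The scores and the size of the bias below are made up purely for illustration.

```python
def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Hypothetical scores under two experimental conditions
condition_a = [10.0, 12.0, 11.0, 13.0]   # mean = 11.5
condition_b = [14.0, 16.0, 15.0, 17.0]   # mean = 15.5

bias = 2.0  # a constant experimenter effect added to every score
biased_a = [x + bias for x in condition_a]
biased_b = [x + bias for x in condition_b]

# The means shift by the bias (an effect on means)...
assert mean(biased_a) == mean(condition_a) + bias
# ...but the mean difference is untouched (no effect on mean differences).
assert mean(biased_b) - mean(biased_a) == mean(condition_b) - mean(condition_a)
```

This is why an expectancy effect demonstrated on single-condition means (Panel 1 of Table 2) says nothing about the two-condition comparisons that constitute an experiment (Panel 2).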

The putative evidence for EEE came from Rosenthal and Fode (1963a, 1963b), the design of both of which is shown in Panel 1 of Table 2. In their 1963a studies, students in the "+5" expectation and "-5" expectation groups were asked to collect photo-rating data under one condition. Similarly, students collected 'rate of conditioning' data with rats in two expectation conditions in their 1963b study. Of interest is the comparison between the mean ratings of the two groups of students. A significant difference in the expected direction was reported between the two means, X̄₊₅ and X̄₋₅, in both studies. Note that the said significant difference is an effect on means, not an effect on mean difference, in Rosenthal's (1976) terms. Moreover, Rosenthal (1976) also noted correctly that the schema depicted in Panel 1 is not the structure of psychological experiments. That is, Individuals A, B, C, M, P and Q in Panel 1 should not be characterized as 'experimenters' at all because they did not conduct an experiment. While the two studies were experiments to Rosenthal and Fode (1963a, 1963b), the studies were mere measurement exercises to their students. In other words, Rosenthal and Fode's (1963a, 1963b) data cannot be used as evidential support for EEE.

What is required, as noted in the italicized emphasis above, are data collected in accordance with the meta-experiment schema depicted in Panel 2 of Table 2. While Chow (1994) was the investigator who conducted a meta-experiment (i.e., an experiment about conducting the experiment), D, E, H and K were experimenters because they collected data in two conditions that satisfied the constraints depicted in Table 1. When experimental data were collected in such a meta-experiment, Chow (1994) found no support for EEE: there was no expectancy effect on the mean difference in the meta-experiment. That is, EEE owes its apparent attractiveness to the casual way in which 'experiment' is used to refer to any empirical research. The experiment is a special kind of empirical research, namely, one in which data are collected in two or more conditions that are identical (or comparable) in all respects except one (viz., the aspect represented by the independent variable).

Summary and Conclusions
It is true that "each form of research has its own strengths, weaknesses, and standard of practice" (Wilkinson & Task Force, 1999, p. 594). However, this state of affairs does not invalidate the fact that some research methods yield less ambiguous data than others. Nor does it follow that all methodological weaknesses are equally tolerable if the researcher aims at methodological validity and conceptual rigor. Having a standard of practice per se is irrelevant to the validity of the research method. To introduce the criteria of being valuable or credible in methodological discussion is misleading because "being valuable" or "being credible" is not a methodological criterion. Moreover, "being valuable" or "being credible" may be in the eye of the beholder. This state of affairs is antithetical to objectivity.
