      The method section typically consists of a number of subsections, usually separated with subheads (for more detail on writing a method section, see Chapter 14). The first subsection, typically called Participants when humans were studied and Subjects when animals were, provides information about who or what was included in the research. You will read how the participants were recruited into the study or how the animals were obtained. Demographic information will be included, such as age, gender, race, and education level. If nonhuman subjects were used, details of care and housing will be described. Of course, you will also read how many participants were included and what type of sampling method was used. You can find more information about sampling in Chapter 6.

      You may find a subsection called Materials and/or Apparatus. Here, you will find manufacturers and model numbers of equipment and apparatus, details of tests and measures and the conditions of testing, and often a description of any stimulus materials used. Somewhere in the method section, you will read a description of the research design and the statistical tests. It does not necessarily make for good reading, but the purpose is to provide fine detail, not to entertain. It is in the method section that we find out exactly what was done. The procedure is described in a subsection called, well, Procedure. This is often written as a chronological sequence of what happened to the participants. Again, as with all the subsections of the method section, it is written in painstaking detail.

      In our example article, Knez (2001) tells us he studied 54 women and 54 men, all 18 years old and all in high school. The basic design is described as a factorial between-subjects (his word, not ours) design (see Chapter 7) with three different lights and two genders. He also describes the testing conditions, details of the lighting (the IV), and the various dependent measures. He describes the procedure of the experiment, providing the time of day of testing, the information participants were given, and the order in which the tests were administered. At the end of the section, we have enough information that we could probably replicate the study exactly as he had done.

      After reading the introduction and the method section, we now know quite a bit about the research area, the current researcher’s study, and how he or she carried it out. It is time to find out what happened.

      The Results

      The results section is the most exciting part of the article. This is where we learn whether the data support the research hypothesis. Typically, the section begins with a general statement addressing that very point (i.e., did the data support or fail to support the researcher’s hypothesis?). As with the other sections, more detail on writing an article is presented in Chapter 14.

      The results section is the most important section of a research article. Unfortunately, students can become overwhelmed by all the statistics. Even students who have done very well in their statistics courses can find the results sections of most research articles impossible to understand. The problem is that introductory statistics courses cover only basic statistics, such as measures of central tendency (the mean, median, and mode) and measures of variability (the range, variance, and standard deviation). Of course, these statistics will appear in the results section and are widely used for describing and summarizing the data.

      The problem is often in the tests of significance. Your basic statistics course probably covered the z test, t test, analysis of variance (ANOVA), and associated F test. You may have also learned about correlation and simple regression and, perhaps, chi-square tests. These are good statistics and are sometimes used in research, but, unfortunately, when you read the literature, you will find statistical tests that you may never have heard of. We do not intend to teach advanced statistics here, but we do want to provide you with a conceptual understanding of these statistics, so that when you read the literature, you will have at least a basic grasp of these procedures. So, as briefly as we can, we are going to review statistics. No, you do not need a calculator; this review is at a conceptual level only, but in Chapter 13, we provide you with more of the nitty-gritty of basic statistics, which you may need to do a research project of your own.

      In research, statistics are used for two purposes. The first is to summarize all the data and to make it simpler to talk about the outcome of research. These are typically called descriptive statistics. The second purpose is to test research hypotheses, and these are called inferential statistics.
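      To make that distinction concrete, here is a minimal Python sketch. The scores, group labels, and the use of SciPy’s independent-samples t test are our own illustration, not drawn from any study discussed here: the descriptive statistics summarize each group, and the inferential test asks whether the difference between the group means is larger than chance alone would produce.

```python
# Illustrative sketch only: the scores and group labels are invented.
from statistics import mean, stdev

from scipy import stats  # assumed available; used here for the t test

# Hypothetical test scores under two different conditions.
group_a = [72, 75, 68, 80, 77, 74]
group_b = [65, 70, 66, 72, 69, 64]

# Descriptive statistics: summarize each group's data.
print(f"Group A: M = {mean(group_a):.2f}, SD = {stdev(group_a):.2f}")
print(f"Group B: M = {mean(group_b):.2f}, SD = {stdev(group_b):.2f}")

# Inferential statistics: test whether the difference between the two
# means is larger than we would expect by chance alone.
t_statistic, p_value = stats.ttest_ind(group_a, group_b)
print(f"Independent-samples t test: t = {t_statistic:.2f}, p = {p_value:.3f}")
```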

      Descriptive Statistics

      Descriptive statistics include measures of central tendency, variability, and the strength of the relationship between variables. The mean, median, and mode are the most common measures of central tendency. The mean (symbolized as M) is the arithmetic average. It is what we report when talking about the class average on a test, for example. The median (symbolized as Mdn) is the value that half the observations (or scores) fall above and half fall below. It is the middle score in a distribution of scores arranged from lowest to highest. The median is often reported when a distribution of scores is not bell shaped (i.e., not a normal distribution). The mode (symbolized as Mo) is the most frequently occurring score or value in your data. The mode gives us a measure of the typical value in the distribution. For example, if you were making a “one-size-fits-all” pair of eyeglasses, you would want the mode for head size. Each measure of central tendency uses a different approach to describe the average of a group of scores.
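      If you like to see the arithmetic, the following short Python sketch (with invented exam scores) computes all three measures of central tendency using the standard library’s statistics module.

```python
# Illustrative sketch only: a small set of invented exam scores.
from statistics import mean, median, mode

scores = [62, 70, 70, 75, 78, 81, 95]

print(mean(scores))    # M: arithmetic average (about 75.86)
print(median(scores))  # Mdn: middle score when ordered lowest to highest (75)
print(mode(scores))    # Mo: most frequently occurring score (70)
```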

      The most common statistics used for describing variability in data are the range, variance, and standard deviation. The range is reported either as the highest and lowest scores or as a single value that is the distance between them. On an exam, you may ask what the highest score was, and perhaps out of morbid curiosity, you may want to know the lowest score as well. The range is an appropriate measure of variability for some types of data, but it is quite crude. For example, there may be one very high score and one very low score, and the range will not indicate that perhaps all the other scores were concentrated very near the mean. Two related measures of variability provide this information. The variance and its square root, the standard deviation (symbolized as SD), provide a measure of the average distance scores are from the mean. With data that are bell shaped or normally distributed, the standard deviation tells us where the bulk of the scores fall; about two thirds of the scores fall between 1 standard deviation above the mean and 1 standard deviation below the mean. More detail on the calculation and appropriate selection of these statistics is given in Chapters 4 and 13.
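      The following Python sketch (again with invented scores) illustrates the point: the range suggests a wide spread, while the mean and standard deviation show that most of the scores actually cluster close together.

```python
# Illustrative sketch only: invented scores with two extreme values.
from statistics import mean, stdev, variance

scores = [40, 74, 75, 75, 76, 76, 77, 98]

print(max(scores) - min(scores))  # range: 58, which exaggerates the spread
print(mean(scores))               # mean: most scores sit close to this value
print(variance(scores))           # sample variance (average squared deviation)
print(stdev(scores))              # standard deviation: typical distance from the mean
```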

      Often you will read research articles that describe the degree to which variables are related to one another. The most common measure of association is the Pearson product–moment correlation (symbolized as r). This statistic describes how strongly (or weakly) variables are related to one another. For example, if two variables are perfectly correlated, the r value will be 1 or –1. The sign of the number indicates the direction of the relationship. A positive correlation tells us that the variables are directly related; as one variable increases, so does the other, and as one variable decreases, so does the other. A negative correlation tells us that the variables are inversely related. That is, as one variable increases, the other decreases, and as one variable decreases, the other increases. The magnitude of r tells us how strongly the variables are related. A zero correlation tells us that the variables are not related at all; as the value increases to +1 or decreases to –1, the strength of the relationship increases. A correlation of 1 (either positive or negative) is called a perfect correlation. Be aware that perfect correlations never actually occur in the real world. If they do, it usually means that you have inadvertently measured the same variable twice and correlated the data. For example, you would likely get a correlation