
Wednesday, 28 September 2016

LONGITUDINAL STUDIES




In mass media research, the first major longitudinal study was conducted by Lazarsfeld, Berelson, and Gaudet (1944) during the 1940 presidential election.
TYPES OF LONGITUDINAL STUDIES
The three main types of longitudinal studies are trend study, cohort analysis, and panel study. Each is discussed in this section.
Trend Studies
The trend study is probably the most common type of longitudinal study in mass media research. Recall that a trend study samples different groups of people at different times from the same population.

Trend studies are useful, but they have limitations. Suppose that a sample of adults is selected three months before an election and 57% report that they intend to vote for Candidate A and 43% for Candidate B. A month later, a different sample drawn from the same population shows a change: 55% report that they are going to vote for A and 45% for B.



This is a simple example of a trend study:

Candidate    3 months before election    1 month before    1 week before
A            57%                         55%               45%
B            43%                         45%               55%
 To determine both the gross change and the net change, a panel study is necessary.
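The difference between the two is easy to see with a small, hypothetical two-wave panel (the respondents and numbers below are invented for illustration): net change is the shift in the aggregate percentages between waves, while gross change counts every individual switch, even when switches in opposite directions cancel out. A minimal Python sketch:

```python
# Hypothetical two-wave panel: each position is one respondent's vote intention
# at wave 1 (three months before the election) and wave 2 (one month before).
wave1 = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"]
wave2 = ["A", "A", "A", "A", "B", "B", "A", "B", "B", "B"]
n = len(wave1)

# Net change: the shift in Candidate A's aggregate share between the two waves.
share_a_w1 = wave1.count("A") / n        # 0.60
share_a_w2 = wave2.count("A") / n        # 0.50
net_change = share_a_w2 - share_a_w1     # -0.10, a 10-point net loss for A

# Gross change: the share of individuals who switched in either direction.
switchers = sum(1 for v1, v2 in zip(wave1, wave2) if v1 != v2)
gross_change = switchers / n             # 0.30, 3 of 10 respondents moved

print(f"Net change for A: {net_change:+.0%}")
print(f"Gross change:     {gross_change:.0%}")
```

In this made-up panel, Candidate A loses 10 points net, but 30% of respondents actually changed their minds; a trend study reveals only the first figure.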
Advantages.
·         Trend studies are valuable in describing long-term changes in a population.
·         They can establish a pattern over time to detect shifts and changes in some event.
·         They can be based on a comparison of survey data originally constructed for other purposes.


Disadvantages.
If data are unreliable, false trends will show up in the results. Trend analysis must be based on consistent measures.
Examples of Trend Studies.
One example is a trend study about newspaper reading and attitudes toward ethnic minorities that spanned five years.
Cohort Analysis
Cohort analysis attempts to identify a cohort effect: are changes in the dependent variable due to aging, or are they present because the sample members belong to the same cohort?

To illustrate, suppose that 50% of college seniors report that they regularly read news magazines, whereas only 10% of college freshmen in the same survey give this answer. How might the difference be accounted for? One explanation is that freshmen change their reading habits as they progress through college. Each survey has different participants—the same people are not questioned again, as in a panel study—but each sample represents the same group of people at different points in their college career.

Reading habit

Year    Seniors    Freshers
2014    68         32
2015    55         45
2016    60         40
Typically, a cohort analysis involves data from more than one cohort. The table below displays news magazine readership for a number of birth cohorts; reading down a column, the entries vary by age, and reading across a row, they vary by the year of data collection. This type of table allows a researcher to make three types of comparisons. First, reading down a single column is analogous to a cross-sectional comparison: it shows how readership differs across age groups in a single survey year.

Percentage of Adults Who Regularly Read News Magazines

Age      1992    1996    2000
18-21    15      12      10
22-25    34      32      28
26-29    48      44      35
A "pure" period effect. There is no variation by age at any period; the columns are identical, and the varia­tions from one period to the next are identi­cal. Furthermore, the change in each cohort (read diagonally to the right) is the same as the average change in the total population.


Table 8.2: Cohort Table Showing a Pure Age Effect

Age        1992    1996
18-21      15      10
22-25      20      15
26-29      25      20
Average    20      15

Second, reading across a row shows the trend at a given age level as one cohort replaces another over time. Influences associated with membership in a particular birth cohort are called cohort effects.

The table below shows a "pure" cohort effect. Here the cohort diagonals are constant, and the variation from younger to older respondents is in the opposite direction from the variation from earlier to later survey periods. In this table, the key variable seems to be date of birth: among those who were born between 1971 and 1974, news magazine readership was 15% regardless of their age or when they were surveyed.


Third, reading diagonally toward the right reveals changes in a single cohort from one time to another (an intracohort study). Finally, influences associated with each particular time period are called period effects.

Cohort Table Showing a Pure Cohort Effect

Age        1992    1996    2000
18-21      15      10      5
22-25      20      15      10
26-29      25      20      15
Average    20      15      10
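As a rough illustration of the three readings just described, the sketch below stores the pure-cohort-effect table in a Python list of lists and pulls out one column (a cross-sectional comparison), one row (an age-level trend), and one diagonal (an intracohort comparison); the variable names are my own.

```python
# Pure-cohort-effect table from the text: rows are age groups, columns are survey years.
years = [1992, 1996, 2000]
ages = ["18-21", "22-25", "26-29"]
table = [
    [15, 10, 5],    # 18-21
    [20, 15, 10],   # 22-25
    [25, 20, 15],   # 26-29
]

# 1. Reading down a column: compares age groups within a single survey year.
column_1992 = [row[0] for row in table]                  # [15, 20, 25]

# 2. Reading across a row: the trend at one age level as cohorts replace one another.
row_18_21 = table[0]                                     # [15, 10, 5]

# 3. Reading the diagonal: one birth cohort followed as it ages (intracohort study).
diagonal = [table[i][i] for i in range(len(years))]      # [15, 15, 15] -> pure cohort effect

print("Cross-section, 1992:", dict(zip(ages, column_1992)))
print("Trend for 18-21:", dict(zip(years, row_18_21)))
print("Cohort born 1971-1974:", dict(zip(years, diagonal)))
```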



Advantages.
1.      Cohort analysis is an appealing and useful technique because it is highly flexible.
2.      It provides insight into the effects of maturation and social, cultural, and political change.
3.      A cohort analysis can be less expensive than experiments or surveys.
Disadvantages.
The major disadvantage of cohort analysis is that the specific effects of age, cohort, and period are difficult to untangle through purely statistical analysis of a standard cohort table.
1.      In survey data, much of the variation in percentages among cells is due to sampling variability. There are no uniformly accepted tests of significance appropriate to a cohort table that allow researchers to estimate the probability that the observed differences are due to chance.

2.      A second disadvantage of the technique is sample mortality. If a long period is involved or if the specific sample group is difficult to reach, the researcher may have some empty cells in the cohort table or some that contain too few members for meaningful analysis.
Examples of Cohort Analysis. Cohort analysis is useful in the study of public opinion.


Sunday, 18 September 2016

RESEARCH ERROR - SAMPLING ERROR

RESEARCH ERROR
There are two broad types of error present in all research:
(1) SAMPLING ERROR, or error related to selecting a sample from a population; and
(2) NONSAMPLING ERROR, such as measurement errors and data analysis errors.

Measurement error: Some of the most common measurement errors include:
·        A poorly designed measurement instrument
·        Asking respondents the wrong questions or asking questions incorrectly.


Measurement error is further divided into two categories:

Random error: Random error relates to problems where measurements and analyses vary inconsistently from one study to another.

Systematic error: Systematic error consistently produces incorrect (invalid) results in the same direction, or in the same context, and is therefore predictable.
Because systematic error is predictable, researchers should try to identify its causes and eliminate their influence. Other common sources of error include:


·       Faulty data collection equipment
·       Untrained data collection personnel
·       Using only one type of measurement instead of multiple measures
·       Data input errors

SAMPLING ERROR
Sampling error is an estimate of the difference between observed and expected measurements and is the foundation of all research interpretation.
Sampling error provides an indication of how close the data from a sample are to the population mean. A low sampling error indicates that there is less variability or range in the sampling distribution.

A theoretical sampling distribution is the set of all possible samples of a given size. This distribution of values is described by a bell-shaped curve or normal curve (also known as a Gaussian distribution, after Karl F. Gauss, a German mathematician and astronomer who used the concept to analyze observational errors).

There are two important terms related to computing errors due to sampling:
standard error (designated as SE) and
sampling error, which is also referred to as margin of error or confidence interval (designated as se, m, or CI).
Standard error relates to the population and how samples relate to that population. Standard error is closely related to sample size: as sample size increases, the standard error decreases.


Confidence Level and Confidence Interval
Sampling error involves two concepts:
Confidence Level
and
Confidence Interval.

The confidence level indicates a degree of certainty (as a percentage) that the results of a study fall within a given range of values. Typical confidence levels are 95% and 99%.

The confidence interval is a plus-or-minus percentage that defines a range around the sample result at the chosen confidence level. For example, if a 5% confidence interval is used and 50% of the sample gives a particular answer for a question, the actual result for that question falls between 45% and 55% (50 ± 5).
  
In every normal distribution, the standard deviation defines a standard unit of distance from the mean of the distribution to its outer limits. These standard deviation units (z-values) are used in establishing the confidence interval that is accepted in a research project. In addition, the standard deviation units indicate the amount of standard error. For example, a range of plus or minus 1 standard deviation unit (1 standard error) means that 68% of the samples selected from the population will produce estimates within that distance from the population value (see Figure 4.3).
Computing Sampling Error
There are several ways to compute sampling error, but no single method is appropriate for all sample types or all situations.
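One common approach, for a simple random sample and a question answered as a percentage, is the standard error of a proportion, SE = sqrt(p(1 - p)/n), multiplied by the z-value for the chosen confidence level (roughly 1.0 for 68%, 1.96 for 95%, 2.58 for 99%). The sketch below is only an illustration of that formula, reusing the 50% answer from the earlier example and assuming a sample of 400:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion (simple random sample assumed).

    p -- observed proportion (e.g. 0.50 for 50%)
    n -- sample size
    z -- z-value for the confidence level (about 1.0 for 68%, 1.96 for 95%, 2.58 for 99%)
    """
    standard_error = math.sqrt(p * (1 - p) / n)
    return z * standard_error

p, n = 0.50, 400                      # 50% of a sample of 400 give a particular answer
moe = margin_of_error(p, n)           # 95% confidence level by default
print(f"Standard error: {math.sqrt(p * (1 - p) / n):.3f}")        # about 0.025
print(f"95% interval:   {p - moe:.1%} to {p + moe:.1%}")          # about 45.1% to 54.9%
```

Note how the result echoes the earlier 50 ± 5 example, and how a larger n shrinks the standard error, as stated above.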

Sampling error is an important concept in all research areas because it provides an indication of the degree of accuracy of the research and offers some explanation of the error involved.

POPULATION AND SAMPLE. TYPES OF SAMPLING PROCEDURES

A group or class of subjects, variables, concepts, or phenomena is called a population.
In some cases, an entire class or group is investigated; the process of examining every member of a population is called a census.
A sample is a subset of the population that is representative of the entire population.

SAMPLE SIZE
Determining an adequate sample size is one of the most controversial aspects of sampling. The size of the sample required for a study depends on one or more of the following seven factors:
(1) project type,
(2) project purpose,
 (3) project complexity,
(4) amount of error tolerated,
(5) time constraints,
(6) financial constraints, and
(7) previous research in the area.

1. A primary consideration in determining sample size is the research method used. Focus groups (see Chapter 5) use samples of 6-12 people, but the results are not intended to be generalized to the population from which the respondents are selected. Samples with 10-50 subjects are commonly used for pretesting measurement instruments and pilot studies, and for conducting studies that will be used only for their heuristic value.

2.    Researchers often use samples of 50, 75, or 100 subjects per group, or cell. More than likely, the client would accept 50 respondents in each of the eight cells, producing a sample of 400 (8 × 50).
3.    Cost and time considerations always control sample size. Although researchers may wish to use a sample of 1,000 for a survey, the economics of such a sample are usually prohibitive.
4.    Most research is conducted using a sample size that conforms to the project's budget. Researchers may be wise to consider using smaller samples for most projects.
5.    Multivariate studies require larger samples than do univariate studies because they involve analyzing multiple response data.
6.    For panel studies, central location testing, focus groups, and other prerecruit projects, researchers should always select a larger sample than is actually required.
7.    Previous research in the area also guides sample size: if a survey is planned and similar research indicates that a representative sample of 400 has been used regularly with reliable results, then a sample larger than 400 may be unnecessary. Generally speaking, the larger the sample, the better; however, a large unrepresentative sample (the Law of Large Numbers notwithstanding) is as meaningless as a small unrepresentative sample.

TYPES OF SAMPLING PROCEDURES
There are a variety of sampling methods available for researchers. We first need to discuss the two broad categories of sampling: probability and nonprobability.
Probability and Nonprobability Sampling
Probability sampling uses mathematical guidelines whereby each unit's chance for selection is known.
Nonprobability sampling does not follow the guidelines of mathematical probability.

There are four issues to consider when deciding whether to use probability or non-probability sampling:
·  Purpose of the study.
·  Cost versus value.
·  Time constraints.
·  Amount of acceptable error.

Types of Nonprobability Sampling
Mass media researchers frequently use nonprobability sampling, particularly in the form of available samples.
An available sample (also known as a convenience sample) is a collection of readily accessible subjects, elements, or events for study.
In most situations, available samples should be avoided because of the bias introduced by the respondents' proximity to the research situation, but available samples can be useful for pretesting questionnaires and for other preliminary (pilot study) work.

The purposive sample. Purposive samples are used frequently in mass media studies: researchers select respondents who use a specific medium and ask them specific questions about that medium. A purposive sample is chosen with the knowledge that it is not representative of the general population.
For example, a researcher might be interested in finding out how other internet providers differ from geo in accessibility.

Snowball Sampling. A researcher randomly contacts a few qualified respondents and then asks these people for the names of friends, relatives, or acquaintances they know who may also qualify for the research study.

Types of Probability Samples
The most basic type of probability sampling is the simple random sample, where each subject, element, event, or unit in the population has an equal chance of being selected.
Sampling with replacement is often used in more complicated research studies such as nationwide surveys. For example, a researcher who wants to analyze 10 prime-time television programs out of a population of 100 programs to determine how the medium portrays elderly people can take a random sample from the 100 programs by numbering each show from 00 to 99 and then selecting 10 numbers from a table of random numbers, such as the brief listing in Table 4.1.
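A minimal sketch of that example, using Python's random module in place of a printed random-number table (Table 4.1 is not reproduced here); the draw here is made without replacement, so no program can be selected twice and every program has an equal chance of being included:

```python
import random

# Population: 100 prime-time programs, numbered 00 through 99 (placeholder titles).
programs = {f"{i:02d}": f"Program {i:02d}" for i in range(100)}

# Simple random sample of 10 programs, drawn without replacement: every program
# has an equal chance of selection and none can be drawn twice.
selected_numbers = random.sample(sorted(programs), k=10)
sample = [programs[number] for number in selected_numbers]

print(sorted(selected_numbers))
print(sample[:3])
```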
SIMPLE RANDOM SAMPLING
Advantages
1.     Detailed knowledge of the population is not required.
2.     External validity may be statistically inferred.
3.     A representative group is easily obtainable.
4.     The possibility of classification error is eliminated.
Disadvantages
1.     A list of the population must be compiled.
2.     A representative sample may not result in all cases.
3.     The procedure can be more expensive than other methods.

SYSTEMATIC SAMPLING
Advantages
1.     Selection is easy.
2.     Selection can be more accurate than in a simple random sample.
3.     The procedure is generally inexpensive.

Disadvantages
1.     A complete list of the population must be obtained.
2.     Periodicity (arrangement or order of list) may bias the process.
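Systematic sampling is typically carried out by choosing a random starting point and then taking every kth unit from the list, where k is the population size divided by the desired sample size. A rough sketch with a made-up subscriber list is below; note that if the list has a repeating order, every kth pick can land on the same kind of element, which is the periodicity bias noted in the list above.

```python
import random

def systematic_sample(population, sample_size):
    """Take every kth element after a random start, where k = N // n."""
    k = len(population) // sample_size       # sampling interval
    start = random.randrange(k)              # random starting point within the first interval
    return [population[i] for i in range(start, len(population), k)][:sample_size]

# Hypothetical list of 1,000 subscribers sampled down to 100.
subscribers = [f"subscriber_{i}" for i in range(1000)]
sample = systematic_sample(subscribers, 100)
print(len(sample), sample[:3])
```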

ADDRESS-BASED SAMPLING (ABS): A third type of random selection method, called address-based sampling (ABS), is used to recruit sample households. The method uses randomly selected addresses. There are several methods to develop random numbers or households, but two rules always apply:
(1) each unit or subject in the population must have an equal chance of being selected, and
(2) the selection procedure must be free from subjective intervention by the researcher.

A STRATIFIED SAMPLE is the approach used to get adequate representation of a subsample. The characteristics of the subsample (strata or segment) may include almost any variable: age, gender, religion, income level, or even individuals who listen to specific radio stations or read certain magazines. The strata may be defined by an almost unlimited number of characteristics; however, each additional variable or characteristic makes the subsample more difficult to find and substantially increases the cost of locating the sample.
Stratified sampling ensures that a sample is drawn from a homogeneous subset of the population—that is, from a population that has similar characteristics. Homogeneity helps researchers to reduce sampling error.
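A rough sketch of proportionate stratified sampling, using a hypothetical population with an age-group field as the stratum variable (the data, field, and function names are invented): the population is divided into homogeneous strata and a simple random sample of the same fraction is drawn from each, so every stratum is represented.

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, fraction):
    """Draw the same fraction from every stratum (proportionate stratified sampling)."""
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)        # group units into homogeneous strata
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(random.sample(members, k))     # simple random sample within each stratum
    return sample

# Hypothetical population: (name, age group) pairs; age group is the stratum variable.
population = [(f"person_{i}", random.choice(["18-21", "22-25", "26-29"])) for i in range(300)]
sample = stratified_sample(population, stratum_of=lambda unit: unit[1], fraction=0.10)
print(len(sample))
```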
Stratified Sampling
Advantages
1.     Representativeness of relevant variables is ensured.
2.     Comparisons can be made to other populations.
3.     Selection is made from a homogeneous group.
4.     Sampling error is reduced.
Disadvantages
1.     Knowledge of the population prior to selection is required.
2.     The procedure can be costly and time-consuming.
3.     It can be difficult to find a sample if incidence is low.
4.     Variables that define strata may not be relevant.

CLUSTER SAMPLING: Selecting the sample in groups or categories is known as cluster sampling. For example, analyzing the magazine readership habits of people in Tamilnadu would be time-consuming and complicated if individual subjects were randomly selected. With cluster sampling, the state can be divided into districts, and groups of people can be selected from each area.
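A minimal sketch of the Tamilnadu example with made-up district names: rather than enumerating every reader in the state, a few districts (clusters) are selected at random and respondents are then sampled only within those districts.

```python
import random

# Hypothetical clusters: districts mapped to the people who live in them.
districts = {
    "District A": [f"A_resident_{i}" for i in range(200)],
    "District B": [f"B_resident_{i}" for i in range(150)],
    "District C": [f"C_resident_{i}" for i in range(250)],
    "District D": [f"D_resident_{i}" for i in range(100)],
}

# Stage 1: randomly select a subset of the clusters (here, 2 of the 4 districts).
chosen = random.sample(sorted(districts), k=2)

# Stage 2: draw a simple random sample of people within each chosen cluster.
sample = []
for district in chosen:
    sample.extend(random.sample(districts[district], k=30))

print(chosen, len(sample))
```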

Cluster Sampling
Advantages
1.     Only part of the population need be enumerated.
2.     Costs are reduced if clusters are well defined.
3.     Estimates of cluster parameters are made and compared to the population.
Disadvantages
1.     Sampling errors are likely.
2.     Clusters may not be representative of the population.

3.     Each subject or unit must be assigned to a specific cluster.