Sunday, 18 September 2016


There are two broad types of error present in all research:
(1) sampling error, or error related to selecting a sample from a population; and
(2) measurement error and data-analysis error, or error related to how variables are measured and analyzed.

Measurement error: Some of the most common measurement errors include:
·        A poorly designed measurement instrument
·        Asking respondents the wrong questions or asking questions incorrectly.

Measurement error is further divided into two categories:

Random error: Random error relates to problems in which measurements and analyses vary inconsistently from one study to another; because it is inconsistent, it is unpredictable.

Systematic error: Systematic error consistently produces incorrect (invalid) results in the same direction, or same context, and is, therefore, predictable.
Because it is predictable, researchers can often identify the causes of systematic error and eliminate their influence. Common sources of such errors include:

·       Faulty data collection equipment
·       Untrained data collection personnel
·       Using only one type of measurement instead of multiple measures
·       Data input errors

Sampling error is found by estimating the difference between observed and expected measurements; it is the foundation of all research interpretation.
Sampling error provides an indication of how close the data from a sample are to the population mean. A low sampling error indicates that there is less variability or range in the sampling distribution.

A theoretical sampling distribution is the set of all possible samples of a given size. This distribution of values is described by a bell-shaped curve or normal curve (also known as a Gaussian distribution, after Karl F. Gauss, a German mathematician and astronomer who used the concept to analyze observational errors).

There are two important terms related to computing errors due to sampling:
(1) standard error (designated SE), and
(2) sampling error, which is also referred to as margin of error or confidence interval (designated se, m, or CI).
Standard error relates to the population and how samples relate to that population. Standard error is closely related to sample size: as sample size increases, the standard error decreases.
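The relationship between sample size and standard error can be made concrete. A minimal Python sketch, estimating the standard error of the mean as s/√n (the data values are hypothetical):

```python
import math

def standard_error(sample):
    """Estimate the standard error of the mean: s / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / math.sqrt(n)

# As sample size grows, the standard error shrinks:
small = [2, 4, 4, 4, 5, 5, 7, 9]   # n = 8
large = small * 4                  # same spread of values, n = 32
```

Quadrupling the number of observations leaves the spread of values roughly unchanged but roughly halves the standard error, which is the relationship the text describes.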

Confidence Level and Confidence Interval
Sampling error involves two concepts:
Confidence Level
Confidence Interval.

The confidence level indicates a degree of certainty (as a percentage) that the results of a study fall within a given range of values. Typical confidence levels are 95% and 99%.

The confidence interval is a plus-or-minus percentage that defines a range within the confidence level. For example, if a 5% confidence interval is used, and 50% of the sample gives a particular answer to a question, the actual result for that question falls between 45% and 55% (50 ± 5).
In every normal distribution, the standard deviation defines a standard unit of distance from the mean of the distribution to the outer limits of the distribution. These standard deviation units (z-values) are used in establishing the confidence interval accepted in a research project. In addition, the standard deviation units indicate the amount of standard error. For example, a confidence interval of +1 or −1 standard deviation unit (1 standard error) means that 68% of the samples selected from the population will produce estimates within that distance from the population value (see Figure 4.3).
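The z-values for the common confidence levels can be applied directly to a sample estimate. A minimal sketch (the mean and standard error used in the test are hypothetical):

```python
# Standard deviation units (z-values) for common confidence levels:
# +/-1 SD covers about 68% of samples, +/-1.96 covers 95%, +/-2.58 covers 99%.
Z_VALUES = {68: 1.0, 95: 1.96, 99: 2.58}

def confidence_interval(mean, se, level=95):
    """Return the (low, high) range for the given confidence level."""
    z = Z_VALUES[level]
    return (mean - z * se, mean + z * se)
```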
Computing Sampling Error
There are several ways to compute sampling error, but no single method is appropriate for all sample types or all situations.
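One common approach, assuming a simple random sample and a reported proportion, is the margin-of-error formula m = z·√(p(1−p)/n). A sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion (z = 1.96 for 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# With n = 400 and a 50% response, the margin is about +/- 4.9 points,
# which is roughly the "+/- 5%" figure used in the text's example.
```

Note how the margin shrinks as n grows: quadrupling the sample halves the margin of error.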

Sampling error is an important concept in all research areas because it indicates the degree of accuracy of the research and offers some explanation of the error involved.


A group or class of subjects, variables, concepts, or phenomena is called a population.
In some cases, an entire class or group is investigated; the process of examining every member of a population is called a census.
A sample is a subset of the population that is representative of the entire population.

Determining an adequate sample size is one of the most controversial aspects of sampling. The size of the sample required for a study depends on one or more of the following seven factors:
(1) project type,
(2) project purpose,
 (3) project complexity,
(4) amount of error tolerated,
(5) time constraints,
(6) financial constraints, and
(7) previous research in the area.

1. A primary consideration in determining sample size is the research method used. Focus groups (see Chapter 5) use samples of 6-12 people, but the results are not intended to be generalized to the population from which the respondents are selected. Samples with 10-50 subjects are commonly used for pretesting measurement instruments and pilot studies, and for conducting studies whose results are intended only for heuristic value.

2.    Researchers often use samples of 50, 75, or 100 subjects per group, or cell. For example, if a design involves eight cells, a client would more than likely accept 50 respondents in each cell, producing a sample of 400 (8 × 50).
3.    Cost and time considerations always control sample size. Although researchers may wish to use a sample of 1,000 for a survey, the economics of such a sample are usually prohibitive.
4.    Most research is conducted using a sample size that conforms to the project's budget. Researchers may be wise to consider using smaller samples for most projects.
5.    Multivariate studies require larger samples than do univariate studies because they involve analyzing multiple response data.
6.    For panel studies, central location testing, focus groups, and other prerecruit projects, researchers should always select a larger sample than is actually required, since some recruited respondents inevitably drop out or fail to appear.
7.    Previous research is a useful guide: if a survey is planned and similar research indicates that a representative sample of 400 has been used regularly with reliable results, then a sample larger than 400 may be unnecessary. Generally speaking, the larger the sample, the better; however, a large unrepresentative sample is as meaningless as a small unrepresentative sample.

There are a variety of sampling methods available to researchers. We first need to discuss the two broad categories of sampling: probability and nonprobability.
Probability and Nonprobability Sampling
Probability sampling uses mathematical guidelines whereby each unit's chance for selection is known.
Nonprobability sampling does not follow the guidelines of mathematical probability.

There are four issues to consider when deciding whether to use probability or non-probability sampling:
·  Purpose of the study.
·  Cost versus value.
·  Time constraints.
·  Amount of acceptable error.

Types of Nonprobability Sampling
Mass media researchers frequently use nonprobability sampling, particularly in the form of available samples.
An available sample (also known as a convenience sample) is a collection of readily accessible subjects, elements, or events for study.
In most situations, available samples should be avoided because of the bias introduced by the respondents' proximity to the research situation, but available samples can be useful in pretesting questionnaires and in other preliminary (pilot study) work.

Purposive sampling. Purposive samples are used frequently in mass media studies: researchers select respondents who use a specific medium and ask them specific questions about that medium. A purposive sample is chosen with the knowledge that it is not representative of the general population.
For example, a researcher interested in how internet service providers differ in geographic accessibility might deliberately sample only users of those providers.

Snowball sampling. A researcher randomly contacts a few qualified respondents and then asks these people for the names of friends, relatives, or acquaintances who may also qualify for the research study.

Types of Probability Samples
The most basic type of probability sampling is the simple random sample, where each subject, element, event, or unit in the population has an equal chance of being selected.
Sampling with replacement is often used in more complicated research studies such as nationwide surveys. For example, a researcher who wants to analyze 10 prime-time television programs out of a population of 100 programs to determine how the medium portrays elderly people can take a random sample from the 100 programs by numbering each show from 00 to 99 and then selecting 10 numbers from a table of random numbers, such as the brief listing in Table 4.1.
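The program-selection example can be sketched with Python's random module. Note that random.sample draws without replacement, while repeated random.choice draws with replacement:

```python
import random

random.seed(4)  # fixed seed so the example is reproducible

# Population: 100 prime-time programs numbered 00-99, as in the text.
programs = [f"{i:02d}" for i in range(100)]

# Without replacement: no program can be selected twice.
sample = random.sample(programs, 10)

# With replacement: the same program may be drawn more than once.
sample_wr = [random.choice(programs) for _ in range(10)]
```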
Simple random sampling offers several advantages:
1.     Detailed knowledge of the population is not required.
2.     External validity may be statistically inferred.
3.     A representative group is easily obtainable.
4.     The possibility of classification error is eliminated.
It also has disadvantages:
1.     A list of the population must be compiled.
2.     A representative sample may not result in all cases.
3.     The procedure can be more expensive than other methods.

SYSTEMATIC SAMPLING: In systematic random sampling, every nth subject or unit is selected from a list of the population. Its advantages:
1.     Selection is easy.
2.     Selection can be more accurate than in a simple random sample.
3.     The procedure is generally inexpensive.
Its disadvantages:
1.     A complete list of the population must be obtained.
2.     Periodicity (the arrangement or order of the list) may bias the process.
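Systematic selection (taking every kth unit from a population list after a random start) can be sketched as follows; the 100-unit population is hypothetical. The periodicity caveat applies: if the list itself repeats in a fixed order, every kth pick can inherit that pattern.

```python
import random

def systematic_sample(population, n):
    """Select every kth unit after a random start, where k = N // n."""
    k = len(population) // n
    start = random.randrange(k)       # random starting point in the first interval
    return population[start::k][:n]   # then take every kth unit
```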

ADDRESS-BASED SAMPLING (ABS): A third type of random selection method, address-based sampling (ABS), recruits sample households using randomly selected addresses. There are several methods to develop random numbers or households, but two rules always apply:
(1) each unit or subject in the population must have an equal chance of being selected, and
(2) the selection procedure must be free from subjective intervention by the researcher.

A STRATIFIED SAMPLE is the approach used to get adequate representation of a subsample. The characteristics of the subsample (stratum or segment) may include almost any variable: age, gender, religion, income level, or even individuals who listen to specific radio stations or read certain magazines. The strata may be defined by an almost unlimited number of characteristics; however, each additional variable or characteristic makes the subsample more difficult to find, and the cost of finding the sample increases substantially.
Stratified sampling ensures that a sample is drawn from a homogeneous subset of the population—that is, from a population that has similar characteristics. Homogeneity helps researchers to reduce sampling error.
Stratified Sampling
Advantages:
1.     Representativeness of relevant variables is ensured.
2.     Comparisons can be made to other populations.
3.     Selection is made from a homogeneous group.
4.     Sampling error is reduced.
Disadvantages:
1.     Knowledge of the population prior to selection is required.
2.     The procedure can be costly and time-consuming.
3.     It can be difficult to find a sample if incidence is low.
4.     Variables that define strata may not be relevant.
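A proportional-allocation sketch of stratified sampling: draw a simple random sample from each homogeneous stratum in proportion to its share of the population. The strata names and sizes below are hypothetical, and rounding can make the total differ slightly from n:

```python
import random

def stratified_sample(strata, n):
    """Draw a proportional simple random sample from each stratum.

    strata: dict mapping stratum name -> list of members.
    """
    total = sum(len(members) for members in strata.values())
    picks = {}
    for name, members in strata.items():
        share = round(n * len(members) / total)  # proportional allocation
        picks[name] = random.sample(members, min(share, len(members)))
    return picks
```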

CLUSTER SAMPLING: Selecting the sample in groups or categories is known as cluster sampling. For example, analyzing the magazine readership habits of people in Tamil Nadu would be time-consuming and complicated if individual subjects were randomly selected. With cluster sampling, the state can be divided into districts, and groups of people can be selected from each area.

Cluster Sampling
Advantages:
1.     Only part of the population need be enumerated.
2.     Costs are reduced if clusters are well defined.
3.     Estimates of cluster parameters are made and compared to the population.
Disadvantages:
1.     Sampling errors are likely.
2.     Clusters may not be representative of the population.
3.     Each subject or unit must be assigned to a specific cluster.
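A two-stage cluster sketch matching the districts example: randomly choose districts, then randomly choose people within each chosen district. The district names and sizes are hypothetical:

```python
import random

def cluster_sample(clusters, n_clusters, n_per_cluster):
    """Stage 1: pick clusters at random; stage 2: sample within each.

    clusters: dict mapping cluster name -> list of members.
    """
    chosen = random.sample(sorted(clusters), n_clusters)
    return {c: random.sample(clusters[c], n_per_cluster) for c in chosen}
```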

Wednesday, 31 August 2016

Content Analysis: Uses and Limitations

The method is popular with mass media researchers because it is an efficient way to investigate the content of the media, such as the number and types of commercials or advertisements in broadcasting or the print media. Content analysis can be traced back to World War II, when researchers compared the music played on German radio stations with that played on other stations in occupied Europe.
After the war, researchers used content analysis to study propaganda in newspapers and radio.
An informal content analysis of three journals that focus on mass communication research (Journal of Broadcasting & Electronic Media, Journalism and Mass Communication Quarterly, and Mass Communication and Society) from 2007 to 2008 found that content analysis was still a popular method, used in about one-third of all published articles.
There are many definitions of content analysis.
Kerlinger's (2000) definition is fairly typical: Content analysis is a method of studying and analyzing communication in a systematic, objective, and quantitative manner for the purpose of measuring variables. Kerlinger's definition involves three concepts that require elaboration.

First, content analysis is systematic.
This means that the content to be analyzed is selected according to explicit and consistently applied rules: Sample selection must follow proper procedures, and each item must have an equal chance of being included in the analysis. Moreover, the evaluation process must be systematic: All content under consideration is to be treated in exactly the same manner.
Second, content analysis is objective.
The researcher's personal biases should not enter into the findings, and the analysis should yield the same results if another researcher replicates the study.

Third, content analysis is quantitative. The goal of content analysis is an accurate representation of a body of messages. Quantification is important in fulfilling that objective: it aids researchers in the quest for precision, allows them to summarize and report results concisely, and gives them additional statistical tools for interpretation and analysis.
Content analysis is generally conducted for one of five purposes.
Describing Communication Content

These studies demonstrate content analysis used in the traditional, descriptive manner: to identify what exists. For example, Cann and Mohr (2001) examined the gender of journalists on Australian TV newscasts.
One of the advantages of content analysis is its potential to identify developments over long time periods. Cho (2007) illustrated how TV newscasts portrayed plastic surgery over the course of three decades.
These descriptive studies can also be used to study societal change. For example, changing public opinion on various controversial issues could be gauged with a longitudinal study (see Chapter 8) of letters to the editor or newspaper editorials.
Testing Hypotheses of Message Characteristics
Content analysis has been used in many studies that test hypotheses of the form: "If the source has characteristic A, then messages containing elements x and y will be produced; if the source has characteristic B, then messages with elements w and z will be produced."
Comparing Media Content to the "Real World"
In content analyses, the portrayal of a certain group, phenomenon, trait, or characteristic is assessed against a standard taken from real life. The congruence (comparison) of the media presentation and the actual situation is then discussed.
Assessing the Image of Particular Groups in Society
These content analyses have focused on exploring the media images of certain minority or otherwise notable groups: to assess changes in media policy toward those groups, to make inferences about the media's responsiveness to demands for better coverage, or to document social trends.
Establishing a Starting Point for Studies of Media Effects
Content analysis is used to establish a starting point for cultivation analysis; it is also used in studies of agenda setting and the cultivation effect.

Content analysis cannot serve as the sole basis for claims about media effects.
Another potential limitation of content analysis is a lack of messages relevant to the research: many topics or characters receive little exposure in the mass media.
Content analysis is also frequently time-consuming and expensive. The task of examining and categorizing large volumes of content is often laborious and tedious.

In general, a content analysis is conducted in several discrete stages. The following steps may be used as a rough outline:
1.  Formulate the research question or hypothesis.
2.  Define the universe in question.
3.  Select an appropriate sample from the population.
4.  Select and define a unit of analysis.
5.  Construct the categories of content to be analyzed.
6.  Establish a quantification system.
7.  Train coders and conduct a pilot study.
8.  Code the content according to established definitions.
9.  Analyze the collected data.
10.   Draw conclusions and search for indications.

1. Formulating a Research Question
A content analysis should be guided by well-formulated research questions or hypotheses. A basic review of the literature is a required step. It is possible to generate a research question based on existing theory, prior research, or practical problems, or as a response to changing social conditions.
2. Defining the Universe
To "define the universe" is to specify the boundaries of the body of content to be considered, which requires an appropriate operational definition of the relevant population. If researchers are interested in analyzing the content of popular songs, they must define what is meant by a "popular song"; they must also ask what time period will be considered: The past 6 months? Two dimensions are usually used to determine the appropriate universe for a content analysis: the topic area and the time period.

3.  Selecting a Sample
Once the universe is defined, a sample is selected. Most content analysis in mass media involves multistage sampling. This process typically consists of two stages. The first stage is usually to take a sampling of content sources.
4.   Selecting a Unit of Analysis
The next step in the content analysis process is to select the unit of analysis, which is the smallest element of a content analysis but also one of the most important. In written content, the unit of analysis might be a single word or symbol, a theme, or an entire article or story. In television and film analyses, units of analysis can be characters, acts, or entire programs. Specific rules and definitions are required for determining these units to ensure closer agreement among coders and fewer judgment calls. Certain units of analysis are simpler to count than others.

5.     Constructing Content Categories
At the heart of any content analysis is the category system used to classify media content. The precise makeup of this system, of course, varies with the topic under study.

There are two ways to go about establishing content categories. Emergent coding establishes categories after a preliminary examination of the data. A priori coding, on the other hand, establishes the categories before the data are collected, based on some theoretical or conceptual rationale.

To be serviceable, all category systems should be mutually exclusive, exhaustive, and reliable. A category system is mutually exclusive if a unit of analysis can be placed in one and only one category.

The categorization system should also be reliable; that is, different coders should agree in the great majority of instances about the proper category for each unit of analysis. This agreement is usually quantified in content analysis and is called intercoder reliability.
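Intercoder agreement is often reported as simple percent agreement (for two coders judging the same units, this reduces to Holsti's formula). A sketch with hypothetical category codes:

```python
def percent_agreement(coder_a, coder_b):
    """Share of units that two coders placed in the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must judge the same units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)
```

Percent agreement does not correct for chance agreement; measures such as Cohen's kappa or Krippendorff's alpha are often preferred for that reason.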
6.   Establishing a Quantification System
Quantification in content analysis can involve any of the four levels of data measurement, although generally only nominal, interval, and ratio data are used.
At the nominal level, researchers simply count the frequency of occurrence of the units in each category. Thus Signorielli, McLeod, and Healy (1994) analyzed commercials on MTV and found that 6.5% of the male characters were coded as wearing somewhat sexy clothing; among the female characters, however, the corresponding percentages were 24% and 29%.
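Nominal-level quantification is just frequency counting over categories. A sketch with hypothetical clothing codes assigned by a coder:

```python
from collections import Counter

# Hypothetical clothing codes assigned to eight coded characters.
codes = ["neutral", "somewhat sexy", "neutral", "very sexy",
         "neutral", "somewhat sexy", "neutral", "neutral"]

counts = Counter(codes)  # frequency of occurrence per category
percentages = {cat: 100 * n / len(codes) for cat, n in counts.items()}
```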
At the interval level, it is possible to develop scales for coders to use to rate certain attributes of characters or situations. For example, in a study dealing with the images of women in commercials, each character might be rated by coders on several scales like these:
Independent _:_:_:_:_ Dependent
Dominant _:_:_:_:_ Submissive

At the ratio level, measurements in mass media research are generally applied to space and time. In television and radio, ratio-level measurements are made concerning time: the number of commercial minutes, the types of programs on the air, the amount of the program day devoted to programs of various types, and so on.

7.   Training Coders and Doing a Pilot Study
Placing a unit of analysis into a content category is called coding. Individuals who do the coding are called coders. The number of coders involved in a content analysis is typically small; two to six coders are common. Next, a pilot study is done to check intercoder reliability. The pilot study should be conducted with a fresh set of coders who are given some initial training to impart familiarity with the instructions and the methods of the study.

8.     Coding the Content According to Established Definitions
Standardized sheets are usually used to ease coding. These sheets allow coders to classify the data by placing check marks or slashes in predetermined spaces.

Code all characters that appear on the screen for at least 90 seconds and/or speak more than 15 words (include cartoon narrator when applicable). Complete one sheet for each character to be coded.
A.  Character number: code the two-digit program number first (listed on page 12 of this instruction book), followed by a two-digit character number randomly assigned to each character (starting with 01).
B.  Character name: list all formal names, nicknames, or dual identity names (code dual identity behavior as one character's actions). List description of character if name is not identifiable.
C.  Role
1-Major: major characters share the majority of dialogue during the program, play the largest role in the dramatic action, and appear on the screen for the longest period of time during the program.
2-Minor: all codeable characters that are not identified as major characters.
3-Other (individual): one character that does not meet coding requirements but is involved in a behavioral act that is coded.
4-Other (group): two or more characters that are simultaneously involved in a behavioral act but do not meet coding requirements.
D.  Species
1-Human: any character resembling man, even a ghost or apparition if it appears in human form (e.g., the Ghostbusters)
2-Animal: any character resembling a bird, fish, beast, or insect; may or may not be capable of human speech (e.g., Muppets, Smurfs, Teddy Ruxpin)
3-Monster/Ghost: any supernatural creature (e.g., My Pet Monster, ghosts)
4-Robot: mechanical creature (e.g., Transformers)
5-Animated object: any inanimate object (e.g., car, telephone) that acts like a sentient being
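For tabulation, a coding scheme like the one above can be expressed as a machine-checkable lookup. A hypothetical sketch for validating a completed coding sheet before the data are entered:

```python
# Hypothetical machine-readable version of the Role and Species categories.
CATEGORIES = {
    "role":    {1: "Major", 2: "Minor", 3: "Other (individual)", 4: "Other (group)"},
    "species": {1: "Human", 2: "Animal", 3: "Monster/Ghost",
                4: "Robot", 5: "Animated object"},
}

def validate_sheet(sheet):
    """True if every coded field on the sheet uses a defined category code."""
    return all(sheet.get(field) in codes for field, codes in CATEGORIES.items())
```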

When a computer is used in tabulating data, the data are usually transferred directly to a spreadsheet or data file, or perhaps to mark-sense forms or optical scan sheets (answer sheets scored by computer). These forms save time and reduce data errors. There are many software programs available that can aid in the content analysis of text documents. Some of the more common are TextSmart, VBPro, and ProfilerPlus.
9.  Analyzing the Data
Descriptive statistics such as percentages, means, modes, and medians are appropriate for content analysis. The chi-square test is the most commonly used because content analysis data tend to be nominal in form; however, if the data meet the requirements of interval or ratio levels, then a t-test, ANOVA, or Pearson's r may be appropriate.
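The chi-square statistic compares observed nominal counts with expected counts. A minimal sketch (the counts are hypothetical):

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 100 coded items split 60/40 against an expected even 50/50 split:
# the statistic is 4.0, which exceeds 3.84 (the critical value at
# df = 1, p = .05), so the departure from 50/50 would be judged
# statistically significant.
```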
10.   Interpreting the Results
If the study is descriptive, however, questions may arise about the meaning or importance of the results.

The concept of reliability is crucial to content analysis. If a content analysis is to be objective, its measures and procedures must be reliable. A study is reliable when repeated measurement of the same material results in similar decisions or conclusions.


Source: Wimmer, R. D., & Dominick, J. R., Mass Media Research: An Introduction, Ninth Edition.