Chapter VI.
DESIGN AND IMPLEMENTATION OF BASELINE STUDIES
The first stage in building an evaluation system typically involves the design, execution and analysis of baseline studies in order to establish the frame of reference on which subsequent evaluative comparisons will be based. Since for these comparative purposes the data to be collected subsequently must be similar to those collected in the baseline studies, the methods of selecting and conducting the baseline studies, and their content, are extremely important. In effect, the principal conceptual work for the evaluation of a programme must occur at this stage, since the nature of the entire monitoring and evaluation system is determined here. Moreover, the largest volume of data collected at any one time is obtained in the baseline studies. This stage therefore involves the largest number of personnel and requires the greatest amount of time. As a result, it is the most costly stage in the design and implementation of the system. This cost factor led to the decision to make double use of the baseline studies in each of the four cases under review here. On the one hand, the data would be used for subsequent comparisons; on the other, they would be used for a diagnosis of the existing situation of potential beneficiaries of the programmes, for planning and programming purposes. In several cases in which programmes were already under way, the baseline data would affect implementation rather than formulation of the programme plan.
The importance of this stage makes it necessary to proceed in a logical and efficient manner through a sequence of steps in order to define the content of each system. Many of these steps are common to any form of applied social research, but since the main purpose of the effort is practical evaluation of programmes, the details and procedures in each step differ somewhat from academic practice. Thus, in addition to normal care to ensure that the study design yields scientifically valid information, it was necessary in the case of the four country studies to keep in mind three rather practical questions:
(a) Is all the necessary information included?
(b) Is all the information to be included necessary? (That is, does all the information to be included have a predefined practical use, so that only information which can and will be used is obtained?)
(c) Within the pre-established limits on time and personnel available, are the most rapid and efficient data collection procedures possible being used?
The objective of the design process was to ensure that the system remained practical, rapid, efficient and inexpensive. Achieving this involved certain costs. First, the usefulness of the data for purposes of deriving or testing theories was limited. Since certain short cuts were made in order to achieve the wide coverage necessary to monitor and evaluate complex programmes, measurement of individual variables could not be very precise. Moreover, sampling procedures did not yield, within cost constraints, the most precise sample possible for individual variables.
On the other hand, while research procedures used in each system were rather unsophisticated, they had the advantage that relatively unspecialized people could utilize them with a minimum amount of training. The use of non‑researchers for implementing the system had an inherent advantage in that a practical approach to data collection would be maintained, since the people responsible were not likely to be detoured into theoretical or methodological discussions. [1]
The first step in any design of a baseline study consists in determining what variables to measure, i.e., specifying the substantive content of the study. In academic research this usually means elaborating a set of hypotheses. By contrast, for the type of research necessary for a monitoring and evaluation system, the step consists in determining what information policy makers, programme planners and administrators require in order to ascertain whether or not the programme is functioning properly and why this is so. This step is perhaps the most difficult, since programmes do not always specify their objectives clearly and in measurable terms. Indeed, many of the objectives of a given programme are not even stated formally. In addition, although most development programmes are multidisciplinary, the programme personnel often exhibit a particular professional bias towards obtaining one or another type of information deemed necessary for monitoring and evaluation. Thus economists may tend to be interested only in economic data, while sociologists may be interested only in social data.
In each of the four countries, the procedure for determining variables on which data were to be collected consisted of a dialogue between those charged with evaluation and those responsible for the operation of the programme to be evaluated. For the evaluation system of the ABCAR social welfare projects in three Brazilian states, [2] the research team held several day‑long meetings with state‑affiliated agency heads and field‑level supervisors to discuss what information was needed. On this basis, a specific list of necessary information was prepared. The dialogue centred on a series of questions:
(a) What are the specific objectives that the programme is attempting to achieve?
(b) What are the targets of each objective? That is, what are you attempting to accomplish concretely during the present planning period?
(c) What information would you need in order to tell whether or not these objectives were being achieved?
(d) If you had this information, how could you use it? (This question is designed to eliminate all information for which a practical use cannot be found.)
In these discussions the evaluation personnel from ABCAR had a dual role, that of questioning as well as orienting the field supervisors to a multidisciplinary approach to project monitoring and evaluation.
In Panama, similar meetings were held with the directors of planning units of DIGEDECOM and MIDA, the two agencies involved. For the Guarapiche river basin study in Venezuela, separate meetings were held with local programme heads of the various agencies in the integrated rural development programme. In Mexico, the system design was developed as part of a seminar on evaluation techniques sponsored by UNICEF for the local operational supervisors of the agencies that were involved in the integrated development programme (PRODESCH). [3] As part of this seminar, the information needs of each agency and sector of the programme were discussed and defined.
As a result of such discussions with the programme staff, a master list of required information was put together for each programme. The items on the list naturally varied from programme to programme according to the objectives of each, but there were a number of common items, since each of the four programmes was concerned with rural development. In general, the following types of information were required:
(a) Economic information: production per hectare and crop; levels of income; distribution of income; levels of employment; use of technological inputs in agriculture; use of economic services, e.g., credit and extension services; marketing and production organization (Mexico, Panama and Venezuela);
(b) Demographic information: sex and age distributions, family size, infant mortality, migration patterns (Brazil, Mexico, Panama and Venezuela);
(c) Information on living conditions: type of housing, amenities, communications (Brazil, Mexico, Panama and Venezuela);
(d) Information on health and nutrition practices: practices of and knowledge about nutrition, health and sanitary conditions, access to and use of health services (Brazil and Mexico);
(e) Information on group and community participation: leadership patterns and types, group participation in terms of quantity and quality, degree of participation in self-help activities, contact with community development promoters (Brazil, Mexico, Panama and Venezuela);
(f) Information on cognitive structure: problem-solving skills, aspirations and attitudes towards change (Brazil, Mexico, Panama and Venezuela).
Each item of required information was included because of its relation to a project goal. For example, in Mexico health programmes executed under PRODESCH involved community work through the use of food incentives which should have resulted in dietary changes in the Chiapas highlands. In Brazil, if ABCAR programmes to develop a system of small health posts (mini‑posts) in rural areas were effective, there should have been change in such health practices as use of latrines, home cleanliness and personal hygiene. In Venezuela, the integrated rural development programme that works through credit users associations (uniones de prestatarios) should have resulted in increases in use of certain technological inputs and in production per hectare. In Panama, the effectiveness of leadership training programmes should have been observed in a qualitative change in the type of local leadership, from the traditional chief, whose role was defined ascriptively, to a modern democratic leader, whose role was defined by achievement.
In the case study approach, once the information to be obtained is known, the next step is to determine the number of community case studies which will comprise the monitoring and evaluation system. In the same way that the determination of variables is critical for the definition of the content of a system, so the number of cases is critical for the definition of the structure of a system. This number is determined by two distinct factors: methodological considerations of sample representativeness and resources available to implement the baseline case studies.
Perfect representativeness would be guaranteed only if every community included in a programme were studied. However, in each country the number of communities involved in the programme was too large for a complete count: in Venezuela, the PRIDA programme in the Guarapiche valley directly or indirectly involved some 45 agrarian reform settlements; in Mexico, PRODESCH involved 600 highland communities and some 150,000 persons; in Panama, the DIGEDECOM‑MIDA rural development programmes involved 421 submunicipal units, and over 1,000 specific villages and agrarian reform settlements; and in Brazil, some 3,000 communities were involved in the three states evaluated by ABCAR. Clearly, a sample of communities had to be made. The question then became, how large a sample?
Statistical sampling basically seeks to select a sample of cases in such a way that it is possible to guarantee in advance that the sample will be representative of the entire population. The rule in sampling is that the degree to which representativeness can be guaranteed depends on the absolute number of cases in the sample. [4] A general rule of thumb is that the larger the number of cases in a sample, the greater the guaranteed representativeness. [5] However, this general rule is conditioned by two other factors. First, the degree of representativeness of a sample does not increase proportionately as the size of the sample increases. That is, in an infinite population, assuming that 95 times out of 100 the sample accurately represents the population, a simple random sample of 196 yields a 7 per cent probable error, a sample of 384 yields a 5 per cent probable error, a sample of 600 yields a 4 per cent probable error, a sample of 1,067 yields a 3 per cent probable error and only with 9,604 cases can one achieve a 1 per cent probable error. [6] Thus, as sample size increases, each additional case contributes proportionately less to the reduction of probable error. There is, of course, a minimum sample size required for any given level of data precision.
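These probable errors follow from the standard large-population formula for the margin of error of a simple random sample at the 95 per cent confidence level, using the worst case of an estimated proportion of 0.5. A short calculation (a sketch, not part of the original evaluation systems) reproduces the figures quoted above:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error (in per cent) of a simple random sample of size n
    drawn from a very large population, at the confidence level implied
    by z (1.96 for 95 per cent), for an estimated proportion p."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (196, 384, 600, 1067, 9604):
    print(f"n = {n:>5}: about {margin_of_error(n):.0f} per cent probable error")
```

The diminishing returns are visible directly: quadrupling the sample from 600 to about 2,400 would only halve the probable error.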
Secondly, the representativeness of a sample will also depend on the type of sampling procedure used. This factor is related to how much is known about the population at the outset. If, for example, nothing is known about a population in advance, a sample of a certain size must be taken in order to ensure a certain level of precision. However, when a great deal is known, so that the population can be grouped in advance by its principal socio-economic characteristics, a smaller sample will give the same level of precision. For example, in an evaluation of a health programme, if one knows that 30 per cent of the communities have large health centres, 40 per cent have small health centres, and 30 per cent have no health centres, then 30 per cent of the sample can be drawn from communities with large health centres, 40 per cent from communities with small health centres, and so on. This would require fewer cases to ensure representativeness than drawing a sample where it is not known how many communities have health centres.
To summarize, the number of cases to be included in the system depends on the degree of precision desired and on what is known in advance about the universe of all communities affected by a programme. To these factors must be added the primary determining factor: the amount of resources available. There is no use in contending that one ought to undertake 100 case studies if there are only enough resources for 20.
The estimate of resources available is usually based on a calculation of two factors: personnel and time. The time available for a study affects the number of personnel required. If there is a long period of time in which to make the studies, fewer persons working longer will suffice; if results must be known quickly, more persons will be required. In the four country systems, time was determined by programme requirements. For example, in Panama, baseline data had to be collected and analysed between April and July 1972 in order to affect the design of a training programme to be initiated in September 1972.
In addition to determining how many weeks or months are available to execute base‑line studies, it is also necessary to estimate how long each case will require. Initially, this can be a rough estimate based on the types of information required. For example, in Venezuela on the basis of previous experience it was assumed that each case study could consist partly of a number of individual interviews with peasant heads of families. These could be done at the rate of three per day per interviewer, allowing an hour or so for each interview and additional time for locating the peasants. In addition, it would be necessary to do a number of in‑depth interviews with local officials and representatives of government agencies. Assuming a team of six persons per case study, given the amount of information to be collected and taking into consideration such other factors as travel time, it was estimated that data collection in each case would require seven or eight working days and that tabulation and analysis of each community would require an additional four or five days. [7]
Once this time estimate is made, it must be adjusted to the actual number of personnel who can be made available from other tasks to conduct the studies. In Panama, the question of how many people could be made available was posed to the two agencies involved. As a result, it was determined that for the base‑line studies, the following personnel could be made available:
(a) Some 150 lower-level community development promoters of DIGEDECOM and social co-ordinators of MIDA to conduct individual interviews, each person available to the study for between one and two weeks;
(b) Ten middle-level provincial co-ordinators to direct field work, undertake in-depth interviews and direct tabulation of data, each available to the study for about one month; and
(c) Six professionals drawn from the planning and training units of the agencies' central offices to participate in the design of the studies, training of field teams, data analysis and editing of reports, each available for up to six months.
Basing the estimate on an average of 20 individual interviews per case and given a maximum of one month to collect and tabulate the field data, it was calculated that 30 would be the maximum number of cases possible (one week each for a team of 6, with each provincial co-ordinator directing 3 cases). In Mexico, a similar calculation based on six weeks for the cases with 15 teams also led to a figure of 30 cases. [8] In Brazil, 90 communities were studied across three states, based on three months of field work for 45 people (9 teams of 5). As can be seen, the rough calculation of the number of cases involved a relationship between the number of personnel and the time available to the study.
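The arithmetic relating personnel and time to the number of cases can be sketched as follows. The working-day figures below are illustrative assumptions chosen to reproduce the case counts in the text, not figures taken from the programme documents:

```python
def max_cases(teams, days_available, days_per_case):
    """Ceiling on the number of case studies: teams work in parallel,
    and each case occupies one team for days_per_case working days
    (data collection plus tabulation)."""
    return teams * days_available // days_per_case

# Mexico: 15 teams over six weeks (about 30 working days); if each
# case occupies a team roughly 15 days, the ceiling is 30 cases.
print(max_cases(15, 30, 15))
# Brazil: 9 teams over three months (about 60 working days), roughly
# 6 days per case, gives the 90 communities studied.
print(max_cases(9, 60, 6))
```

The same formula, rearranged, gives the field force needed when the deadline rather than the staff is fixed.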
Having decided on the number of cases to be studied, based on the time and personnel available, it is necessary to determine how to draw the sample of cases. As has been noted earlier, the representativeness of a sample depends on two factors: the absolute number of cases in the sample and the method by which the sample is drawn. Since in all four programmes time and personnel constraints limited the size of the sample, the method of drawing cases was the key factor. The first characteristic of the sampling method was that it should be random. In theory, this means that the selection process must contain at least one completely chance element not dependent on human judgement, which can be biased. In practice, this means that at some time during the procedure, usually at the point of selecting the specific case, a chance factor must be introduced. Without it, there is no possibility of guaranteeing that the sample will be free of bias. For example, in an earlier set of studies on the Mexican programme, prior to establishing the evaluation system, a sample of 120 communities was drawn for an initial assessment of socio-economic conditions. Although a sample of that size should have guaranteed representativeness, it did not, because the communities were not selected according to random procedures. As a result, the 120 communities selected were all near main roads. Thus, about half of the communities in the programme (in this case, the poorest and least affected by cosmopolitan influences) were excluded from the study, and the data obtained were of limited reliability. Introducing a random procedure can help avoid this type of problem.
A second consideration in selecting the sampling method is that the more that is known about a population prior to drawing the sample, the fewer cases will be needed to achieve acceptable representativeness. The logic behind this can be seen from the following example. Suppose it is known in advance that some communities in the programme area raise only cattle, some produce only cotton and some only corn, and that ethnic background, cultural patterns and types of local organization are strongly related to type of production. It is important that each type of community be represented in the sample. If the sample is drawn randomly without any further classification (a simple random sample), a rather large number of cases would be needed to ensure that the sample distribution is the same as the universe. If only a few cases were drawn, it is quite possible that none of the extreme cases would be represented or that one or another type of community would be overrepresented, thereby biasing the sample. This could be avoided if a large number of cases are drawn, since, with a large number, it is likely that a few extreme cases would be included and that all types of communities would be represented more or less in proper proportion. On the other hand, if an estimate of the distribution of community types were known and the total set of communities could be classified in advance, as shown in figure IX, it is clear that fewer cases would be needed. By drawing cases according to community type, each type is represented in the sample in proportion to its existence in the population. If, for instance, 20 per cent of the total number of communities in the programme area depend on cattle‑raising, the same proportion of communities in the sample can be drawn from among cattle‑raising communities. 
In this way, it is possible to ensure in advance that the distribution of cases in the sample will approximate the distribution of cases in the population, according to the production variable.
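Proportional allocation of a fixed number of cases across strata, as in the cattle/cotton/corn example, can be sketched as follows. The community counts are hypothetical, and largest-remainder rounding is one common way (an assumption here, not the method the programmes necessarily used) to make the allocations sum exactly to the sample size:

```python
def allocate(sample_size, stratum_counts):
    """Proportional allocation: each stratum receives cases in proportion
    to its share of the population, with largest-remainder rounding so
    that the allocations sum exactly to sample_size."""
    total = sum(stratum_counts.values())
    quotas = {s: sample_size * n / total for s, n in stratum_counts.items()}
    alloc = {s: int(q) for s, q in quotas.items()}
    remainder = sample_size - sum(alloc.values())
    # Hand the leftover cases to the strata with the largest fractional parts.
    for s in sorted(quotas, key=lambda s: quotas[s] - int(quotas[s]),
                    reverse=True)[:remainder]:
        alloc[s] += 1
    return alloc

# Hypothetical distribution echoing the cattle/cotton/corn example:
print(allocate(10, {"cattle": 20, "cotton": 50, "corn": 30}))
# → {'cattle': 2, 'cotton': 5, 'corn': 3}
```

Within each stratum, the specific communities would then be drawn by a chance procedure, preserving the random element discussed above.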
This type of sample is known technically as a stratified random sample, since sampling is done according to classifications or strata. The key to this procedure is to be able to define the strata in such a way that a large part of the variation among communities is accounted for in advance by the strata chosen. Selection of strata should be made on the basis of three considerations:
(a) Stratification factors should be those characteristics of communities which are known to produce differences among communities and which explain a large part of the variance among them;
(b) Stratification factors should be independent of each other, in the sense that they do not duplicate each other; and
(c) Factors should be known in advance for each community in the universe of communities, so that each community can be classified.
In practice, this last consideration is perhaps the most important since, although it is possible to define a large number of factors that might be important, the values of most of these for each community probably would not be known in advance. For example, one can suppose that levels of income would be highly important. But this type of data is not usually available on a community-by-community basis even in the best of censuses. Frequently, all that is known in advance about a community is its geographical location and, perhaps, its population size.
The procedures for stratifying can be illustrated by the experience of the programmes in two of the countries. In Panama, where it was necessary to draw 30 submunicipalities from a universe of over 400, it was decided to stratify according to three factors: region, size of population and degree of dependence on agriculture. [9]
In terms of region, it was known that Panama has relative homogeneity in agricultural production, infrastructural development, physical access and cultural-ethnic characteristics according to the geographical area in which communities are located. After examining national census maps and consulting with specialists, five "natural" regions were defined: (a) the Province of Chiriquí; (b) the Province of Veraguas; (c) the two provinces on the Azuero peninsula (Herrera and Los Santos); (d) the Province of Coclé and that part of the Province of Panama lying to the west of the Canal Zone; and (e) the Province of Colón and that part of the Province of Panama east of the Canal Zone.
Next, it was assumed that submunicipalities (corregimientos) would differ according to population size, especially as regards local organizational structure. Thus, a small submunicipality would have different characteristics from a large one. To classify submunicipalities, all were ranked on the basis of population (as indicated by the 1970 census). The resulting distribution was divided into three equally sized classes: small, under 1,200 inhabitants; medium, between 1,200 and 2,600 inhabitants; and large, more than 2,600 inhabitants. Finally, it was thought that socio-economic conditions in submunicipalities would vary according to dependence on agriculture. To estimate this, 1970 census data were used to calculate the percentage of the population that was economically active in agriculture. [10] To determine the categories, the distribution of communities was examined and it was found that the "natural" cutting points were 60 per cent and 10 per cent. [11] Submunicipalities with 60 per cent or more of the economically active population engaged in agriculture were classified as rural, those with 10 per cent to 59 per cent as "urban influenced" and those with under 10 per cent as metropolitan. Submunicipalities in this last category were eliminated as not of interest to the study, since they were all located within major cities.
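The classification rules just described amount to a simple decision procedure. A sketch, using the thresholds quoted above (the function name and example values are illustrative only):

```python
def classify(population, pct_agricultural):
    """Assign a submunicipality to one of the strata used in the Panama
    study; returns None for metropolitan units, which were excluded."""
    if pct_agricultural < 10:
        return None  # metropolitan: inside a major city, not studied
    size = ("small" if population < 1200
            else "medium" if population <= 2600
            else "large")
    kind = "rural" if pct_agricultural >= 60 else "urban-influenced"
    return f"{size} {kind}"

print(classify(900, 75))    # small rural
print(classify(2000, 30))   # medium urban-influenced
print(classify(5000, 5))    # None (metropolitan, excluded)
```

Applying such a rule to every unit in the census list yields the cross-classification shown in table 1.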
On the basis of these three factors, it was possible to classify all non-metropolitan submunicipalities covered by the community development programme, as can be seen in table 1. According to this distribution, 45 per cent of the cases in the sample (7 cases) should be selected from small rural submunicipalities. Since Chiriquí included 12.5 per cent of the small rural submunicipalities, 12.5 per cent of the 7 cases (or 1 case) should be selected from that province, and so on for all of the categories, some of which will not include any cases because of their small size.
Table 1. Distribution of non-metropolitan submunicipalities serviced by the community development programme in Panama, by region, size and degree of dependence on agriculture (number in each category to be selected for the sample given in parentheses)

| Region | Small rural | Small urban | Medium rural | Medium urban | Large rural | Large urban |
| --- | --- | --- | --- | --- | --- | --- |
| Chiriquí | 18 (1) | 3 (0) | 13 (1) | 2 (0) | 1 (0) | (0) |
| Herrera-Los Santos | 57 (3) | 7 (0) | 20 (1) | 11 (1) | 2 (0) | 2 (0) |
| Veraguas | 26 (1) | 2 (0) | 21 (2) | 1 (0) | 10 (1) | 1 (0) |
| Coclé/Western Panama | 23 (1) | 9 (1) | 10 (0) | 10 (0) | 6 (0) | 1 (1) |
| Colón/Eastern Panama | 20 (1) | 5 (0) | 6 (0) | 0 (0) | 2 (0) | 2 (0) |
| Total | 144 (7) | 26 (1) | 76 (4) | 24 (1) | 24 (1) | 26 (1) |
| Percentage | 45.0 (47) | 8.1 (7) | 23.8 (27) | 7.5 (7) | 7.5 (7) | 8.1 (7) |
In Mexico, where the task was to monitor and evaluate the PRODESCH programme in the State of Chiapas, 30 cases were to be drawn from about 600 communities. Two stratification factors were used: municipality and relative village size within the municipality. [12] Municipality was chosen because in the highlands of Chiapas, municipal boundaries were originally drawn to correspond roughly to traditional linguistic, ethnic and clan boundaries. There were over 20 municipalities in the highlands, which would have meant an excessive number of categories. Therefore, smaller municipalities were combined. Within the municipalities, village size was highly variable, ranging from the municipal capital to small isolated villages. To take this into account, villages were classified into large and small categories according to whether they were above or below the median village size in the municipality. Thus, half of the sample would be drawn from large villages and half from small villages. The resulting distribution according to strata is shown in table 2, as is the number of cases to be drawn from each stratum.
Table 2. Distribution of cases by strata (Chiapas, Mexico)

| Municipality or municipality group | Total number of villages | Percentage of total | Number below median | Number of cases to be drawn for sample |
| --- | --- | --- | --- | --- |
| Amatenango del Valle-Teopisca-Chanal-Huixtán | 55 | 9.2 | 28 | 3 |
| Larrainzar-El Bosque | 40 | 6.7 | 20 | 2 |
| Huitiupan | 25 | 4.2 | 13 | 1 |
| Chamula | 68 | 11.4 | 34 | 3 |
| Chenalhó | 39 | 6.5 | 19 | 2 |
| Chilón-Sitalá-Yajalón | 112 | 18.9 | 56 | 6 |
| Mitontic-Tenejapa | 44 | 7.4 | 23 | 2 |
| Ocosingo-Pantelhó | 102 | 17.1 | 51 | 5 |
| Oxchuc | 33 | 5.5 | 16 | 2 |
| San Cristóbal de las Casas | 19 | 3.2 | 10 | 1 |
| Simojovel-Chalchihuitán | 41 | 6.9 | 21 | 2 |
| Zinacantán | 18 | 3.0 | 9 | 1 |
| TOTAL | 596 | 100.0 | 300 | 30 |
Source: "El sistema de evaluación continua del PRODESCH", Course Notes No. 10/73 (San Cristóbal de las Casas, Chiapas, Mexico, August 1973) (mimeographed), p. 10.
Prior to beginning field work, it is important to know the degree to which data collected in the sample will approximate the true values existing in the population. Since data are often unavailable at the community level, this is not always possible for all types of information to be obtained. Nevertheless, it is useful to compare sample values with true population values on the basis of data which do exist, in order to establish the probable degree of precision that the sampling method will achieve. In Mexico, PRODESCH files contained information about the number of specific projects initiated in each village in the Chiapas highlands. The percentage of villages with no projects, with one or two, with three or four, and with five or more projects was therefore known in advance. A random sample was drawn according to the stratification factors, and the corresponding percentages were calculated for the villages in the sample. Comparing the percentages obtained from the sample with those known for the universe of villages, as shown in table 3, permits estimation of the sample's precision in representing the conditions of all the villages in the highlands. As can be seen, the maximum difference between sample values and population values is about 6 per cent. This could be termed a satisfactory degree of precision given the small number of cases in the sample. The implication is that, in analysing other data from the 30 baseline studies, the analyst can be relatively certain that any value obtained (such as average income, production or participation in groups) is within 6 per cent of the value which would have been obtained had all 600 villages been studied. [13]
Table 3. Population values compared with sample values in terms of number of projects (Chiapas, Mexico)

| Number of projects | Population number | Population percentage | Sample number | Sample percentage | Difference (percentage) |
| --- | --- | --- | --- | --- | --- |
| None | 355 | 59.6 | 16 | 53.3 | -6.3 |
| 1-2 | 108 | 18.1 | 7 | 23.3 | +5.2 |
| 3-4 | 50 | 8.4 | 3 | 10.0 | +1.6 |
| 5 and more | 83 | 13.9 | 4 | 13.3 | -0.6 |

Average error = 3.4 per cent.
Source: "El sistema de evaluación continua del PRODESCH", Course Notes No. 10/73 (San Cristobal de las Casas, Chiapas, Mexico, August 1973) (mimeographed), p. 11.
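The comparison in table 3 is straightforward to reproduce. In the sketch below, the sample counts are reconstructed from the sample percentages (23.3 per cent of 30 interviews is 7, and so on); tiny discrepancies with the table arise only because the table rounds each percentage before taking differences:

```python
# Population counts from table 3 and sample counts reconstructed
# from the published sample percentages (an assumption).
population = {"None": 355, "1-2": 108, "3-4": 50, "5 and more": 83}
sample = {"None": 16, "1-2": 7, "3-4": 3, "5 and more": 4}

pop_total = sum(population.values())   # 596 villages in the universe
samp_total = sum(sample.values())      # 30 villages in the sample
diffs = {k: 100 * sample[k] / samp_total - 100 * population[k] / pop_total
         for k in population}
avg_error = sum(abs(d) for d in diffs.values()) / len(diffs)
for k, d in diffs.items():
    print(f"{k}: {d:+.1f} per cent")
print(f"average error = {avg_error:.1f} per cent")
```

The same check can be run on any variable known for the whole universe before committing to a sampling plan.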
Much of the data to be obtained in each community have to be collected at the individual level. However, individual interviews with all families in each selected case are usually not possible. For example, in Mexico, villages in the programme area varied in size from 20 families to 1,000 families. A sample of families was therefore required. The question was how many families to interview in each community.
There are basically two methods of determining the number of interviews in each case: either a constant proportion of families (for example, 20 per cent) can be sampled in each case, or a fixed number (for example, 25 families). Both procedures have certain advantages and disadvantages, and both were used in one or another of the four countries. In Mexico, Panama and Venezuela, a constant number was used; in Brazil, a fixed percentage. The decision on which procedure to use depended on whether individual data, as well as community data, would be generalized for the entire programme area, that is, whether it would be necessary to make such generalizations as "X per cent of the families living in the programme area have characteristic A".
The problem of which procedure to use may be further illustrated by considering the implications of sampling a constant number of families in each community. This procedure was adopted in Mexico, Panama and Venezuela. For example, in Panama, 20 families were interviewed in each case. However, some villages contained only 20 families, others around 40 and, in one case, 1,000 families. In the smallest village, each interview represented one family, since every family was interviewed (20 families ÷ 20 interviews), while in the largest village, each interview represented 50 families (1,000 families ÷ 20 interviews).
If these individual interviews for all communities were added together ("pooled") to estimate the values of the whole population of families, disproportionate weight would be given to the data from smaller villages. This means that the information would overrepresent the individual data from small villages and grossly underrepresent data from large villages, yielding a very distorted picture of the situation of individuals in the project area as a whole.
With a constant number of interviews per case, the only means of avoiding distortion is to add weighting factors at the data analysis stage to compensate for large villages and thus give each individual interview an equal weight relative to the population from which it was sampled. In the case of Panama, the values of each interview from the smallest village could have been multiplied by 1, of the middle‑sized village by 2, and the largest by 50 to achieve equal weight, and thereby avoid distortion. [14] This procedure would have made the computation of results rather difficult, because upwards of 30 weighting factors would have had to be added for the computation of each variable in the study. Since each study had upwards of 50 variables, the task of tabulation was already sufficiently large so that adding weighting factors would have significantly increased the amount of time required for processing each study.
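By way of illustration, the distortion described above and the weighting that corrects it can be sketched as follows. The village sizes and response counts are hypothetical; "with_a" stands for the number of sampled families reporting some characteristic A.

```python
# Illustrative sketch, with hypothetical figures, of pooling constant-number
# samples (20 interviews per village) with and without weighting.
villages = [
    {"families": 20,   "with_a": 10},  # smallest: each interview = 1 family
    {"families": 40,   "with_a": 8},   # middle:   each interview = 2 families
    {"families": 1000, "with_a": 2},   # largest:  each interview = 50 families
]
INTERVIEWS_PER_VILLAGE = 20

def pooled_share_unweighted(vs):
    # Naive pooling: every interview counts equally, so small villages dominate.
    hits = sum(v["with_a"] for v in vs)
    return hits / (INTERVIEWS_PER_VILLAGE * len(vs))

def pooled_share_weighted(vs):
    # Each interview is weighted by the number of families it represents
    # (village size divided by interviews taken), so every family counts once.
    estimated = sum(v["with_a"] * v["families"] / INTERVIEWS_PER_VILLAGE
                    for v in vs)
    return estimated / sum(v["families"] for v in vs)

print(pooled_share_unweighted(villages))  # about 0.33: inflated by small villages
print(pooled_share_weighted(villages))    # about 0.12: the undistorted estimate
```

The gap between the two figures shows why, without weighting factors at the tabulation stage, a constant-number sample cannot be pooled across villages of different sizes.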
The alternative procedure is to select a constant proportion of families to interview in each village. This was utilized in Brazil, where it was considered essential to be able to pool individual as well as community data. The percentage is calculated by first estimating the total number of individual interviews which can be obtained, taking time and resource constraints into consideration, and dividing this by the total number of families in the selected villages. For example, suppose one determined that in 30 communities, the total number of individual interviews which are possible is 900, and the total number of families living in the 30 case-study communities is 4,500. In this example, the sampling proportion would be 20 per cent (900/4,500).
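The arithmetic of the example above can be sketched briefly as follows; the 120-family village used to show the per-village yield is hypothetical.

```python
# Sketch of the constant-proportion calculation: feasible interviews divided
# by total families in the selected communities gives the sampling fraction.
def sampling_fraction(feasible_interviews, total_families):
    return feasible_interviews / total_families

frac = sampling_fraction(900, 4500)          # 0.20, i.e. 20 per cent
interviews_for_village = round(frac * 120)   # a 120-family village yields 24
print(frac, interviews_for_village)
```

The same fraction is then applied in every village, so each interview represents the same number of families and the sample is self-weighting.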
To achieve a self‑weighted sample, without excessive size differences among units, the ABCAR evaluation in Brazil arbitrarily divided the predefined "communities" into units of relatively equal size. Thus, a very large village would be considered as two villages, in order that its total size would vary between 80 and 150 families. On this basis, a self‑weighted sample with an adequate number of interviews in each case was obtained.
Whether sampling within cases is based on a fixed number of individual interviews or on a fixed percentage, it is necessary to determine both the optimum number of interviews to obtain per case and a sampling method. In countries where the constant number procedure was used, since that specific number is arbitrary anyway, a common-sense decision was made to utilize a number which was easily convertible into percentages. In this sense, numbers that divide evenly into 100 were more convenient for tabulation purposes. These possible numbers were 2, 5, 10, 20, 25 and 50, together with 33 (approximately one third). In most cases 20 or 25 interviews were taken, since these numbers are sufficiently large to guarantee relative representativeness but sufficiently small to ensure that the study could be maintained within resource limits.
Moreover, for the same reasons as in the selection of cases, in all four countries a stratification procedure to select individual interviews was utilized.
In Panama, for example, each family in a community was classified in terms of its occupational and land tenure structure. These were considered to be key factors in explaining intra-community differences. Five categories (or strata) were defined: farmers with less than one half hectare of land (microfarms); farmers with 0.5 to 1 hectares (subfamily sized farms); farmers with 1.1 to 35 hectares (family sized farms); landless agricultural labourers; and non-agricultural occupations. In Brazil, Mexico and Venezuela, each community was stratified differently according to categories which, on the basis of field interviews, appeared most crucial in that community.
The actual sampling procedure in each case consisted of first obtaining a list of all families living in the community, either by means of official records, interviews with local leaders or conducting a census survey. Then, according to predetermined stratification criteria, each family was classified in a category and the required number of interviews was selected from each category in proportion to the weight of each category in the same way as community cases were selected. For example, if 20 per cent of the families occupied the "landless labourer" category, 20 per cent of the sample would be drawn from that category.
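The proportional allocation described above can be sketched as follows. The stratum names follow the Panama example, but the family counts are hypothetical, and the largest-remainder rounding step is an assumption introduced here so that the allocation always sums to the sample size.

```python
# Illustrative sketch of allocating interviews to strata in proportion to
# each stratum's share of the listed families.
from math import floor

def allocate(strata_counts, sample_size):
    total = sum(strata_counts.values())
    raw = {s: sample_size * n / total for s, n in strata_counts.items()}
    alloc = {s: floor(x) for s, x in raw.items()}
    # Largest-remainder rounding: distribute any shortfall to the strata
    # with the largest fractional parts, so the total equals sample_size.
    shortfall = sample_size - sum(alloc.values())
    for s in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:shortfall]:
        alloc[s] += 1
    return alloc

# A hypothetical village listing of 100 families across the five strata.
strata = {"microfarm": 30, "subfamily": 20, "family": 25,
          "landless": 20, "non_agricultural": 5}
alloc = allocate(strata, 20)
print(alloc)  # landless labourers are 20 per cent of families, so 4 of 20 interviews
```

Within each stratum, the required number of families would then be drawn at random from the listing, in the same way as community cases were selected.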
Converting information needs into measurement instruments constitutes the final, crucial conceptual step in developing a monitoring and evaluation system. Two overriding considerations influence this step: information must be collected as efficiently as possible, and measurements must have unambiguous interpretation. Questionnaires for monitoring and evaluation systems have by necessity rather broad coverage in terms of content. In designing them it is necessary to limit the number of questions for each type of subject to a maximum of one or two questions. To ensure that these few questions are valid, they should be defined almost exclusively in terms of behaviour. Thus, for example, in obtaining information about nutrition, questions should be restricted to what people actually eat rather than to such other considerations as their attitude towards different foods. This approach simplifies questionnaire construction in that all questions are straightforward and thus have a simple interpretation at the time of analysis.
In most cases presented here, no attempt was made to measure "attitudes". As they are usually defined in social research, attitudes relate to behavioural predispositions indicated by such questions as: "Do you believe that children should always wash their hands before eating?". For a number of reasons, questions of this type should not be used. First, a long history of research has shown that the relationship between attitudes and behaviour is at best tenuous. As measured by this type of question, the presence of attitudes has seldom shown a strong relationship with behaviour. Secondly, the concept of attitude has certain flaws with regard to behaviour. Attitudes are presumed to exist over time and space ("transsituational") and are measured in isolation, while behaviour is specific in time and space ("situational") and would normally involve a large number of attitudes. [15] Thirdly, it is difficult to construct attitude questions which are not obvious (that is, to which every person interviewed will not respond positively simply because of the phrasing), which have a simple interpretation (that is, that all respondents understand them in the same way) and which truly differentiate among persons interviewed. Because of these difficulties, the only quasi-attitudinal questions used were of the type mentioned earlier, in which individuals are asked about their understanding of the underlying concept behind a practice. Here the interpretation is simple (either the individual knows or does not know) and the question form is unambiguous (the response is open-ended).
As a second general procedure, data should be collected at the most efficient level. Individual interviews are the least efficient method of data gathering, since they involve a large amount of data processing. Therefore, only information that cannot be obtained otherwise should be sought at the individual family level. Where information can be gathered through interviewing local leaders and government agents or through observation, it should be obtained on the community rather than individual level. As a general rule, in the cases described here, every effort was made to reduce the amount of data to be collected to a minimum.
The first step in designing the questionnaires was to take the master list of information needed for monitoring and evaluation, which was prepared beforehand, and decide where the specific items of information could be obtained most easily, whether at the individual, community, regional or national level. This was usually done by using a worksheet, which then served to guide the construction of the questionnaires. In Brazil, for instance, a worksheet of the form shown in figure X was utilized to identify the sources of information.
After determining this master list of questionnaire variables, derived from the list of information needed, the next step was to draft the specific questionnaires. These normally included one for interviews with families, another for interviews and observations at the community level and a third for the other levels. Since the greatest care must be exercised in drafting the individual questionnaires, this was usually done first in each of the four countries. The procedure was to draft questions in working groups, and then submit them to a critical staff discussion to ensure that the questions were adequate for the specific measurement task. For each item included in the questionnaire, the drafters were obliged to state:
(a) What does the item intend to measure? (What will the response provide in terms of information?);
(b) How will the responses to the item be analysed? (Does the quality of the data permit the intended analysis?);
(c) Specifically, what purpose will be served by the information produced by this item in terms of the over‑all evaluation? (Is the question necessary?);
(d) Is this the most efficient way to obtain the information? (Could this question be asked at a different level?).
Although the specific questionnaires derived from this procedure differed in each country according to the purpose of the monitoring and evaluation exercise, the form of the questions was always similar. Some of the key questions are discussed below in more detail. Information on the other questions is presented in annex II.
The first section of each individual or family‑level questionnaire usually consisted of demographic data. These included, among other things, sex and age distributions, level of education and migration patterns. Here, the normal method of collecting information was on the basis of reconstructing family structure by asking a series of questions about each family member and recording it on a summary table such as that presented in figure XI.
The form of questioning was: "Now, I would like to ask you about your family, starting with yourself. How old are you? What is the highest grade which you passed in school? Can you read and write (if not obvious)? What do you do for a living?" This was followed by a sequence of similar questions on dependents.
The data from this table were used to provide at least two types of information. First, attributes of the head of family were provided (age, educational level, literacy). Secondly, a number of indicators about the family as a whole were obtained. These included, among others, the following:
Number of dependents (for calculating per capita income);
Available manpower for family economic activities;
Educational achievement;
Intergenerational occupational mobility;
School attendance rates (by comparing age with whether the child is attending school);
Infant mortality rates.
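The derivation of such indicators from the summary table can be sketched as follows. The roster below is hypothetical, and the field names and the assumed legal schooling ages are illustrative only.

```python
# Illustrative sketch: deriving family-level indicators from a roster of
# members, as recorded in the demographic summary table.
family = [
    {"relation": "head",   "age": 42, "in_school": False, "works": True},
    {"relation": "spouse", "age": 38, "in_school": False, "works": True},
    {"relation": "child",  "age": 14, "in_school": True,  "works": False},
    {"relation": "child",  "age": 9,  "in_school": False, "works": False},
    {"relation": "child",  "age": 4,  "in_school": False, "works": False},
]

SCHOOL_AGES = range(6, 15)  # assumed ages at which attendance is required

# Number of dependents (for per capita income calculations).
dependents = sum(1 for m in family if not m["works"])

# School attendance rate: attending members among those of school age.
school_age = [m for m in family if m["age"] in SCHOOL_AGES]
attendance_rate = sum(m["in_school"] for m in school_age) / len(school_age)

# Children who legally should be in school but are not, triggering the
# follow-up question on absenteeism described below.
absentees = [m for m in school_age if not m["in_school"]]
print(dependents, attendance_rate, [m["age"] for m in absentees])
```

A scan of the roster thus both yields the indicators and flags which follow-up questions the interviewer must ask.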
These demographic data could indicate whether or not a subsequent series of questions should be asked. For example, in Brazil and Mexico, where there was a concern with non-attendance at school, the table would be searched to see whether there were any children who legally should be in school but who were not attending. If such children were found, the head of family was asked "Why is it that the child is not in school?", in order to obtain information on school absenteeism. The content of the response would indicate whether the cause is institutional ("the school here only goes up to a certain grade"), familial ("I need him to work with me in the fields") or personal ("he is sick" or "he doesn't want to go to school").
In addition, as in all questions, a normal procedure is to have the interviewer record any explanation given by the respondent which helps interpret the table or which might explain other variables. For example, in one interview in Mexico, while answering the question on education, a respondent indicated that his son was attending a trade school sponsored by the programme, a datum which was not contained in the questionnaire.
In three countries, a key concern of the monitoring and evaluation systems centred on measuring the economic impact of programme activities in terms of income, production and employment. While it would have been theoretically possible simply to ask how much money was earned the previous year, how much was produced and how many days were worked, experience had shown that such estimates tend to be wildly inaccurate. For farmers as well as labourers, income may not be known precisely because the individual has never calculated it; in any case, the estimate probably does not reflect production costs (it represents gross rather than net income) and usually does not include the value of production for home consumption.
The approach taken, deriving from other studies, [16] was to attempt to reconstruct the previous agricultural (or calendar) year on a crop-by-crop, operation-by-operation basis. The assumption was that, although a farmer may not know his total income, production or cost (since he had no reason to calculate them), he could remember the details of these factors with reasonable accuracy. Clearly, there still existed a wide margin of possible error but, short of keeping specific records during a year (which is evidently not possible), there appeared to be no other way to estimate economic variables. The technical considerations in formulating these questions are described in detail in annex I.
In all four countries, data on the material quality of life were obtained. While levels of living in many studies can involve a large number of possible variables, in practice it has been found that a few are sufficient to give an adequate picture. [17] These include a careful description of the type of housing, as well as the existence of certain material amenities.
Type of housing is usually classified in terms of construction, in the sense that by knowing the types of walls, roofs and floors, the house can be adequately described. In Venezuela, for example, four types of houses could be identified: "ranchos" (dirt floors, adobe walls, thatched roof); "improved ranchos" (with either a tin or slate roof, a cement floor or block walls, but not all three); "rural housing" (a dwelling constructed by the Government with cement floors, concrete block walls and a tin or slate roof); and "modern housing" (cement floor, concrete block walls and a permanent roof, not constructed under a government programme). Similar classifications were used elsewhere, with roof, wall and floor characteristics defined in terms of customary building materials.
In addition, the existence of other amenities is ascertained. These include type of illumination (candles, kerosene lamps, gas lamps, electricity), source of water for home use (non-potable source, well, tank, communal tap or running water), type of waste disposal (none, latrine, toilet in house) and type of cooking (fire on floor, raised fire, kerosene or gas stove). In addition to using these variables to provide an over-all indicator of level of living, many others can be used to measure the impact of specific programmes. For example, in Mexico, one objective of the health and home demonstration programme of PRODESCH was to induce villagers to construct raised fireplaces in order to avoid contamination through preparing food on the ground. Similarly, in Brazil and Mexico, an objective of health campaigns was to induce construction and use of latrines.
As a further element in measuring levels of living, an inventory of material goods was obtained. This was usually done by observing the presence of a selected number of items, including a radio, a bed with purchased mattress, living-room furniture and a sewing machine. The particular items were determined in each country on the basis of experience as to what material goods seemed to be associated with status. In Venezuela, for example, increased wealth usually led to purchase of living-room furniture. In Brazil, the first item tended to be dining-room furniture, while in Mexico it was a radio. Again, the items used to indicate level of living can have a dual measurement purpose. For example, presence of a radio indicates exposure to mass communications media, as well as level of living, and presence of a sewing machine also has economic implications.
Where programmes had as their objective the inducement of change in certain behaviour, especially as regards health and nutrition, a set of questions to measure this was included. The prototype questions attempted to measure adoption of desired practices and levels of information about them. For example, in Brazil, for a series of desired health and nutritional practices, [18] the sequence of questions for each was "Do you do such and such?", followed by "Some people say you ought to do such and such, why do you think they say this?". The underlying assumption is that in the adoption of a practice there are two factors involved: the adoption itself in terms of behaviour, and an understanding of the reason why one should adopt the practice. Although in reality the two are not necessarily related, because people may adopt a practice without understanding the underlying concept or understand the concept without adopting the practice, in effect the two are usually related.
With reference to nutrition patterns, a slightly different form of question was used. Two possible lines of questioning could be used with regard to eating habits and diet: either one could ask on a food item-by-food item basis how frequently the family eats these types of food, as in figure XII, or one could reconstruct the previous day's diet, as in figure XIII. The disadvantage of the first type is that there is a tendency for people to overestimate frequency of certain types of food, because of poor memory or because respondents wish to please the interviewer. Thus, for example, people may overestimate the frequency with which they eat meat. In contrast, asking people "What did you eat this morning for breakfast? What did you eat for supper last night?" etc. gives a better picture of the usual diet in a way in which people can remember. The total day's menu can then be coded according to desirable food groups (proteins, fats and starches, vitamin-rich foods etc.) to rate nutrition. The disadvantage of this procedure is that, in many areas, diet varies by season, according to availability of foods. For example, green vegetables may not always be available year-round. In some countries, where nutrition data were important, both types of questions were used.
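The coding of a reconstructed day's menu into food groups can be sketched as follows. The item-to-group mapping is hypothetical; in practice it would be defined for each country's customary foods.

```python
# Illustrative sketch: coding the previous day's menu into food groups
# to produce a crude diet-diversity rating.
FOOD_GROUPS = {
    "beans": "protein", "meat": "protein", "eggs": "protein",
    "tortilla": "starch", "rice": "starch", "lard": "fat",
    "oranges": "vitamin_rich", "greens": "vitamin_rich",
}

def groups_covered(menu):
    # The set of desirable food groups represented in the day's menu.
    return {FOOD_GROUPS[item] for item in menu if item in FOOD_GROUPS}

day = ["tortilla", "beans", "rice", "oranges"]  # a hypothetical day's menu
covered = groups_covered(day)
rating = len(covered)  # groups covered out of the four defined above
print(covered, rating)
```

Seasonal variation would still have to be handled separately, for example by repeating the question across visits or supplementing it with the frequency table of figure XII.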
Organizational participation by individuals was considered to consist of three aspects: number of organizations in which there was participation, frequency of participation and nature of participation. The way in which an individual participated could range from mere passive attendance, through contributing money or labour to activities, to active self‑expression on decisions. On this basis, organizational participation was recorded in a table such as that shown in figure XIV, using the procedure of first asking "What organizations exist in this community?", and then asking for each of the organizations mentioned "Do you belong to X organization? How often do you attend meetings? Do you contribute money or work to its activities? Do you express your opinions in open meetings?".
In Mexico, where a major programme component involved obtaining voluntary labour for communal self‑help projects through the use of food incentives, wage incentives and promotional activities, a series of questions was asked on the degree of participation, by activity and type of incentive, as shown in figure XV.
Leadership patterns constituted the second aspect of local organizations, and development of local leadership, through training and promotion, was a major objective of the programmes evaluated in all four countries. Two types of data were required from questionnaires addressed to individuals: leadership structure and leadership type. In the first case, the study sought to find out whether an identifiable structure existed, in the sense of the presence of clearly defined leaders, and whether this was specialized by subject-matter. The traditional research method for identifying leadership structure is to ask each member of a community the names of persons who exercise any influence. However, this cannot be done through a small sample. [19] Since only a sample of local residents was used in the baseline studies, the sociometric analysis would have provided only a rough approximation. To determine leadership type, it was therefore decided to draw on the distinction postulated in sociological theory between "traditional" leadership, where status is based on ascription or the personal attributes of the leader, and "modern" leadership, where status is based on achievement, or what the leader does. [20]
Examples of the questions used for obtaining this information include "Who do you think is the one person who most helps the community resolve its problems? Why do you think that this person is the one who most helps the community resolve its problems?".
Similar information was obtained by asking who is the most respected person, who has the most influence in making decisions, who is the person to whom the respondents would go for help in specific types of problems (an illness, agricultural or economic problem). In each instance, the identification question was followed by a question asking why that person was selected. In analysing the first question, what was looked for was whether there were consistently a few names mentioned by all respondents indicating an identifiable leadership structure, or many or no names mentioned (no identifiable structure). In analysing the "why" question, reasons were coded as "ascriptive" characteristics that referred to what the leader was in terms of personal characteristics ("he/she is honest, intelligent, wise or old") or as "achievement" characteristics that referred to what the leader did ("He/she always helps us"; "He/she is always working"; "He/she is always in the forefront of the struggle").
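In the studies this coding of "why" responses was done by the field teams; as a rough illustration only, a keyword-based version can be sketched as follows. The keyword lists are hypothetical and far cruder than human coding of open-ended answers.

```python
# Illustrative sketch: classifying a "why" response as referring to what the
# leader is (ascriptive) or what the leader does (achievement).
ASCRIPTIVE = {"honest", "intelligent", "wise", "old"}      # personal attributes
ACHIEVEMENT = {"helps", "working", "works", "struggle"}    # actions

def code_reason(text):
    words = set(text.lower().replace(".", "").split())
    if words & ACHIEVEMENT:
        return "achievement"
    if words & ASCRIPTIVE:
        return "ascriptive"
    return "uncodable"

print(code_reason("He is honest and wise"))   # ascriptive
print(code_reason("She is always working"))   # achievement
```

Tallying these codes across respondents, together with the concentration of names in the identification question, yields the two leadership measures described above.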
While the studies did not try to measure attitudes, preferring instead to concentrate on behaviour, it was recognized that psychological and cognitive aspects of behaviour were of great importance. The cognitive aspect of greatest importance was deemed to be the mental ability of individuals to conceptualize and to resolve problems which they perceived. Development of such abilities was a key objective of promotional and educational programmes. Based on previous research, [21] it was decided to utilize a series of questions which would lead the person interviewed through a problem-solving sequence. Then, on the basis of the structure of the person's responses, conclusions could be drawn about his/her cognitive structure as regards problem-solving abilities.
This variable was then assigned the role of measuring the effects of group participation. The questions formulated and the coding used were somewhat novel and have not yet been widely applied. Nevertheless, wherever used, the questions yielded useful information about the effectiveness of promotional programmes and group participation. The question formulation and coding procedures are described in annex I.
The procedures for designing the community‑level questionnaire were similar to those for individuals and families, although less attention was given to the form of questions since, on any item, data would be obtained from a number of sources. Material to be obtained in the community‑level questionnaire fell broadly into four categories: general background information on the community itself, information about specific aspects of community life which served to explain individual responses, information about the actions of outside change agents and information about local leaders.
General background information about the community usually included such basic data as location, politico‑administrative status, population, size and types of existing community services (for example, roads, clinics, lighting, streets). In addition, it was found useful to include a descriptive section on the history of the community, including such information as when it was founded, patterns of migration, famous people who came from the community and major local traditions. Moreover, data were collected on major events of the past five years, which could help place the other information in a proper perspective. For example, in Mexico, in a number of villages there had been three successive years of drought, which helped explain the low economic achievements.
Information about specific aspects of community life varied according to the intent of the study. However, at a minimum, it was necessary to ascertain community characteristics in the main areas of interest for evaluation: economic conditions, social structure and services and organizational dynamics. For example, data were obtained about such economic aspects as the identity of large landlords, store owners, money‑lenders, agricultural buyers and suppliers of factor inputs. Similarly, data were required about major social institutions, such as local schools, medical services and churches, in terms of such factors as number of people attending, quality of equipment and problems perceived by the leaders of these institutions (the school‑teacher, doctor or nurse and priest).
Most information required for analysis of local organizations could not be obtained from individual interviews with heads of family. Therefore, data had to be acquired from local leaders about the purposes and goals of the organizations, types of activities undertaken, frequency of meetings and major problems. Usually this was obtained through rather unstructured in‑depth interviews with local leaders. In Venezuela, an attempt was made to attend one or more meetings of key organizations to observe the patterns of interaction between leaders and members and thus assess the quality of participation obtained by the organizations. In some countries, it was found useful to interview identified leaders to reconstruct the history of the organization over the past few years. However, to ensure consistency among cases, the general lines of interviewing, as well as systematic means for recording information, were specified, as was also the case in the community‑level questionnaire.
Since the main purpose of choosing the community as the level of generalization for the study was to relate activities of change agents with results observable in the community, it was essential to obtain information about the nature of these activities. Change agents can be considered to be of two types: outside agents such as government promoters or service officials (extension agents, social workers, community development promoters, nurses and school‑teachers) and local people who act as change agents (voluntary organization leaders, health auxiliaries). For both types, the questions to be answered included the following:
What are the specific objectives which you are seeking to achieve?
How do you go about attempting to achieve these objectives? (That is, what approaches or techniques do you use?)
What major gains have you perceived in the past year?
What are the major problems which you have encountered, and why do you think these problems have occurred?
What types of training have you received for your work, when and for how long?
The purpose of this series of questions was to present, in broad outline, the types of actions which change agents undertake in their attempt to induce changes. In addition, for outside change agents, information was sought about frequency of contact with the community, facilities for work (transportation, equipment), supervision, reporting and administrative matters. For local change agents in Brazil, for example, the same types of questions measuring adoption and reasoning used in the individual interviews (see sect. E, subsect. it, above) were applied. It was assumed that if desired behaviour patterns were not adopted by local auxiliaries and the underlying rationale not understood by them, then it was doubtful that local residents being contacted by the change agents would themselves adopt these patterns. Similarly, if local leaders did not have a cognitive ability to solve problems, it would be unreasonable to expect this ability in their followers.
In practice, the community‑level questionnaire was less structured and longer than the individual questionnaires and, to a large extent, it was considered to be the "reference work" on the community. In addition, questionnaires were also drafted for regional and national levels where administrative and planning information was required. For example, if, as in the case of Mexico, one was dealing with an integrated regional development programme, data on how participating agencies differed in their perception of programme objectives or had incompatible administrative or planning procedures were assumed to be of great importance in analysing field‑level co‑ordination among agencies.
Since the baseline studies consisted of a series of case studies, the model for field work procedures was not unlike that usually used in an academic case study. [22] In each sampled community, the studies were made by a team utilizing the combination of survey, in-depth interviews and participant observation previously mentioned. The objective of the team was to obtain data in such a way that it was possible to understand in a total sense the dynamics of the communities being studied. In order to achieve this, it was decided in all four countries that the tabulation and much of the analysis of the data would be made by the team which studied the community. In academic research, it is not normal practice to delegate the task of tabulation and analysis to the interviewers because it is considered that they will code subjectively, make tabulation errors or be unable to analyse the results adequately.
In each of the four countries, practical considerations dictated that the interview team, consisting of a supervisory‑level official (Brazil and Panama) or central office professional (Mexico and Venezuela) and four or five local‑level officials or promoters, would have to obtain the necessary information and also begin its analysis in the field. These considerations included a need to rapidly reduce the data to a manageable size, cut the time required for tabulation and analysis at the central level, minimize the tedium of tabulation by spreading the task among as many people as possible, obtain more adequate information about communities by providing explanations for apparent contradictions before the data reached the central office, as well as to ensure that field staff participating in the study learned from the experience. One key to making the evaluation system work rapidly and at a low cost was this team approach to conducting and analysing the case studies. The reasons for taking this approach are given in annex III.
In most research studies, it is not customary to inform the people being interviewed of the purpose of the study, except in the most general terms, under the assumption that this would "contaminate" the data. Moreover, the results of the study are almost never made known to the people who are the objects of the study. As a result, interviewers tend to descend upon a community like "thieves in the night", obtaining information for some vague purpose and then disappearing. Based on such experience, it was felt in each of the four countries that people would be much more likely to co-operate in terms of time and information if they were aware of the purpose of the study. In addition, it was felt that the study, involving a certain allocation of scarce resources, should have, if possible, an impact on the programme operations in the communities.
As a result, a normal procedure was to begin the study by explaining its purposes in detail to all persons concerned. This tended to create a favourable research climate and to increase the level of candour in supplying information. Moreover, an attempt was made to involve the community in the data analysis by reporting the major conclusions of the study to it. This helped stimulate communities to think about their problems in an objective manner and, through dialogue on the preliminary findings, to refine the conclusions. The studies thus served as both a promotional and an organizational technique.
Upon completion of the questionnaires and sampling design, a number of steps were taken to execute the field work for the case study. The first was to pay an initial visit to the community. This visit had three objectives: to contact local leaders and schedule a community meeting, to obtain a list of community residents for sampling purposes and to gain an idea of the physical context of the community in order to plan the survey, particularly the time to be allocated for conducting the case study. Following this visit, the next step was usually to interview the government officials working with the community but not residing there.
Once the team arrived in the community, the first stage of the field work consisted of interviewing local leaders and any government officials residing in the community. Having thus obtained the views held by both local leaders and outside change agents about the dynamics of the community, the team discussed these prior to embarking on the interviews with heads of family in order to take note of those aspects of community life which were to be especially observed during the interviews. For example, when the community‑level interviews indicated that a major problem was leadership, then team members paid particular attention to responses to questions dealing with leadership and participatory aspects of community life.
The next stage of field work consisted of the meeting with the community which was arranged during the first visit there by the study team. This meeting served a number of purposes: to motivate the community to co‑operate fully in the study, to explain the study procedures and to observe the behaviour of the community in a
group setting. In explaining the purpose of the study, it was important to convince the community that the information it provided would benefit the residents, not directly but indirectly, through improving the performance of government programmes affecting the community. It was usually prudent to stress that the study itself would not resolve their problems, but would instead help the community solve its own problems and also assist other government agencies in serving the community more effectively. Experience in the four countries suggests that this realistic approach to explaining the study was readily accepted by local residents, eliminated fears of possible hidden purposes behind the collection of information and tended to motivate people both to express their problems frankly and to supply economic and social information in full.
In many communities, so much motivation to assist the study was generated that everyone wished to be interviewed and have their views made known. Since, in most cases, only a sample of families would be interviewed, it was prudent to explain the whys and hows of sampling and interview procedures. With regard to interviews, it was stressed that the data collected from individual responses would remain anonymous and confidential, and would not be used for purposes other than the study itself. With regard to sampling, in those areas where lotteries were popular, it was found useful to make an analogy between sampling and the lottery. Another useful (and realistic) explanation offered was that for lack of time only a small number of families could be interviewed. As a result, each family being interviewed was to represent several other families, and accordingly had a responsibility to give honest answers. In the Chiapas highlands of Mexico, where lotteries were less familiar to people, sampling procedures were demonstrated by drawing the sample in the open meeting of the community. In other countries, the sample was drawn privately. Clearly, the precise way of handling this aspect of field work will vary from case to case, depending on local conditions.
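The lottery‑style draw described above amounts to simple random sampling without replacement from the list of residents obtained during the initial visit. A minimal sketch in Python, with hypothetical family names and numbers chosen purely for illustration:

```python
import random

# Hypothetical roster of heads of family compiled during the initial visit.
family_heads = [f"Family {i}" for i in range(1, 101)]  # 100 families

random.seed(7)    # fixed seed so the same draw can be reproduced
sample_size = 20  # e.g. one family in five, as time allows

# The "lottery": a simple random sample drawn without replacement,
# so no family can be selected twice.
sample = random.sample(family_heads, sample_size)

print(len(sample))       # 20 families selected
print(len(set(sample)))  # 20 distinct families -- no repeats
```

Whether the draw is performed publicly (as in Chiapas) or privately, the procedure is the same; only the presentation to the community differs.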
During the general meeting with the community, at least one team member was entrusted with the task of noting reactions of the people present. Particular attention was paid to the way in which local leaders directed the meeting: Did they permit and stimulate participation? Were they demagogic? Did they maintain order? Attention was also paid to the way in which people responded: Were they attentive? Did they express their opinions openly? It was also found useful if, before and after the meeting, team members engaged in informal and unstructured discussions with community members, both to answer questions which might have arisen about the study and to obtain impressions about the community itself. In Venezuela, these discussions served to provide a view of the current problems bothering the community as a whole.
Family interviews, which constituted the core of the field work, were rather straightforward and consisted of filling out the prescribed questionnaire. This was primarily the task of team members, while the team leader completed the community‑level interviews. It was found preferable to conduct the interviews in the family home, since considerable information about living conditions could then be observed directly rather than asked. It was also found preferable to undertake the interviews without neighbours present, since otherwise responses would have tended to become mixed. In the structured part of the interview, the correct response was more important than the initial response. Thus, if in answering a subsequent question a respondent contradicted or altered a previous response, he was asked to clarify the contradiction. Similarly, if there was an impression that the respondent was not giving correct information, he was asked to explain. This was particularly true of the economic data, where peasants either might not have known the information requested with any precision or might have been less than candid. For example, in Venezuela one respondent reported a level of production of corn which, in terms of the amount of land he said he cultivated, was plainly impossible under local conditions (as indicated by the interview with the local extension agent). When this was called to his attention, the respondent altered his response about the amount of land he had under cultivation.
After completing the structured aspect of the interview, it was found convenient to engage in informal discussions with the family to satisfy any doubts which the family might have had about the study and to clarify any points not included in the questionnaire. Frequently, a respondent was more open outside the framework of structured questions. These responses were subsequently recorded on the questionnaires.
It was also found useful to question the respondent about his/her reaction to the interview, in terms of whether he/she felt that any information had been omitted. This type of feedback was used to improve questionnaires in subsequent studies.
Once the field work was completed, the team turned to the task of tabulating the data. This consisted of recording the responses for the community on a previously prepared tabulation form, using the statistical techniques appropriate to each question. Generally, this was the percentage distribution of responses among various categories, such as housing (for example, X per cent of families have modern houses, Y per cent semi‑modern and Z per cent rustic). Economic data (such as production, income and costs) were usually expressed in terms of the community mean and median as well as the maximum and minimum values. [23] These tabulations were usually made before the team left the community, so that any missing or apparently erroneous information could be corrected by returning to the respondent for clarification.
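The two tabulations described above — percentage distributions for categorical responses and mean, median, maximum and minimum for economic data — can be sketched in Python. The housing categories and income figures below are hypothetical, invented solely to illustrate the arithmetic:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical responses from one community's questionnaires.
housing = ["rustic"] * 12 + ["semi-modern"] * 6 + ["modern"] * 2
income = [300, 450, 500, 520, 610, 700, 800, 950, 1200, 2500]  # local currency

# Percentage distribution of a categorical response.
counts = Counter(housing)
pct = {category: 100 * n / len(housing) for category, n in counts.items()}
# -> rustic 60%, semi-modern 30%, modern 10%

# Economic data: community mean, median, maximum and minimum.
summary = {
    "mean": mean(income),
    "median": median(income),
    "max": max(income),
    "min": min(income),
}
```

Note how the mean (853) exceeds the median (655) here: a single large income pulls the mean upward, which is why both measures were tabulated for economic data.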
In addition to tabulating the community data, the team tried to analyse and explain the outstanding results which were obtained. This explanation was usually based on discussion among team members about their observations. It often led to a series of tentative conclusions about the reason why certain phenomena were observed in the community. These conclusions were then recorded for transmittal to the central office.
Where, as a normal procedure, a report on the results of the study was given to the community, one of two steps was taken. Either a future time was agreed upon when someone would return to the community to make the report, or on the basis of the preliminary tabulation a report was given before the team left the area. In either case, it was felt important to meet with the community to thank them for their co‑operation and to have a final exchange of views.
[1] The overriding assumption in these systems is that scientific research is basically logical reasoning utilized systematically. This is discussed in detail in the field manual of Centro Nacional de Capacitación e Investigación Aplicada en el Desarrollo Regional y Local (CIADEC), El Modelo PREB/CIADEC para la Evaluación Socioeconómica de un Pequeño Proyecto de Desarrollo (Maracay, 1973).
[2] Rio Grande do Norte, Minas Gerais and Rio Grande do Sul.
[3] Jefatura de Operaciones de PRODESCH, Secretaría de Educación Pública (Federal), Dirección General de Educación (State of Chiapas), Secretaría de Agricultura y Ganadería (Federal), Dirección General de Agricultura (State), Secretaría de Salud y Asistencia (Federal), Instituto Nacional Indigenista (Federal), Secretaría de Obras Públicas (Federal) and Dirección General de Asuntos Indígenas (State).
[4] See, for example, the discussion in William G. Cochran, Sampling Techniques (New York, John Wiley and Sons, 1953) and A Short Manual on Sampling Vol. I. Elements of Sample Survey Theory (United Nations publication, Sales No. E.72.XVII.5).
[5] If, among 1,000 communities only one is sampled, it is likely that the sample will not be as representative as a sample with 100 or 500 communities.
[6] Charles Backstrom and Gerald Hursch, Survey Research (Evanston, Illinois, Northwestern University Press, 1963), pp. 32‑33.
[7] CIADEC, El Modelo PREB/CIADEC ...
[8] In Mexico, available personnel consisted of 15 professionals, plus 50 promoters, primary school teachers, extension agents etc., to make up 15 teams of 6, and an average of 20 interviews per case.
[9] DIGEDECOM‑MIDA, "La evaluación ..."
[10] Census data did not provide this figure directly, so the percentage was calculated according to the following formula: percentage dependent on agriculture = A/(T - NE), where A is the number of residents of the submunicipalities economically active in agriculture, T is the total population over 10 years of age and NE is the number of people 10 years of age or older who are not economically active (housewives, students, unemployed, retired people etc.).
[11] Cutting points were natural in that there were few "ambiguous" cases. For example, there were no cases in the 50‑60 per cent range active in agriculture.
[12] "El sistema de evaluación continua del PRODESCH", Course Notes No. 10/73 (San Cristóbal de las Casas, Chiapas, Mexico, August 1973) (mimeographed), pp. 3-4.
[13] In Mexico several alternative stratification factors were tried and tested according to this procedure. The best level of precision with the use of other factors was a probable error of 13 per cent, too high for the purposes of the baseline studies. Had the set of factors yielding a 6 per cent probable error not been found, the only way of reducing error would have been to increase the number of cases to be sampled.
[14] Representativeness and absence of distortion in community data are ensured by the method of selecting cases to study.
[15] See the discussion in Richard F. Carter, "Communication and effective relations", Journalism Quarterly 5 (1965), pp. 202‑212; Milton Rokeach, "Attitude change and behaviour change", Public Opinion Quarterly 30 (1966), pp. 529‑550, and John R. Mathiason, "Patterns of powerlessness among urban poor: towards the use of mass communications for rapid social change", Studies in Comparative International Development, vol. 7 (spring 1972), pp. 64-84.
[16] See, for example Universidad Central de Venezuela, Centro de Estudios del Desarrollo, Reforma Agraria vol. 6, La Metodologia de la Encuesta Nacional de Beneficiarios (Caracas, 1969).
[17] Frank M. Andrews, "Social indicators and socioeconomic development", Journal of Developing Areas, vol. 8, No. 1 (October 1973), pp. 3‑12.
[18] Use of latrines, brushing teeth, bathing regularly, receiving vaccinations and immunizations, pre‑ and post‑natal care, boiling or treating water, balanced diet etc. See Associação Brasileira de Crédito e Assistência Rural (ABCAR), Avaliação dos Trabalhos nos Projetos de Bem‑Estar: Plano Básico da Avaliação (Rio de Janeiro, 1973).
For other types of measures on nutrition, see Michael C. Latham, Planning and Evaluation...
[19] See, for example, Elihu Katz and Paul Lazarsfeld, Personal Influence: The Part Played by People in the Flow of Mass Communications (Glencoe, Illinois, The Free Press, 1960).
[20] See, for example, Florence R. Kluckhohn and Fred I. Strodtbeck, Variations in Value Orientations (Evanston, Illinois, Row, Peterson and Company, 1961).
[21] John R. Mathiason, "Patterns of powerlessness among urban poor: toward the use of mass communications for rapid social change", Studies in Comparative International Development, vol. 7, No. 1 (1972), pp.61-81.
[22] A more sensitive discussion of field procedures can be found in the CIADEC field manual, El Modelo PREB/CIADEC para la Evaluación ...
[23] Mean and median are discussed in greater detail below in table 8.