The best way to understand the findings of social surveys is to look up the exact questions addressed to the survey respondents. Wordings of survey questions might not be reported by the mass media, for lack of space or journalistic interest, but should be in the technical reports of the survey doer, in line with standard codes of professional ethics for opinion research.
Thus, to clearly understand the recent crime victimization report of Social Weather Stations (SWS), see the technical report, “Families victimized by crime rise to 6.1 percent in Sept. 2024; victimization by cybercrimes rises to 7.2 percent; neighborhood fears stay high” (www.sws.org.ph, 11/9/2024). It shows that the SWS numbers are not mere “perceptions” of crime, but actual instances, reported by members of a statistically representative national sample of families, of being burglarized at home, robbed on the street, having a motor vehicle stolen, being hurt by physical violence, or victimized by cybercrime in the past six months.
The SWS survey question-wordings have been very carefully maintained from the start—Feb. 1989 for (1) street robbery, (2) home burglary, and (3) physical violence, April 1993 for (4) carnapping, and June 2023 for (5) cybercrime—to the present, since the SWS intent is to monitor well-being over time. The SWS surveys have been quarterly since 1992, with no break except for the 2020 pandemic. All the raw data, and all the questions asked, are in the SWS archives.
Incidentally, the latest incidence of victimization by cybercrime is higher than the combined incidence of all the other crimes listed. Cybercrime is the new scourge. The fears of crime in the neighborhood, on the other hand, are perceptions indeed, but SWS has separate numbers for them.
The victimizations experienced by the people are much higher than those they report to the police—we know this from past SWS surveys, sponsored by the Philippine National Police itself. (Reports of commissioned SWS surveys are open to research after a temporary embargo. The main reason for non-reporting of crime is the expectation that nothing will come of it anyway.)
Survey researchers control only the questions, not the answers. I remember from the 1990s that the American Association for Public Opinion Research held a contest for the best slogan for its annual conference. The one I liked most was: “We may not have all the answers; but we have all the questions.” If one is already sure about the answer, then there’s no need to ask the question.
The rationale for disclosing a survey questionnaire is to demonstrate that it has no leading questions. If a survey sponsor asks that some leading questions be included, the survey doer may reject the request, citing the code of professional ethics as justification. Ethical research engenders respect, and repeat business.
The answers to a multiple-choice question are part of the question. A survey question starts with its introduction, if any, and ends with the full listing of answers to choose from. If the research is seeking to explain some belief, attitude, intention, or behavior, then it is impractical to simply ask “Why?” This will easily result in as many one- or two-liners as there are respondents, i.e., hundreds, even thousands, of lines to read through and try to classify.
As a general rule, it’s more efficient for the researcher to group the possible answers according to the competing hypotheses: Reason A, Reason B, Reason C, etc. It’s alright to allow a few (say, up to three) multiple answers. Catch-alls like “none of the above” and “other” are alright if they are expected to get only single-digit results; be ready for some “no response” and “refused to answer” too.
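To illustrate the bookkeeping this implies, here is a minimal sketch in Python; the category names, the cap of three answers, and the sample data are all invented for illustration, not taken from any SWS questionnaire.

```python
from collections import Counter

# Hypothetical answer categories grouped by competing hypotheses,
# plus the usual catch-alls; every name here is invented for illustration.
CATEGORIES = ["Reason A", "Reason B", "Reason C",
              "Other", "None of the above", "No response", "Refused to answer"]
MAX_ANSWERS = 3  # allow up to three multiple answers per respondent

def tally(responses):
    """Count mentions of each category; each respondent may give up to MAX_ANSWERS."""
    counts = Counter({c: 0 for c in CATEGORIES})
    for picks in responses:
        for choice in picks[:MAX_ANSWERS]:  # ignore anything beyond the cap
            if choice in counts:
                counts[choice] += 1
    return counts

# Invented data: each inner list is one respondent's set of answers.
sample = [["Reason A"], ["Reason B", "Reason C"], ["Other"], ["Refused to answer"]]
counts = tally(sample)
for category in CATEGORIES:
    n = counts[category]
    print(f"{category}: {n} ({100 * n / len(sample):.1f}% of respondents)")
```

Because respondents may give more than one answer, the percentages can legitimately add up to more than 100.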
How about the “who will you vote for” question? As a general rule, a pre-election pollster should present the candidates’ names in exactly the same way as in the Commission on Elections (Comelec) protocol on election day: with correct spelling, nickname, party affiliation, and so forth, in alphabetical order (for senators) or in numerical order (for party list), as pertinent.
Note that the Comelec introduced the assignment of random numbers to the party lists in recognition that mere location on the list of candidates can affect the choice of voters. (Suppose that, at the end of the Comelec list, there is an empty box without a name. I wouldn’t be surprised if some voters shade the unnamed box, thinking it might be for a mystery candidate!)
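For the mechanics of that ordering, a small sketch in Python follows; the candidate names, parties, and party-list numbers are all made up for illustration.

```python
# Invented examples only: present the names exactly as the ballot does.
senatorial_candidates = ["Reyes, Ana (Party X)", "Cruz, Ben (Party Y)", "Santos, Cora (Ind.)"]
party_list_groups = {17: "Alpha Group", 3: "Beta Group", 42: "Gamma Group"}

# Senatorial candidates: strictly alphabetical order.
for name in sorted(senatorial_candidates):
    print(name)

# Party-list groups: in the order of their randomly assigned ballot numbers.
for number in sorted(party_list_groups):
    print(f"{number}. {party_list_groups[number]}")
```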
If dissatisfied with the questions of a survey, there’s the recourse of designing and implementing one’s own questions, and then comparing the answers. Competition is welcome; it is the means of validating science.
—————-
Contact: mahar.mangahas@sws.org.ph.