
Data-Information Conversion in Test Interpretation

This article is concerned with objective procedures for converting test scores and other data into information that is relevant to a counselee's educational and vocational plans, decisions, or problems. Validity studies are crucial to data-information conversion procedures. However, the scattered, non-localized validity studies often reported in support of an instrument are seldom of direct help to the counselor who has Fred and a set of test scores before him. While it is true that an accumulation of studies performed within a theoretical framework may support various uses of a particular test, the task of converting a counselee's test scores into usable information is left undone. Typically, the counselor can find the standing of a counselee in some norm group; after that he is on his own. Professional knowledge, clinical judgment, and personal sensitivity always will play crucial roles in test interpretation. However, objective data-information conversion procedures can make the counselor's job easier. Just what does a percentile rank of 63 on the XYZ Mechanical Aptitude Test say to Fred and his counselor?

Local Validity Data and Decision Making

Almost 15 years have passed since Dyer (1957) made a convincing case for local studies of test validity. Dyer was only one of a host of measurement specialists who cautioned test users against accepting tests on the basis of face validity, or against assuming that one or two validity studies conducted in some other setting provided sufficient evidence that a test would be useful in their setting and for their purposes. Recent reviews of validation research (Ghiselli, 1966; Prediger, Waple, and Nusbaum, 1968) have reinforced this caution. Dyer saw little help with the development of local validity data coming from statisticians and professional researchers. Instead, he felt the job must fall to the local practitioner.



The same thought was reflected 8 years later by Clarke, Gelatt, and Levine (1965), who placed the need for local validity data in the context of decision theory and presented a decision-making paradigm for local guidance research. Attention was focused on the process of decision making, with information on the possible outcomes of various courses of action being seen as a necessary if not sufficient condition for wise decisions. Examples of local validity studies conducted in the Palo Alto, California, schools were given to illustrate the development of objective probabilities useful in educational planning. As with Dyer, use of experience (expectancy) tables was emphasized. Subsequent articles (Gelatt and Clarke, 1967; Katz, 1966, 1969; Thoresen and Mehrens, 1967) elaborated on the role of objective probabilities in decision making, the influence of these probabilities on subjective probabilities, and the interaction between subjective probabilities, choice option utilities, and personal values.

Katz (1963, 1966), in particular, showed how the decision-making process is related to the broader process of vocational development. Results from the massive Project TALENT validation studies also have been placed firmly within the context of vocational development theory and decision theory (Cooley and Lohnes, 1968). The era in which the Parsonian concept of test interpretation could be viewed as the epitome of educational and vocational guidance is past. However, the above studies and formulations leave little doubt about the continued importance of test information in the vocational development process.

Bridges between Data and Information

Goldman (1961) described three objective bridges between test scores and their meaning for the counselee: the norm bridge, the regression bridge, and the discriminant bridge. Most of our current data-information conversion procedures consist of some form of the norm bridge. As Goldman notes, the norm bridge is an incomplete bridge since test norms simply permit one to estimate standing in some group and do not, per se, indicate the implications of this standing. The regression bridge, however, is a complete bridge from test scores to their implications and, as such, readily lends itself to data-information conversion. Usually, the implications are in the form of success predictions obtained via experience tables or regression equations.
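The experience-table form of the regression bridge can be sketched in a few lines. The sketch below tabulates, within each test-score interval, the proportion of past students who succeeded (here, earned a grade of B or better); all scores and outcomes are invented for illustration, and a local study would of course draw on its own records.

```python
# Sketch of an experience (expectancy) table for local validation.
# Each record pairs a test score with a success flag (1 = grade of B
# or better). All data are invented for illustration.
def expectancy_table(records, bins):
    """Proportion of successes within each half-open score interval."""
    table = {}
    for lo, hi in zip(bins, bins[1:]):
        cell = [ok for score, ok in records if lo <= score < hi]
        if cell:
            table[(lo, hi)] = sum(cell) / len(cell)
    return table

records = [(42, 0), (48, 0), (51, 1), (55, 0), (58, 1),
           (63, 1), (66, 1), (71, 1), (74, 1), (79, 1)]
table = expectancy_table(records, bins=[40, 55, 70, 85])
# table[(55, 70)] is the observed chance of success for students
# scoring in the 55-69 range.
```

A counselee's test score is then interpreted by locating the interval it falls in and reading off the observed proportion of successes for past students in that interval.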

The third bridge noted by Goldman, the discriminant bridge, provides an objective measure of a counselee's similarity to various criterion groups. Discriminant analysis techniques, when combined with the centour score procedures developed by Tiedeman, Bryan, and Rulon (1951), permit the comparison of a counselee's test results with those of various criterion groups along the major dimensions of test data that differentiate the groups. The complementary nature of similarity and success estimates was first discussed some 20 years ago (Rulon, 1951; Tiedeman et al., 1951). Many counselors, however, are unfamiliar with the characteristics of similarity (centour) scores or their potential role in test interpretation since these topics have received little attention in testing texts or test interpretation manuals.

Data-Information Conversion via Similarity Scores

Consider the information needs of Fred, a high school senior who is thinking about enrolling in a post-high school vocational-technical program. Centour score procedures applied to Fred's high school grade record and test scores could result in a report indicating Fred's similarity to successful and satisfied students in various vocational programs. In the example that follows, the similarity scores shown in parentheses after each of the vocational programs are on a scale running from 0 to 100 with 100 representing the highest degree of similarity. The closer Fred's scores on the relevant tests are to the test scores of the typical successful and satisfied student in a vocational program, the higher his similarity score will be for that program. Fred's similarity score report might look like this: vocational horticulture (87), carpentry (41), commercial art (28), auto body (26), distributive education (25), auto mechanics (14), radio-TV repair (3) and data processing (1).
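A rough sketch of how a similarity score of this kind might be computed is given below. For simplicity it assumes just two uncorrelated test dimensions, so that the chi-square probability has a closed form; the actual centour procedure of Tiedeman, Bryan, and Rulon (1951) works in the full discriminant space with the groups' covariance structure. All group statistics and Fred's scores here are invented.

```python
import math

# Sketch of a centour (similarity) score: the percentage of a
# criterion group whose profiles lie farther from the group centroid
# than the counselee's profile does. With two uncorrelated dimensions,
# the squared standardized distance d2 behaves like a chi-square with
# 2 df, and P(chi-square > d2) has the closed form exp(-d2 / 2).
def centour(profile, centroid, sds):
    """Similarity of `profile` to one criterion group, on a 0-100 scale."""
    d2 = sum(((x - m) / s) ** 2
             for x, m, s in zip(profile, centroid, sds))
    return 100.0 * math.exp(-d2 / 2.0)

fred = (58.0, 61.0)  # e.g., a mechanical-aptitude and an interest score
horticulture = centour(fred, centroid=(57.0, 63.0), sds=(8.0, 9.0))
data_processing = centour(fred, centroid=(72.0, 40.0), sds=(6.0, 7.0))
```

Because Fred's (invented) profile sits near the horticulture centroid, his centour score for that program is high; it lies far from the data-processing centroid, so that score is near zero, mirroring the pattern in his report above.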

Thus, on the basis of the measures used, Fred's aptitudes and interests are most similar to those of students in vocational horticulture. Carpentry ranks second and three other programs are in an approximate tie for third. Fred is least similar to students in data processing and radio-TV repair.

In this example, test data have been transformed into information that is directly relevant to one of the major functions of tests in educational and vocational guidance: facilitating exploratory behavior. Fred's counselor might use the similarity scores quite advantageously in stimulating Fred to explore the options available to him. Of course, the similarity scores should not be used alone. Their potential value lies in suggesting vocational program possibilities that might not have been recognized otherwise. The degree to which Fred explores these possibilities will be a function of his value system and his opportunities.

Secondary Role of Success Estimates

Success estimates obtained from regression analysis or expectancy tables also can be used to facilitate exploratory behavior. For example, Fred might be encouraged to explore the vocational programs for which he is predicted to receive the highest grades. However, estimates of success might be more appropriately incorporated into the actual exploration process where they could take their place along with a host of other relevant considerations. After all, Fred may not place much value on making high grades. His similarity scores could identify areas in which he would have a reasonable chance for success. His probable level of success could then be determined upon further exploration. Thus, a two-stage strategy is suggested with similarity scores being used to stimulate and facilitate exploration and success estimates being one of the many things to be considered during the process of exploration.
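A success estimate of the regression-equation kind might be sketched as follows, with a single predictor fitted by ordinary least squares to an invented local validation sample; a real application would involve several predictors and a locally defined success criterion.

```python
# Sketch of the regression route to a success estimate: fit an
# ordinary least-squares line to an invented local validation sample,
# then read off a predicted criterion value for a new counselee.
def fit_line(scores, grades):
    """Ordinary least-squares intercept and slope for one predictor."""
    n = len(scores)
    mx = sum(scores) / n
    my = sum(grades) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(scores, grades))
    sxx = sum((x - mx) ** 2 for x in scores)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Invented validation sample: test scores and vocational-program GPAs.
scores = [40, 50, 55, 60, 70, 75, 80, 90]
grades = [1.8, 2.1, 2.4, 2.5, 2.9, 3.0, 3.3, 3.6]

intercept, slope = fit_line(scores, grades)
predicted_gpa = intercept + slope * 63  # estimate for a test score of 63
```

Such a prediction is one datum among many to weigh during exploration; as noted above, the counselee may not place much value on the criterion being predicted.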

Other considerations also indicate the need for caution in the use of success estimates as the primary basis for facilitating exploration. Consider, for example, an experience table showing the relationship between the scores on some test and grades in carpentry. Could this table be used appropriately with Sally, Fred's sister? Can Sally be considered to be similar to the group from which the experience table data were obtained? To what degree would the trends shown in the results apply to her? Similarly, to what degree would success predictions in radio-TV repair apply to Fred (similarity score=3)? Can Fred's test scores legitimately be used to predict lab grades in cosmetology? Or, in another context, is one justified in comparing a high school senior's college freshman GPA predictions in engineering, humanities, education, physical science, or business? These questions, recently discussed by Rulon, Tiedeman, Tatsuoka, and Langmuir (1967), need further investigation. A "reasonable" degree of similarity between counselees and the validation sample might well be an appropriate prerequisite for the use of success estimates in counseling.

A second difficulty with success predictions results from the well-known "criterion problem." Obtaining a suitable criterion of success in education and training is difficult enough. When one moves into the world of work, the definition and measurement of success become infinitely more complex (Thorndike, 1963; Thorndike and Hagen, 1969). Members of various occupations or occupational clusters can be readily identified, however. (Gross selective standards for determining criterion group eligibility could also be applied.) This is all that is needed to permit use of discriminant analysis and centour score procedures. Thus, data-information conversion would be possible.

Implications

All of this is likely to be of little comfort to the conscientious counselor or personnel worker who has neither the time, training, nor inclination to become involved in data-related duties as opposed to people-related responsibilities. Few test users would argue about the need to strengthen the bridges between test scores and their meaning for the counselee; but how is the job to be done?

Two of the major stumbling blocks to conducting validity studies are data collection and analysis, fields in which great strides have been made in the last 10 years through the use of computers. In addition to providing help with record-keeping functions, computers have made time-consuming and/or highly sophisticated data analyses economically and psychologically feasible. Approaches to data-information conversion that have been available for some time are now possible on a large scale. This is nowhere better illustrated than in the work of Project TALENT staff members, in particular, Cooley and Lohnes (1968). However, practical applications of Project TALENT findings are limited because of the multitude of measures involved and their unavailability to practitioners. Unless the test equating studies proposed by Cooley and Lohnes (1968) eventually are undertaken, counseling use of Project TALENT data will be restricted to special programs such as Project PLAN (Flanagan, 1969).

The computer-based system illustrated by the Cooley-Lohnes studies is generalizable to other settings, however. Development of such systems, either by educational institutions or private enterprise, and provision for wide access to them, is essential to any major improvement in test interpretation procedures. If systems for data-information conversion were available at the local level, the need for the do-it-yourself validation research described by Dyer could be met with little sacrifice of counselor time and mental equanimity. Much of the work required in data preparation could be completed by clerical help or one of the several types of guidance technicians proposed by Hoyt (1970). The counselor's only tasks would be to ask important questions of his data and to help his counselees use the resulting information. While data-information conversion systems will never replace professional knowledge, judgment, and experience, they can go a long way toward moving test interpretation beyond the era of squint and tell.