Reviewing the State of Counseling Research

Career counseling outcome research has generally used measures that were developed from longitudinal or large-scale theoretical studies of the general population (e.g., career maturity [Super], person-environment congruence [Holland], or information seeking [Krumboltz]) because they had established records of reliability and validity. Nonetheless, these measures have only tangential relevance to a client faced with a career decision. R. A. Myers (1986, 1989) suggested suspending studies that use information seeking as an outcome until the relevance of the measure could be improved. Another common outcome measure, career maturity, while useful to developmental theorists, has very little utility for clients under pressure to make a career decision. Treatment satisfaction, a seeming improvement, actually has very little relation to the client's ability to make improved career decisions. In cases in which a relevant outcome measure has been employed, it has frequently been homemade, without demonstrated reliability or validity (Spokane & Oliver, 1983). The use of multiple outcome measures (Oliver, 1979) is an improvement only if their relevance and psychometric quality are firmly established.

In a candid, tongue-in-cheek article on the problems in designing relevant outcome measures entitled "Needed: Instruments as Good as Our Eyes," Brickell (1976) chronicled a four-year attempt to verify the positive outcomes that a team of skilled observers had found for a certain career intervention. Brickell began by writing multiple-choice items, which were administered to six thousand students; no difference between treated and untreated students was found. The observers were sent out to the field sites again, however, and again found beneficial outcomes. Finally, the evaluators went back to the classes to observe the students' outcomes directly rather than employing teacher ratings of student progress. If an observer witnessed any learning about careers in the classroom, an item was written to reflect what was seen, sometimes right on the spot. The evaluators went from school to school, writing more than one thousand usable items while discarding hundreds of others. The items were compiled into grade-level tests that were administered to twelve thousand students.

The result was a set of sensible, significant differences in program effectiveness that took nearly five years to find. Brickell's final assessment was a goal-free, theory-free field test that had maximum relevance to actual student learning but minimum relation to program goals and career development theory. Traditional program evaluation methods had not worked because the theories were not relevant to the learning that was actually taking place, nor were the objectives written to reflect the actual outcomes. Brickell's refreshing account of this case is must reading for anyone seeking to implement and evaluate a career intervention.



Relevance of Interventions

Career intervention outcome studies have used such distorted caricatures or analogues of actual career counseling that it is difficult to draw conclusions from their findings. Twenty-minute test interpretations or one-shot workshops on cognitive decision making conducted out of the counseling context tell us very little about how to improve the quality of client career decisions. Fortunately, actual subjects are now more frequently used in such studies, and natural career interventions are more common (Phillips, Cairo, et al., 1988). While it is true that analogue research studies may illuminate the complex processes involved in counseling through laboratory control, studies evaluating (rather than researching) actual interventions have been far too few. Naturalistic studies are difficult and expensive to conduct, but essential to progress in the field of career intervention.

Toward Meaningful Career Intervention Outcomes

Are different outcome criteria appropriate for different types of career interventions and client presenting problems? Certainly, counseling with a high school graduate in search of a first job will be evaluated differently from counseling with a mature adult with a high level of perceived incongruence. Does this difference mean, then, that no common outcome standard can be applied to all interventions? Reviews of the career literature often presume that interventions can be roughly compared using a common metric, not only within studies but across them as well. Career intervention outcome studies, however, reveal that evaluations of developmentally oriented interventions rely heavily on outcome measures of career maturity and decisional status, whereas traditional counseling studies most often use information seeking or appropriateness of choice as outcome measures (Watts & Kidd, 1978). In fact, even a meta-analytic review strategy that converts various outcome measures to a common metric may, when comparing treatments across studies, face the problem that different interventions (e.g., class versus individual) may employ systematically different outcome measures. If, for example, studies of developmentally oriented interventions employ career maturity measures more often than individual counseling interventions do, reviews of the entire set of developmental interventions would generally yield more favorable outcomes than a series of studies that used an unobtrusive measure (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981) of appropriateness or realism, a more difficult outcome to affect.
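The "common metric" used by meta-analytic reviews of this kind is typically a standardized mean difference. As a minimal sketch (the numbers below are hypothetical, not drawn from any study cited here), Cohen's d divides the treated-versus-control mean difference by the pooled standard deviation, and Hedges' g applies a small-sample correction:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def hedges_g(d, n_t, n_c):
    """Small-sample bias correction applied to d (Hedges' g)."""
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))

# Hypothetical example: treated clients average 55 on some outcome scale,
# controls average 50, both groups SD = 10 with 30 clients each.
d = cohens_d(mean_t=55.0, mean_c=50.0, sd_t=10.0, sd_c=10.0, n_t=30, n_c=30)
g = hedges_g(d, n_t=30, n_c=30)
```

Because the division is by the outcome's own standard deviation, d values computed from a career maturity scale and from an unobtrusive realism measure land on the same numeric scale, which is exactly why systematic differences in which measure each intervention type uses can bias cross-study comparisons.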

A single measure for evaluating all interventions would be very cumbersome, although a set of brief scales is certainly possible. A standard outcome battery composed of existing measures with demonstrated validity and reliability, and employed in every study, is desirable but very unlikely. One feasible alternative to multiple, specific outcomes is Goal Attainment Scaling, or GAS (Kiresuk & Sherman, 1968), which can be used to evaluate goals agreed upon by the counselor and the client, or by an agency or program, at both the beginning and end of an intervention. When used as an outcome measure, GAS has no demonstrated reliability or validity, since it must be created anew for each application. GAS has been used in career counseling studies (Hoffman, Spokane, & Magoon, 1981) to create a self-guided outcome sheet, but a review of studies using GAS (Cytrynbaum, Ginath, Birdwell, & Brandt, 1979) showed only modest inter-rater reliability and very little construct or content validity. Furthermore, proper use of GAS appears to require intensive negotiation of the goals and their perceived outcomes between the client and the counselor. Thus the technique is far from a perfect answer to the outcome problem in career interventions.
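For readers unfamiliar with the mechanics, Kiresuk and Sherman's procedure rates each negotiated goal on a five-point attainment scale (-2 = much less than expected, 0 = expected level, +2 = much better than expected) and combines the weighted ratings into a summary T-score. A minimal sketch follows; the scale anchors and the conventional inter-goal correlation of 0.3 come from the general GAS literature, not from this article:

```python
import math

def gas_t_score(scores, weights, rho=0.3):
    """Goal Attainment Scaling summary T-score (Kiresuk & Sherman, 1968).

    scores:  attainment ratings on the -2..+2 scale, one per goal.
    weights: relative importance weights negotiated for each goal.
    rho:     assumed common inter-goal correlation (0.3 by convention).
    """
    numerator = 10 * sum(w * x for w, x in zip(weights, scores))
    denominator = math.sqrt((1 - rho) * sum(w**2 for w in weights)
                            + rho * sum(weights)**2)
    return 50 + numerator / denominator

# A client who attains every goal exactly at the expected level scores 50.
baseline = gas_t_score(scores=[0, 0, 0], weights=[1, 1, 1])

# Exceeding expectations on all three equally weighted goals pushes the
# T-score above 50.
improved = gas_t_score(scores=[1, 1, 1], weights=[1, 1, 1])
```

Note that the formula standardizes whatever goals were negotiated, which illustrates the point above: the score is comparable in form across clients, but its meaning depends entirely on locally written goal content, so reliability and validity cannot carry over from one application to the next.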

Several authors (Fretz, 1981; R. A. Myers, 1986; Oliver, 1979) have suggested using multiple measures that cut across a common set of outcome domains. The result of Oliver's suggestion, however, was often the unsystematic selection of several measures without much thought about why they were chosen. Her four classes of outcomes build on Myers's (1981) earlier classifications and have recently been revised (Oliver & Spokane, 1988).

Choosing a representative from each of these classes is sensible, although there are no guidelines governing the selection process. Many specific outcome measures can be classified using Oliver's system, but no single set of scales or measure has been compiled to measure the most important outcomes in career intervention.

Although the items on these scales have been derived from clinical experience, they are a blend of theory-based and goal-based items. They are not designed directly from actual client behaviors, as Brickell's (1976) items were, but they are certainly closer to what Brickell suggests than are traditional outcome measures. The set of scales described above might be revised to include more items written from direct observation of client behavior on video- or audiotape, and should reflect what clients actually do in the face of a career decision, rather than what they would like to do, report having done, or wish they could do (Pervin, 1987). A blend of client self-report, which may be more reliable than previously believed (R. A. Myers, 1986), and counselor observation, which is the method employed in these scales, may be the most effective evaluative strategy. Of course, no evidence of their validity or reliability is available, and they should be viewed more as an evaluative aid than as a psychometrically sound instrument.

What to Do When There Are No Significant Results

As indicated earlier (Spokane & Oliver, 1983), most career intervention outcome studies have been single-shot affairs, done for doctoral dissertations or master's theses with no follow-up, rather than programmatic series of studies by experienced researchers. The seasoned researcher who evaluates career interventions across a series of studies is a rare professional. When non-significant results are found in controlled outcome studies, it is usually presumed that the intervention is at fault. But as Brickell (1976) appropriately noted, this is only one of four possible conclusions the evaluator might draw:
  1. No beneficial career outcomes occurred, and the outcome measures correctly identified the non-significant outcomes.

  2. There were beneficial outcomes, but the measures employed were irrelevant to the outcomes, and thus did not detect them.

  3. No beneficial outcomes were detected, but the fault is in the intervention, which was either too brief, too artificial, or too general to achieve the desired result.

  4. No effect was detected, but the methodology was at fault (e.g., improper control group, low power, poorly trained counselors, or high dropout rates).
More than the usual single effort is needed to arrive at one of the final three explanations for a negative (no-benefit) outcome. Because they require follow-up studies, which are rare, penetrating outcome studies have been very slow to accumulate (Spokane & Oliver, 1983). Until a body of literature on improving unsuccessful outcomes exists, when an evaluation finds a career intervention to be unsuccessful, any of the following strategies can be employed to increase the effectiveness of the counselor or the program. These corrective steps require some courage on the counselor's part, as well as a scientific rather than a defensive attitude:
  1. Intensify the intervention: This may mean a longer intervention, a different mix of strategies and techniques, or a shift to individual counseling; some evidence supports delivering a more intensive intervention after a brief intervention has failed (Bernard & Rayman, 1982).

  2. Review the goals, objectives, and outcome measures: Make sure that the intervention is appropriate to the client's problem. If no goals or objectives are available for a group or workshop, write them yourself, and see that they really capture the essence of the intervention.

  3. Review the client's needs and problems: An assessment of this kind might include using focus groups of clients or participants who are questioned about the problems they are facing and the appropriateness of the intervention process for those problems.

  4. Institute treatment plans: Career treatment plans should contain treatment goals, some discussion of assessment devices, and recommendations for intervention strategies and techniques.

  5. Engage a peer consultant: Present several cases or groups to this consultant, who reviews tapes and sessions, and evaluates data in an effort to improve treatment potency.

  6. Establish an advisory board: This board should represent the client population from which your cases are drawn: if a school or college, the board should be composed of students and faculty from all grades; if an industrial or governmental organization, employees and management should be represented; if a practice setting, a selection of clients should be represented.

  7. Contact dropouts: Complete this step especially if your dropout rate is over 30 percent. Dropouts will frequently be quite candid about why they left, and this information can be valuable in revising intervention strategies.




Copyright © 2024 EmploymentCrossing - All rights reserved.