
Professional Sequence Program

1.   Please describe your program's assessment process and what standards you are measuring in relation to the NCATE and State standards of knowledge (content, pedagogy and professional), skills (professional and pedagogical) and dispositions. Is the system course based, end of program based, or other? Be sure to reference how the faculty in your program were involved in developing the assessment process. In addition, describe how the assessment of standards relates to the unit's and program's conceptual framework.

Program Interpretations and Conclusions:

The Professional Education Program addresses Washington State Standard V: Knowledge and Skills. The Program currently uses a section-by-section, course-based assessment system. That is, assessment is done in every class section of every course, with each individual instructor evaluating his/her own section of a course against a common rubric. This has proven to be problematic for the following reasons.

1. The program faculty would like to have greater confidence in the consistency of scoring across all evaluators of assessment artifacts. It is difficult to compare or coalesce ratings of individual instructors. This is due largely to personal-bias errors (i.e., generosity error, severity error, central-tendency error, and halo effect), for which it is impossible to control if instructors evaluate the results of instruction in their own section. For example, one instructor may be a generous scorer, while another is a severe scorer. In such a case, no meaning can be attached to a particular score. The "Inter-rater Summary" provided by LiveText does provide information regarding severity error, but a more robust measure of true inter-rater reliability would be useful.
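One chance-corrected agreement statistic of the kind the faculty have in mind is Cohen's kappa, which could be computed on a sample of artifacts double-scored by two raters. The sketch below is illustrative only; the rubric levels and scores are invented, not drawn from LiveText:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same artifacts."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater scored independently at their own base rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores for ten artifacts (0 = Unacceptable, 1 = Acceptable, 2 = Target)
scores_a = [2, 1, 2, 0, 1, 2, 1, 1, 2, 0]
scores_b = [2, 1, 1, 0, 1, 2, 2, 1, 2, 1]
print(round(cohen_kappa(scores_a, scores_b), 2))  # 0.52, moderate agreement
```

Unlike a raw percent-agreement figure, kappa discounts the agreement two raters would reach by chance alone, so a generous and a severe scorer cannot inflate the statistic simply by clustering at different ends of the scale.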

2. Further, although each course has its own rubric which is common to all sections of that course, there has been no consistent or common interpretation of these rubrics across all sections of a given course. So, for one instructor “appropriately cites external material” may mean that a minimum of three external sources are used in a paper. Another instructor may believe that this means that whatever external sources are cited use correct APA format.

3. Rubrics were generally developed by a single person who happened to be teaching a given course at some time in the past ten years or so. At times in the intervening period, some of these rubrics were modified, again generally by a single person. This method of development has led to resistance to using the rubrics on the part of some faculty who were not involved in their development.

As the faculty have become more aware of these issues, they have seen the need to change this assessment process. This recognition has been perhaps the most significant result of the analysis of the data available in the LiveText system. In an effort to create at least some level of inter-rater reliability, the Program Coordinator led two training sessions, one for EDCS 444 and one for EDF 301. Each of these sessions was attended by faculty teaching the respective course.

The examination of summary rater scores from the LiveText reports and the discussions occurring in the training sessions made it clear that a new system was required. As a consequence, in March, 2008 the Program faculty agreed to move to a true course-based system, whereby all sections of a given course would be assessed as a unit. This is being implemented Spring Quarter, 2008. In this system, student artifacts will be randomly selected from all sections (typically 4 or 5) of each course every quarter. A scoring team of three faculty members will be trained to use the rubric, and then the team will score the sample of artifacts. It is hoped that such a system will result in acceptable levels of reliability and validity.
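The sampling step of the new system could be implemented along these lines; the section ids, artifact counts, and sample size below are hypothetical:

```python
import random

def sample_artifacts(sections, k, seed=None):
    """Randomly select k artifacts from the pooled sections of a course.

    `sections` maps a section id to its list of artifact ids; pooling the
    sections first gives every artifact an equal chance of selection.
    """
    rng = random.Random(seed)
    pool = [(sec, art) for sec, arts in sections.items() for art in arts]
    return rng.sample(pool, k)

# Hypothetical course with four sections of 25 artifacts each; the ids,
# counts, and sample size are invented for illustration.
sections = {f"EDF301-0{i}": [f"student-{i}-{j}" for j in range(25)] for i in range(1, 5)}
sample = sample_artifacts(sections, k=12, seed=2008)
print(len(sample))  # 12
```

Recording the seed with each quarter's sample would make the selection auditable, so the scoring team (or an accreditor) can reproduce exactly which artifacts were drawn.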

In addition, in January, 2008, the program began a complete review of the curriculum to incorporate the new Standard V requirements from the State. In this review, the faculty will take a critical look at the assessment component in light of the revised goals, learner outcomes, evidentiary requirements by the state, and the interconnections between the Professional Education Program and the certification areas. This program should be implemented in September, 2009.

The specific standards addressed in the courses in the Professional Education Program are given in Tables Three and Four in section (2) below.

2. Below is an analysis of the frequency with which your program cites CTL, WA State Standards/Competencies, and/or national standards within your LiveText artifacts, rubrics, and reports. Please examine the charts and write your program's interpretations and conclusions based on the information provided. (e.g., Are the standards dispersed appropriately in your program? Are all the standards represented as you wish them to be? After reviewing this analysis are there changes your program would recommend making to the way you cite standards or assess your candidates using LiveText?)

Program Interpretations and Conclusions:

The data in the above tables lead to the following conclusions.

1. In some instances, the data do not accurately reflect the coverage of standards by the rubric.

2. No data are presented in the Washington State Standards table for several courses (EDF 301, EDF 301A, EDF 302, EDCS 311, EDCS 444, PSYCH 314, and PSYCH 315).

3. For some courses, the rubric appears to provide an insufficient representation of the standards.

The first two of these issues are problems with data collection and presentation, which should be easily corrected. The third issue is more difficult to address. Faculty have begun to question the appropriateness of some of the current assessment instruments, which are almost solely focused on essay-type responses to very general prompts. As the revision of the Professional Education Program moves forward, the data from the tables above will be very useful in helping the faculty redesign more appropriate assessment instruments which utilize a wide range of types of assessment.

3. Below you will find one sample of your Live Text Report that identifies an aggregation of candidate learning outcome data. Please examine all of your reports in the LiveText exhibit area and discuss the accuracy, consistency, and fairness of the data, as well as what improvements could be made in the program assessment rubrics, courses, artifacts, or reporting. Include your interpretations relative to how well your candidates are meeting standards. After examining all of your report data, list any changes your program is considering.

Program Interpretations and Conclusions:

Some of the response to this item has been addressed in items (1) and (2) above.

The “Inter-Rater Summary” provided at the bottom of the LiveText report for most classes is not a true inter-rater reliability measure. The “Summary” points out, for several classes, that personal bias is making the data difficult to interpret and less reliable than the faculty would like. Low reliability implies low validity. It is hoped that the new procedure mentioned in (1) above will help resolve some of the reliability and validity issues.

In reviewing the data, faculty were extremely frustrated in their attempts to extract useful information. There are, however, two points on which virtually all faculty agreed.

1. There needs to be consistency in the methodology used for data collection when doing assessment. This does not mean that all assessment instruments should be the same. What it does mean is that the descriptors for evaluation need to be the same and carry the same meaning for all evaluators.

Course rubrics are currently using a wide variety of descriptors. In EDCS 316, for example, the descriptors are Target, Acceptable, Unacceptable, with point scores of 2, 1, 0, respectively. In EDCS 431, they are Competent, Pre-Competent, Unacceptable, with point scores of 3, 2, 1. And in EDCS 444, they are Exemplary, Proficient, Partially Proficient, and Incomplete, with point scores of 10, 8, 6, and 4.

Faculty are unsure what “Exemplary” or “Pre-Competent” or the other terms mean and so different faculty use their own distinct interpretations in their evaluations. As noted above, the Program review will enable faculty to reach agreement on consistent terminology, enhancing reliability and usefulness of the data.

2. The other fundamental area of agreement among the faculty is that our graduates are very weak in their writing skills. This is a difficult issue to address, and universities around the country are struggling with the same problem. As a result of the Program review, the faculty hope to develop more effective strategies for identifying and remediating writing deficiencies. In some cases, this may result in students not completing the program if they are unable to meet the requisite standards.

4. Below you will find a chart of the CTL Standards aggregated by course. Please examine the data results and discuss any improvements if any you might consider for your program. Using these data, please reflect upon your candidates' success in meeting standards. Compare these data to the data provided in the WEST B and E charts that follow. Is there consistency in the rates of success? What do these data tell you?

Program Interpretations and Conclusions:

The lack of consistent descriptors mentioned in item (3) above has made presentation of composite data problematic. The table has required course rubrics with three, four, and five descriptors to be either collapsed or expanded to the four descriptors given. The chart then collapses these four descriptors into three. When the Program develops consistent descriptors across all courses, this data presentation will be more meaningful.
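Once consistent descriptors exist, collapsing each course's scale onto a common one becomes a simple lookup. The crosswalk below is a hypothetical illustration using the descriptor names cited above; the actual mapping used to build the composite table is not reproduced in this report:

```python
# Hypothetical crosswalk from each course's rubric descriptors onto a
# common three-level scale. The mappings here are assumptions for
# illustration, not the Program's actual crosswalk.
COMMON_LEVELS = ("Unacceptable", "Acceptable", "Target")

CROSSWALK = {
    "EDCS 316": {"Unacceptable": "Unacceptable", "Acceptable": "Acceptable",
                 "Target": "Target"},
    "EDCS 431": {"Unacceptable": "Unacceptable", "Pre-Competent": "Acceptable",
                 "Competent": "Target"},
    "EDCS 444": {"Incomplete": "Unacceptable", "Partially Proficient": "Unacceptable",
                 "Proficient": "Acceptable", "Exemplary": "Target"},
}

def to_common(course, descriptor):
    """Map a course-specific descriptor onto the common scale."""
    level = CROSSWALK[course][descriptor]
    assert level in COMMON_LEVELS
    return level

print(to_common("EDCS 444", "Proficient"))  # Acceptable
```

The awkward part, as the report notes, is that any such mapping forces a judgment call wherever a four- or five-level scale must be split or merged to fit the common levels, which is why agreeing on shared descriptors up front is preferable to collapsing after the fact.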

Despite these problems, the data raise some questions.

1. The percentage of students who are unsuccessful is quite high in some classes, notably EDF 301, EDF 302, and EDCS 424. Is this a function of the entry qualifications of the students, the material in the course, the nature of the artifact, or the reliability issues mentioned repeatedly above?

2. The West-B data are interesting, but because the exam functions as a screening tool for the Program, generally only students who pass the West-B would be assessed in Program courses. In practice, however, students are currently allowed to take some Program courses before they have passed the West-B. Should this practice be stopped, requiring all students to demonstrate competence before they enter Professional Education Program courses?

3. Because the Professional Education Program is taken by all certification area students, the overall pass rate is of most concern to Program faculty. It would be expected that this rate should be in the high 90s. How can those programs which consistently show a pass rate of less than 95% be examined in order to determine effective methods for securing higher pass rates?


5. Please find below the West B data for the teacher residency program. Please use these data, the LiveText data, and the West E data found below to predict candidate success in your program. Given these summaries, are there changes to your program or to the unit your program recommends the CTL consider?

  • Between 2005 and 2007, 49% of candidates passed all three sections of the exam on their first attempt; 84% passed the reading portion, 82% the math portion, and 65% the writing portion on their first attempt.
  • The mean percentage of candidates not passing is 11% for reading, 12% for math, and 25% for writing.
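A quick arithmetic check on the first-attempt rates above: if the three sections were passed independently, the all-three pass rate would simply be the product of the individual rates.

```python
# First-attempt pass rates reported above for each West-B section.
reading, math_rate, writing = 0.84, 0.82, 0.65
# Under independence, the all-three pass rate is the product of the three.
all_three_if_independent = reading * math_rate * writing
print(f"{all_three_if_independent:.0%}")  # 45%
```

The reported 49% is a few points above this 45% independence estimate, which would suggest, modestly, that failures cluster in the same candidates rather than being spread evenly across the pool.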

CTL WEST B Data Summary 2002 to Present


Program Interpretations and Conclusions:

The fundamental issue here is the purpose of the West-B. If it is to be a screening instrument for admission into the teacher training programs, then the question is how effectively the West-B predicts success in teacher training. The difficulty in any study addressing issues such as this is that the sample is biased—only students who pass the exam are given the training. Hence, it is not possible to compare the success of those who did not pass the exam with that of those who did.

The fact that only 49% of students passed all parts of the exam on their first attempt, and that most of the students who failed eventually do pass on retaking the exam, suggests that roughly half of our students are marginally prepared in terms of the West-B criteria. The failure rate in the writing component is the most troubling, as lack of writing ability has been identified by the faculty as a serious deficiency in many of our students.

Failure to pass the West-B is a university problem, not a CTL problem, because all of the students who take the exam have been admitted to the university and the vast majority have completed several quarters at the university. Nonetheless, CTL should work with the General Education Committee and other offices in the university to enhance basic skills education preparatory to and at the university.

6. The WEST E is administered by ETS as a state requirement for program exit, measuring content knowledge by endorsement area. ETS has not sent the final corrected data summary at the time of this report; however, the data we keep on a continuously updated basis are described in the following graph, which compares 2005-2006 and 2006-2007 data by endorsement area. We suspect the 2006-2007 data will change after all scores are received from ETS. According to this set of data, 2005-06 pass rates were 90%. Remember that all candidates must pass the test to be certified, so they may take it multiple times. We are working on validating a different process that will show how many times candidates take the test and when. The 2006-07 data indicate pass rates of 87%. If your program is one of those with a pass rate below 80%, what program recommendations are you considering that will positively affect the rate of passing the WEST-E for 2007-2009?

Program Interpretations and Conclusions:

The Professional Education Program per se does not have West-E pass rates associated with its students. See related comments in item (4) above.


Please find below the EBI teacher and principal data for all program completers. Discuss and report in the space provided what your program recommends the unit should accomplish to improve overall satisfaction, or what your program is doing to improve the trend.

  • This survey is administered through OSPI and is contracted through Educational Benchmarking Inc. These data are collected for all new teachers in public schools by surveying new teachers and their principals.
  • Average response over the seven years: n = 105
  • The graph represents a seven year average satisfaction trend by category
  • Highest satisfaction ratings are in the areas of:
    • Student learning
    • Instructional strategies
    • Management, control and environment
  • Lowest satisfaction ratings are in the areas of:
    • Reading skills
  • Five-year principal responses followed patterns similar to the teachers' (n = 41)


Program Interpretations and Conclusions:

It would be helpful to have a full explanation of the categories. What does “Student Learning” mean, for example? Does “Reading Skills” mean the teacher candidate has reading skills, or that he/she feels competent to teach reading skills?

The data come from a relatively small number of our graduates over a long time period—a period in which there has been great change in the personnel and structure of some programs. In addition, the differences signaled in the item description may not be significant. For these reasons, these data should not be given much weight.


Please find below first year and third year teacher survey results summarized by graphing mean responses for each question.

  • This survey is administered by CTL and data trend summary represents 2004-07
  • The average response rate for 2004-2007 is 15%
  • First-year teachers: N = 375; third-year teachers: n = 200
  • The graph and subsequent ANOVA demonstrate a significantly higher average satisfaction rating from first-year teachers when compared to third-year teachers (p < .05)
  • Highest satisfaction ratings are in the areas of:
    • Subject matter knowledge
    • Application of EALR's
  • Lowest satisfaction ratings are in the areas of:
    • Classroom management
    • Involving and collaborating with parents
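The significance comparison summarized above can be approximated with a pooled two-sample t test; for two groups, the one-way ANOVA F statistic is just the square of the t statistic. In the sketch below the group sizes come from the report, but the 1-5 ratings themselves are simulated, since the raw survey responses are not reproduced here:

```python
import random
import statistics

def pooled_t(sample1, sample2):
    """Two-sample t statistic with pooled variance; for two groups the
    one-way ANOVA F statistic is simply this value squared."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = statistics.fmean(sample1), statistics.fmean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# Group sizes (375 first-year, 200 third-year) come from the report; the
# satisfaction ratings are simulated purely for illustration.
random.seed(0)
first_year = [random.choice([3, 4, 4, 5]) for _ in range(375)]
third_year = [random.choice([2, 3, 4, 4]) for _ in range(200)]

t = pooled_t(first_year, third_year)
# With df = 573, |t| > 1.96 corresponds roughly to p < .05.
print(f"t = {t:.2f}, significant at .05: {abs(t) > 1.96}")
```

With samples this large, even a small mean difference clears the .05 threshold, so the practical size of the first-year versus third-year gap matters as much as its statistical significance.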

Program Interpretations and Conclusions:

In contrast to the data in the previous item, these data are quite useful and are validated by numerous other sources. As part of the Professional Education Program revision currently taking place, information on Program strengths and weaknesses was solicited from the following stakeholders.

Professional Education Program faculty
Content area faculty
Department of Education program faculty
Student teaching field faculty
CWU Center directors and faculty
Professional Educator Advisory Board (PEAB)
Washington Association of School Administrators (WASA)

Every one of these groups has identified the same two areas of weakness in new teachers: Classroom Management and Assessment. These are also the two areas of greatest perceived weakness indicated in the survey data presented here.

The Program revision is in the process of addressing these two issues. This will take time, as there are many issues to resolve, such as where across the Program and content area training these items will be addressed, what format will most effectively teach these concepts, and how teacher candidates will be able to practice what they have learned prior to entering the classroom as teachers. Nevertheless, all parties agree on the common goal of better training our students in these critical areas.


Please find below a comparative analysis of candidate dispositions from beginning candidates to finishing candidates. Please comment on the changes you observe in your candidates over time and describe how and why you think this occurs. What does your program specifically do to engage candidates in developing professional teacher dispositions?

  • This inventory is administered by the CTL at admissions (N=645), and again at the end of student teaching (N= 195). Some of the 645 candidates have not yet student taught, which is why the n's are different.
  • There is a significant difference in 12 of 34 items (p<.05) between beginning candidates and candidates completing student teaching
  • Change is in the preferred direction from agree to strongly agree
  • This means that somewhere between entry and exit, teacher program candidates are developing stronger professional beliefs and attitudes that reflect the underlying values and commitments of the unit's conceptual framework. Future work will include data that tell us where this change is occurring and whether there are differences caused by demographic variables. If you want to read more about this disposition instrument, the validation study is published on the OREA web site under research.

Program Interpretations and Conclusions:

In general, there is substantial natural maturation that takes place in our students as they progress through the program. In addition, it would be expected that students who find they are not committed to teaching would tend to leave the program for other majors. As a result of both of these factors, it would be natural to see a change in dispositions.

This instrument and the results should be studied further by Program faculty to determine what information it can provide to enhance the program.



Final Student Teaching Evaluation Report on LiveText

  • The data report is too large to be placed in this document. Please access the data by going to this link on our assessment system web site
  • The report reveals the final assessment of elements found in state standards IV and V
  • Candidates are generally performing at a high level, although there are some candidates, as depicted by the colors green and red, who are not performing to standard.
  • Examination of those elements indicates some agreement with results provided in the 1st and 3rd year teacher survey.

Please look at these data carefully and discuss with your program faculty some ways the teacher residency program can begin to address the few but common deficits occurring in candidate knowledge and skills relative to the State standard elements. If you need to refer to state standards please refer to this link in the assessment system website:


Program Interpretations and Conclusions:

The student teaching report may reinforce some of the needs identified above with respect to classroom management and assessment.

What needs to be done is to integrate the student teaching evaluation into the full Professional Education Program in a more effective way. Historically, there has been little contact between Professional Education Program faculty and field supervision faculty. It is imperative that both groups perceive Program goals in the same way. This type of communication has been lacking in the past, and is a key part of the Program revision currently underway.


Please examine these data and report any discussions your program has regarding the reported results.

  • This survey is conducted by Career Services and reported to OSPI. The report, however, has been reanalyzed and the summary reflects the new analysis, which covers 2002-2006.
  • Average response rate = 57%
  • Of that 57%, the average percent of graduates who get jobs in state is 94%
  • The average percent of graduates still seeking a position is 27%
  • Two percent of the 57% have decided not to teach
  • For 2005-2006, 35% of the program graduates responded to questions regarding ethnicity and gender. Of the 35% who responded, 90% were Caucasian, 5% Hispanic, 3% African-American, and 1.8% Asian.

Program Interpretations and Conclusions:

Some of the data in this table are suspect. For example, in the 2002/03 survey no graduates were substitute teaching. The very next year, however, nearly 30% of graduates were doing so.

The data clearly indicate that the vast majority of our graduates are teaching in the state of Washington. This strengthens the argument that we should be focusing our attention on the types of students, conditions, and laws present in this state.






© Central Washington University   |   All Rights Reserved