Assignment #5: SRC Survey

Overall, I was pleased with the first version of the survey, with one major issue:  Teenagers are happy to check boxes but reluctant to respond to open-ended questions in any detail.  I knew this was a concern when I was creating the survey, and my four respondents confirmed my fear.  Although three of the four surveyees responded in these areas, and their answers were valuable, all were quite brief.  A larger sample might be required to verify that the open-ended questions are indeed wasted space.

This is a difficult issue to resolve because open-ended questions are, at some point, necessary.  These questions may instead have to be incorporated in the ‘interview’ or ‘focus-group’ sections of the evaluation.

Link to the PDF version of the original SRC survey:  Survey1

Link to the refined survey.  Small changes were made in wording, an ‘N/A’ column was added to the last section, and two overlooked SRC planned events were included:  Survey2 (please note, conversion to PDF altered formatting)

Assignment #4: Logic Model

SRC Flowchart

Inputs

The inputs for the SRC are minimal. As there is little money for the program, the financial aspect is quite basic.

Because the SRC is a group-run program with staff supervision, the dominant input is personnel time, primarily that of the students.

Outputs

The outputs are defined as those activities created by the SRC.  Examples include the food sales, assembly leadership, fundraising for charities, theme days, etc.  Outputs also include moments of leadership development and the impact that the SRC has on the various communities involved, including the student body, the staff, the wider community and the school board.

Outcomes

The most important aspects of the outcomes are those that are long-term, including leadership skill development and future participation in councils by the SRC members.  Long-term outcomes also include the development of school culture and establishment of traditions within the school.

Obviously, the short- and medium-term outcomes are essential in developing the long-range positives, and these are defined by the programs council develops.  If these programs are successful, both in the way they are developed and in their application, then the long-term outcomes will be achieved.  If students take the time to effectively plan an assembly, they are developing planning and leadership skills (long-term outcomes) while providing a short-term outcome (the assembly itself) and perhaps building a culture of focused, engaged assemblies within the school (medium- and long-range outcomes).  As we see here, the short-, medium- and long-range outcomes are intertwined: the short- and medium-range outcomes must be achieved in order to reach the higher-level long-range targets.

Assignment #3: Evaluation Assessment

Program Under Evaluation

Hague High School Students’ Representative Council (HHSSRC).

Note that this is an ongoing program with significant overlap from year to year.  There is no ‘before’ and no ‘end date’, making signposts of success or achievement difficult to identify.

Purpose

To help participants, staff and the student body:

a) understand the strengths of the program

b) improve the program

Users will include the SRC supervisors and council members: supervisors will use the evaluation to oversee identified goals over an extended period, while SRC members will use it to view the strengths and weaknesses of the program and better achieve identified targets.

Questions the Evaluation Seeks to Answer

  • What impact does the program have on participants, the school and the community?
  • Is the student body satisfied with what they gain from the program? 
  • What are the strengths and weaknesses of the program?
  • What activities contribute most? Least?
  • How well does the program respond to initiating need?
  • Does the program meet the objectives outlined in its constitution?
  • What does the program consist of – activities, events?
  • Does everyone benefit from the program?  Who most – who is left out?
  • What in the socio-economic-political environment inhibits or contributes to the program’s success?
  • What are the characteristics of the target population (student body)?
  • What changes do people see as possible or important?

(Many questions taken from “Planning an Evaluation”, p. 5)

Information Required 

Wish to Know / Indicators

  • Are activities successfully organised and coordinated?  Indicators: SRC plans activities participated in by the student body; the student body has a positive reaction to activities, either through observed participation or survey/interview responses.
  • Are SRC members developing leadership skills?  Indicators: growth between the initial SRC ‘retreat’ and current participation in council and activities.
  • Is the promotion of school spirit successful?  Indicators: participation in planned/unplanned activities; positive language use observed in discussion of the school.

Information Sources

  • HHSSRC Constitution
  • Meeting minutes
  • SRC financial statements
  • SRC members (names cannot be published)
  • Vice Principal and official SRC supervisor B.B.
Data Methodology
  • Survey
  • Interview
  • Observation
  • Focus Group
Video will be used to record data where possible.
Surveys will be distributed on paper due to the ‘offline’ nature of this demographic.
Participants representing the ‘student body’ will be chosen by random sampling.  Three students from each grade (3 x 6 = 18) will be chosen at random to participate in interviews and focus groups.
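The per-grade random selection described above can be sketched in a few lines of Python; the roster and class sizes here are hypothetical, assuming six grades (7–12) with three students drawn from each:

```python
import random

# Hypothetical roster: each of the six grades maps to a list of student names.
roster = {
    f"Grade {g}": [f"grade{g}_student{i}" for i in range(30)]
    for g in range(7, 13)
}

# Stratified random sampling: draw 3 students at random from each grade,
# without replacement, so every grade is represented equally.
sample = {grade: random.sample(students, 3) for grade, students in roster.items()}

total = sum(len(chosen) for chosen in sample.values())
print(total)  # 3 students x 6 grades = 18 participants
```

Sampling within each grade (rather than from the whole school at once) guarantees every grade appears in the interviews and focus groups.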
Data Analysis
Data will be interpreted by B.B., VP of Hague High, and me.  Hopefully, the additional viewpoint will offer balance to the analysis.
Evaluation Sharing
To Whom / When, Where and How

  • SRC: at the Tuesday SRC meeting, through a verbal presentation supported with visual (ppt) documentation and analysis.
  • Staff: at the Monday staff meeting, through a verbal presentation supported with visual (ppt) documentation and analysis.
  • Administration: written report submission + 1:1 discussion.
Questions which require answering
  • Define ‘school spirit’.  What is it? What does it look like?
  • Identify ‘statistical techniques’ for data analysis.
  • Research ‘how to analyze narrative data’.
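As one possible starting point for the ‘statistical techniques’ question above, a minimal sketch of summarising closed-question survey data; the Likert-scale responses here are hypothetical:

```python
from statistics import mean, median

# Hypothetical responses to a single survey item on a 1-5 Likert scale
# (1 = strongly disagree, 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# For a small sample, simple descriptive statistics are often sufficient.
print("mean:", mean(responses))      # average rating
print("median:", median(responses))  # less sensitive to extreme answers

# Share of respondents who answered 'agree' (4) or 'strongly agree' (5).
agree_share = sum(1 for r in responses if r >= 4) / len(responses)
print("agree or strongly agree:", agree_share)
```

Reporting both a central tendency and the share of favourable responses gives the SRC a quick, readable summary of each checkbox question.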

Assignment #2: Evaluation Models

Choosing the Appropriate Model:

I first attempted to apply Kirkpatrick’s Four Levels of Evaluation:

     1) Reactions: Measures how participants have reacted to the training.

     2) Learning: Measures what participants have learned from the training.

     3) Behaviour: Measures whether what participants learned is being applied on the job.

     4) Results:  Measures whether the application of training is achieving results.

Although Kirkpatrick’s methodology was developed specifically to evaluate employee training programs, I presumed the same concepts could be applied to this case study.

However, after attempting to apply Kirkpatrick’s model, I found that I did not have the required information to evaluate the ‘learning’ and ‘reactions’.  The case study describes the program in detail, but does not include information about participants’ reactions during or after completion of the program.

I then attempted to apply a specifically health-oriented approach by the Public Health Agency of Canada.  This model seems appropriate but, when compared to Stufflebeam’s CIPP model, awfully detailed for the task.  Again, as with Kirkpatrick, had I been provided with more information and perhaps access to the stakeholders themselves, it would be an excellent model to apply.

So, after considering various models, it is the Stufflebeam CIPP approach which best suits the project.

     What should we do?

     How should we do it?

     Are we doing it as planned?

     Did the program work?

Applying Stufflebeam’s CIPP Model

Context Evaluation: What should we do?

The goal of the program under evaluation is to “promote physical activity among Aboriginal women during their childbearing years”.

In order to meet this goal, a prenatal exercise program was developed for Aboriginal women in Saskatoon.

Accessible by public transportation, the program was free, led by a certified instructor and combined “both aerobic and muscle-toning-childbirth-preparation exercises” during 45-minute sessions.  Activities included “low-impact aerobics, water aerobics, selected exercise machines, line dancing…brisk walking”.

Weekly door prizes were offered, nutritious snacks were shared, and “games, crafts or parties for special occasions were held”.  Resources offering free educational materials related to pregnancy and health were also available.

It seems that in each case the activities were well matched to the program goals.  ‘What needs to be done?’ falls into four main categories:

1) Offer an attractive program that is desirable and easy to get to.

2) Create an environment that is social and builds support networks.

3) Encourage self-education for high-risk Aboriginal women.

4) Develop an effective exercise program.

Input Evaluation: How Should We Do It?

1) Offer an attractive program that is desirable and easy to get to (bus tickets, rent-free pool access, free childcare, free bathing suits, snacks and beverages).

2) Create an environment that is social and builds support networks (games, crafts, parties, post-exercise social area, library, bringing a buddy).

3) Encourage self-education for high-risk Aboriginal women (Registered Nurse co-ordinator, Physiotherapist, resource table, library of books and videos for loan).

4) Develop an effective exercise program (research-based, certified instructor, addition of more water-based activities).

Process Evaluation: Are We Doing it as Planned? And if Not, Why Not?

From the response to the previous question, it seems that the National Health Research and Development Program has done an excellent job.  Particularly impressive is their initial recruitment of 7% of the target population.

All activities seem ‘value-added’ and their link to the program goal is easily identifiable.  The plan seems to go above and beyond the initial requirements.

Product Evaluation:  Did it Work?

The program seems to be successful, particularly as participants began to ‘bring a buddy’ and asked for more water aerobics classes.

The breadth of the program is impressive; however, there are many factors in a loaded question like ‘did it work?’.  For example, it might be interesting to see the budget for the program and evaluate its cost-effectiveness.

It would also be interesting to know the value gained by all stakeholders (from participants to the YMCA) regarding partnerships and participation developed beyond the confines of this program.  For example, did participants begin using other YMCA programs?  And did the nutritious snacks supplied at the sessions influence the dietary habits of participants at home?

Reflection

The Stufflebeam model is straightforward and usable.  It is open-ended and, judging from the plethora of information available online, may be adapted to a variety of programs through the creation of specific sub-questions.

The brilliance of this four-step model lies in its simplicity.

Assignment #1: Program Evaluation Critique

Evaluating the Evaluation of the Public Charter Schools Program.

Program:

This evaluation, written in 2004, seeks to gauge the success of the American charter school program, a relatively new model within the complex web of public school solutions to educational reform in the United States.

The report asks seven key questions, seeking a) to establish some norms within the very diverse landscape of charter schools across the country and b) to learn whether charter schools are meeting “state performance standards” (8), specifically by evaluating their performance in relation to traditional public schools.

Model/Process Used:

The report was compiled by an R&D company called SRI International, a group whose specializations range from environment and education to robotics.  Comprehensively written for the U.S. Department of Education, the report is available in the public domain.

This evaluation is a summative assessment leading towards a comparison of results with public school data.  SRI used the following methods of collecting data:

i) Survey data collected from school officials

ii) Data from the federal Schools and Staffing Survey

iii) Data from state departments of education

iv) Site visits to twelve charter schools.

Through this combination of qualitative and quantitative evidence spanning three years, modest conclusions are drawn.

From the initial statement of evaluation goals, the report appears to be ‘goal-oriented/objective based’ – that goal being to continue documentation of the charter school movement.

The fifth chapter (Charter Schools and State Performance Standards), however, seeks to compare the success of charter schools with traditional approaches to public education.  This is a larger and more abstract undertaking.  Rather than simply compiling data, the fifth chapter aims to draw some conclusions about the success of charter schools in relation to traditional public schooling.

Evaluation Strengths:

Order of the Report: Highlights of the findings are included at the beginning in the ‘Executive Summary’ of the report.  Readers do not have to sift through to the end of the process description in order to locate conclusions drawn by the authors.

The report is divided into five sub-questions: the first four seek to quantify some general statistics about charter schools, while the fifth draws conclusions about the success of the model.

Voice of the Author: In addition to the easily chunked sections, the evaluation is written using clear, simple syntax and diction; the report avoids jargon, whether educational or statistical.  This use of language makes the report straightforward and readable, which is crucial for an accessible document within the public domain.

Use of Charts: ‘Exhibits’ used for visually sharing numerical data are effective, particularly for breaking up numbers by state.

Appendices: Charts and information included in the appendices provide data that subsequent researchers can use to build on this report.  The inclusion of data also adds validity to the initial evaluation.

Evaluation Weaknesses:

Use of Charts: While the tables used to chart data are effective, the bizarre blending of bar graphs, a pie chart and a couple of line graphs does not always clarify the information it serves to represent.

Bias: There seems to be a bias towards the assumed success of charter schools, as though the committee was set to compile a report that would validate continued funding of the program.

Even when fault is found, as it is with a lack of ‘authorizers’, there is often a ‘however’ at the end of the section justifying the charter schools’ shortfall in a particular area.

Data Collection: Although it is an overwhelming task to compare schools across the fifty states, the report notes the “uneven data quality from the states that provided information” (14).  This is a weakness across the board, particularly given the diversity of charter schools from funding model to educational goal.  The evaluation’s claim to draw conclusions valid throughout the United States despite visiting only twelve of the nearly 1,500 schools adds to questions about the comprehensiveness of its data collection.

Under the performance standards chapter, the document admits that “scant research currently exists on charter school performance” (53).

Failure to include the effect on student learning: The evaluation includes a section on ‘performance standards’ and tries to legitimize the practice of charter schools through analysis of demographics and funding.  However, it seems that any evaluation of an educational model needs first and foremost to examine the success of the learners.

Overall

Compiled by an international R&D service well trained in the creation of slick, well-formatted evaluations, the report looks good but tries to tackle too large a demographic without success.

While the organisation of available information provides a foundation for subsequent reports, the lack of consistent meaningful data makes drawing conclusions an impossible task.  While facts are compiled, the usefulness of this report to draw meaningful observations about the charter school experiment is questionable.

Societal Influence

There are several societal influences on our thoughts, feelings and behaviour:  

  • Social norms:  Rules that regulate human life, including social conventions, explicit laws and implicit cultural standards.  
  • Imitation:  Probably the most powerful social influence on our behaviour and attitudes is the behaviour of other people.
  • Social facilitation:  Increased activity resulting from the presence of another person.
  • Social loafing:  Decreased activity resulting from the presence of another person.
  • Reciprocity:  Another strong social influence is reciprocity, the tendency to pay back favours others have done for us.  Reciprocity does not require that the “favour” be initially requested or even wanted.  The debt of obligation can be so strong that reciprocity can be exploited by those who want us to comply with their requests when we would otherwise not do so.
  • Commitment:  Once people commit themselves by making a decision and acting on it, they are reluctant to renounce their commitment.  Commitment increases people’s compliance even when the reason for the original commitment is removed.
  • Attractive people:  One of the reasons people tend to comply with the requests of attractive people is that they want to be liked by attractive people; in their minds, being liked by attractive people makes them more desirable, too.  People tend to emphasize their associations with attractive and important people.
  • Authority:  People tend to comply with the requests of people in authority and to be swayed by their persuasive arguments, and such obedience is generally approved by society (Buskist et al., 2002, pp. 504–513).