School Age Observation Essay About Cafeteria

This article describes the use of an observation system to measure middle-school staff practices, environment characteristics, and student behavior in school common areas. Data were collected at baseline from 18 middle schools participating in a randomized controlled trial of school-wide Positive Behavior Support. The observations were reliable and showed sensitivity to differences between school settings and between schools. Multilevel models with observations nested within schools were used to examine the associations of staff practices and the school environment with student behavior. Less effective behavior management, more staff criticism, more extensive graffiti, and a higher percentage of low-income students were associated with student problem behavior. Greater use of effective behavior management and positive attention and a lower percentage of low-income students were associated with positive student behavior. The use of data-based feedback to schools for intervention planning and monitoring is illustrated. Implications for school-wide efforts to improve student behavior in middle schools are discussed.

This article reports on an observation system for measuring middle-school staff practices, environment characteristics, and student behavior in school common areas. The observation system was developed to measure the outcomes of school-wide interventions aimed at improving middle-school practices and environmental characteristics in an effort to decrease problem behaviors and increase positive behaviors in middle-school students. Almost two-thirds of aggressive incidents at school occur outside of the classroom in a school common area (Lockwood, 1997). Middle-school students feel particularly unsafe in school areas that lack adult supervision (Astor, Meyer, & Pitner, 2001). The observation system therefore focuses on common areas of the school, such as hallways, the cafeteria, outdoor areas, the gym or game room, and school entryways and bus areas, where students spend time before and after school, during passing times between classes, and during school breaks. The aims of this article are to describe the observation system procedures, reliability, and sensitivity to detect differences and to examine associations of observed staff practices and school environment characteristics with student behavior. Such descriptive and correlational information from baseline data can inform researchers and educators about potential mechanisms of school-wide efforts to improve student behavior.

Challenges of Early Adolescence

Early adolescence is a particularly important period of development because it is a time when diverse problems begin to emerge and because problems appearing in this stage often have more negative long-term consequences than do problems that develop later (Biglan et al., 2004). Early adolescence is a time when emotional states become less positive and more variable (Larson, Moneta, Richards, & Wilson, 2002), peer associations and friendships change (Hardy, Bukowski, & Sippola, 2002), frequent peer harassment (Nansel et al., 2001; Rusby, Forrester, Biglan, & Metzler, 2005) and relational aggression (Björkqvist, Lagerspetz, & Kaukiainen, 1992) occur, and adult supervision decreases (Richards, Miller, O’Donnell, Wasserman, & Colder, 2004; Stoolmiller, 1994). It is a time when peers become more salient and their influence increases (Dodge & Sherrill, 2006). For some adolescents, affiliation with deviant peers increases and the use of tobacco, alcohol, and illicit drugs accelerates (Biglan & Smolkowski, 2002; Dishion & Dodge, 2006).

The organization and structure of middle schools often pose additional challenges to early adolescents’ successful development (Alspaugh, 1998; Eccles, Lord, Roeser, Barber, & Jozefowicz, 1997; Roeser & Eccles, 1998; Roeser, Eccles, & Sameroff, 1998). The majority of aggressive incidents among youth occur at school, and most often are not reported to adults (Lockwood, 1997; Petrosino, Guckenburg, DeVoe, & Hanson, 2010). Concurrently, students experience a substantial decline in teacher support across the middle-school years, which is unfortunate given that student perception of teacher support is associated with fewer behavior problems (Way, Reddy, & Rhodes, 2007). Lower delinquency and less peer victimization occurred in schools with rules that were clear and were perceived by students to be fair (Gottfredson, Gottfredson, Payne, & Gottfredson, 2005).

In contrast, schools commonly use punitive consequences that remove students from the classroom and from school activities. Such removal is not effective at reducing problem behavior or improving academic progress (Gottfredson, Gottfredson, & Hybl, 1993). Removal from school (suspension and expulsion) increases as a sanction in middle school (Clark, Dogan, & Akbar, 2003). The increased emphasis on punitive discipline exacerbates behavioral problems at school (Mayer, 1995). Indeed, in far too many cases, these efforts exacerbate emotional and behavioral problems and contribute to rejection by nondeviant peers and the formation of friendships with other rejected students (Walker, Colvin, & Ramsey, 1995; Walker, Ramsey, & Gresham, 2004). These conditions also contribute to delinquency among aggressive students (Clark et al., 2003). Improvements are needed in middle-school environments so that they can have a more positive influence on social outcomes for youth.

Staff Member Practices and Student Behavior

In a meta-analysis of 165 school-based programs aimed at preventing delinquency (86 of the studies involved middle schools), the largest effect sizes in reducing delinquency were for programs that involved school-wide behavior management interventions (Wilson, Gottfredson, & Najaka, 2001). Interventions that focused on the school social environment had significant effects on delinquency, alcohol and drug use, truancy, school dropout, and other problem behaviors. School-wide primary prevention efforts also provide a needed foundation for more focused behavioral interventions for individual students (Horner, Sugai, & Anderson, 2010). Interventions involving school-wide behavior management strategies result in reductions in antisocial behavior (Sprague et al., 2001), vandalism (Mayer, 1995), aggression (Grossman et al., 1997; Lewis, Sugai, & Colvin, 1998), later delinquency, and alcohol, tobacco, and other drug use (Kellam & Anthony, 1998; Kellam, Mayer, Rebok, & Hawkins, 1998; O’Donnell, Hawkins, Catalano, Abbott, & Day, 1997). Providing clear expectations, monitoring students’ behavior, and consistently delivering positive reinforcement for students who follow expectations can reduce aggressive and disruptive behavior and increase cooperative behavior in middle schools.

School-Wide Positive Behavior Support

Positive Behavior Intervention and Support (PBIS) focuses on improving behaviors in all areas of the school (Sprague & Golly, 2004; Sprague & Horner, 2007; Sugai & Horner, 2002). PBIS includes multiple strategies aimed at school staff members and students, such as establishing school-wide behavior rules, posting and teaching the rules, and establishing a system for active supervision in all school areas and for providing positive reinforcement (Sprague & Golly, 2004; Sprague & Horner, 2007). The PBIS team is provided with data-based feedback regarding its implementation of PBIS practices and the impact of implementation on student behavior as indexed by discipline referral patterns (Irvin, Tobin, Sprague, Sugai, & Vincent, 2004; Irvin et al., 2006). In the present randomized controlled trial (RCT) of PBIS, baseline observation data showing the frequency and location of observed student problem behavior, and the extent to which school staff monitor and provide positive attention to students, are presented to intervention schools to help pinpoint priorities for improvement.

Research on school-wide PBIS in middle schools has mainly involved quasi-experimental designs with small groups of schools, and many of these studies have used office discipline referrals to measure outcomes. Study findings include reductions in the number of discipline referrals for aggressive and oppositional behaviors (Sprague et al., 2001) and in peer aggression among boys (Metzler, Biglan, Rusby, & Sprague, 2001), as well as lower frequency and severity of juvenile arrests (Sprague & Nishioka, 2004). In addition, case studies in middle schools have examined the impact of PBIS on student behavior in common areas during transition times. Clear expectations, reminders, and incentives were used to promote appropriate behavior during transitions to lunch, resulting in significant reductions in inappropriate behaviors such as running, pushing, and yelling (Oswald, Safran, & Johanson, 2005). In another case study of PBIS, when middle-school students were taught how to be quieter in the hallway during transitions and were provided incentives for expected behavior, a noticeable decrease in sound levels during transition times was found (Kartub, Taylor-Green, March, & Horner, 2000).

The Middle-School Environment

Rutter, Maughan, Mortimore, Ouston, and Smith (1979) examined the impact of the physical features of schools on student outcomes such as attendance, behavior, and academic performance. Displaying student work on the walls was considered a form of positive feedback to students and was significantly associated with performance on academic tests. Cleanliness and tidiness of classrooms were significantly associated with positive student behavior. Conversely, damaged school property (e.g., cracked windows, broken chairs) and graffiti were associated with student violence, student fighting, and low levels of student on-task behavior. Graffiti on school grounds was also associated with truancy.

School-wide interventions have been developed that include the manipulation of such physical features of the school. Embry, Flannery, Vazsonyi, Powell, and Atha (1996) promoted the public display of student work and student recognition in schools. More recent research on school climate has focused on multiple dimensions, including student safety, the physical school environment, and the quality of teacher–student interactions (Cohen, McCabe, Michelli, & Pickeral, 2009). Yet, little is known about the ways in which staff practices and school physical features may interact to produce improved student behavior in school settings.

Study Aims

The first aim of this study is to describe the school observation system procedures, code definitions, and reliability. The second aim is to test the sensitivity of the observation system to detect differences in the behavior management practices and student behavior. The sensitivity of the observation measure is tested by examining differences in staff and student behavior in the different common area settings. Lower staff monitoring and higher rates of behavior problems are expected to occur in outdoor areas compared to the hallways and cafeteria. The sensitivity of the observation measure is also tested by examining differences in these measures between schools. A strength of direct observations is that they tend to be sensitive to behavioral changes (Snyder et al., 2006). Using a measure that is sensitive in detecting differences is critical to assist schools in pinpointing areas that need improvement and for developing and monitoring school-wide interventions. The third aim is to investigate the concurrent associations between staff practices and school environmental features with overall student behavior in middle-school common areas. The extent to which effective behavioral management practices and school-level environmental factors are associated with students’ positive and problem behaviors is modeled. Effective behavior management, positive attention, good staff-to-student ratio, display of student work, and clear rules are expected to be associated with student positive behavior. Poor behavior management practices, low staff-to-student ratio, negative staff–student interactions, and a school environment with damaged property and graffiti are expected to be associated with student problem behavior.

Method

Participating Schools

This study uses baseline data from 18 middle schools that are participating in an RCT of school-wide PBIS. These schools are in small- to medium-sized communities in Oregon. Enrollment in the middle schools averaged 400 students (range = 65–1,100 students). The majority of the middle schools included Grades 6–8, and four schools enrolled Grades 7 and 8 only. On average, 49% of middle-school students in the schools receive free or reduced lunch (range = 20%–80%), and average enrollment across schools was 82% White, 9% non-White Hispanic or Latino, 1% Black, 2% Asian or Pacific Islander, and 3% American Indian or Alaskan Native (http://nces.ed.gov/ccd/schoolsearch, accessed on August 21, 2009).

Prior to the collection of any observation data in the schools, school principals signed a letter of agreement that described the activities involved in participating in the RCT. Parents of students in participating schools were sent a letter informing them of the study and that anonymous observation data were going to be collected. For the common area and school environment observations, no individual student or teacher data were collected. Behavioral frequencies were tallied for any staff member or middle-school student present in the observation area.

Observation Assessment Procedures

The observers were research assistants who were kept masked to school condition. Observers were trained with procedures used in previous observation studies (e.g., Rusby, Foster, & Taylor, 2008): learning code definitions, examples, and non-examples and practicing with flashcards of examples, with videotapes, and in schools. Observers were required to achieve a minimum of 80% reliability on the staff and student behavior codes before collecting study data. It took approximately 6 weeks of training and practice for observers to achieve reliability.

For the collection of baseline observation data, observers visited each school on three different days in spring. Multiple observations in the common areas (i.e., hallways, cafeteria, outdoor areas, gym, and school entryway) were collected per school visit. A total of 411 common area observations were collected (an average of 23 per school across the 3 baseline time points). Table 1 shows the protocol for scheduling the common area observations. Following each observation, observers completed a rating of staff and student behavior. Observers also walked around the school common areas to count instances of vandalism and property damage and to rate the extent to which clear rules were posted and student work and recognition were displayed. This walk about assessment was completed during each of the three visits to the schools.

Table 1

Common Area Observation Schedule Protocol

Common area observations

Observers selected an area that was approximately 20 × 15 feet, depending on location. They focused on any staff members and middle-school students who were within the selected area. Observers used physical boundaries to define the areas, such as in the hallway between two classroom doors, or in an area with four picnic tables in the cafeteria; outdoors, the area could be half of the basketball court. The frequency of staff and student behaviors in the selected area was simultaneously coded. Tallies of behaviors were recorded in 1-minute segments, and each observation lasted for approximately 20 minutes. Hallway observations were typically shorter (approximately 5 minutes) due to school schedules; however, data were only kept on hallway observations lasting 3 minutes or more. Reliability checks were conducted on 96 randomly selected baseline common area observations (23% of the observations), a sufficient number for testing reliability. Following each common area observation, observers completed ratings of staff monitoring, behavior management practices, and positive attention. Observers also completed ratings of overall student positive behavior.

Walk about ratings of middle-school environment

The observers also walked around the school when students were in class to measure school environmental features. The extent of damage to school property and graffiti, and the display of student work, student recognition, and expectations/rules were rated (see Measures).

Measures

The observation system was created for this RCT of PBIS, focusing on behaviors salient to the school-wide intervention goals. All observation systems used paper-and-pencil collection methods. The extent to which staff actively monitored students, effectively managed student behavior, and provided positive attention to students and the extent to which students acted positively and appropriately or exhibited problem behaviors were assessed. Similar to those in a study by Cushing, Horner, and Barrier (2003), observations occurred in the common areas of the school and captured peer interactions and subsequent adult positive or negative reinforcement for the behaviors. In addition, the observations captured staff antecedent behaviors such as monitoring and providing clear expectations. Many of the observation codes of staff and students were derived from the ASSIST coding system (Rusby, Taylor, & Milchak, 2001).

Staff practices

Observers tallied each time a staff member actively connected with a student or students, provided praise or recognition to a student, verbally criticized a student’s behavior, provided a tangible positive reinforcer, or provided a tangible punitive consequence. Table 2 provides a brief descriptive definition and inter-rater agreement for each of the codes. The average inter-rater agreement for staff behavior in common areas was 92%. A rate per minute for each code was computed by dividing the frequency counts by the number of minutes observed. A composite score for staff praise and reward was computed by adding the rate per minute of praise/approval and positive reinforcement. The inter-rater reliability of this composite score was estimated using a one-way random effects intraclass correlation coefficient (ICC; Shrout & Fleiss, 1979). The inter-rater reliability ICC for staff praise and reward was .91.
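The scoring steps described above can be summarized in a short sketch. The code below is not the authors' code; the column names and example tallies are hypothetical. It shows rate-per-minute scoring, the praise-and-reward composite, and a one-way random-effects ICC(1,1) in the sense of Shrout and Fleiss (1979).

```python
import pandas as pd

def rates_per_minute(df: pd.DataFrame, minutes_col: str, count_cols: list) -> pd.DataFrame:
    """Convert raw frequency tallies to rates per minute for each observation."""
    return df[count_cols].div(df[minutes_col], axis=0).add_suffix("_rpm")

def icc_one_way(scores: pd.DataFrame) -> float:
    """One-way random-effects ICC(1,1); rows are observations, columns are coders."""
    n, k = scores.shape
    grand_mean = scores.values.mean()
    row_means = scores.mean(axis=1)
    # Between- and within-target mean squares from a one-way ANOVA.
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores.sub(row_means, axis=0)) ** 2).values.sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical tallies for three observations.
obs = pd.DataFrame({
    "minutes_observed": [20, 18, 5],
    "praise_approval": [2, 1, 0],
    "positive_reinforcement": [1, 0, 0],
})
rpm = rates_per_minute(obs, "minutes_observed", ["praise_approval", "positive_reinforcement"])
# Composite: staff praise and reward = praise/approval rate + tangible reinforcement rate.
obs["praise_reward_rpm"] = rpm.sum(axis=1)

# Reliability on double-coded sessions (hypothetical rates from two coders).
double_coded = pd.DataFrame({"coder_1": [0.10, 0.00, 0.20], "coder_2": [0.10, 0.05, 0.20]})
print(f"ICC(1,1) = {icc_one_way(double_coded):.2f}")
```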

Table 2

Definitions and Inter-Rater Agreement for Observation Codes

From the observer ratings, seven items measuring effective behavior management included whether staff members had good control or influence on students, gave clear instructions, prepared students for transitions, monitored students, prompted expected behavior, and were consistent. These items were rated on a 5-point scale from “did not occur” to “constantly occurred.” The effective behavior management items were used in a study in elementary-school classrooms (Rusby et al., 2008) and were originally derived from the Oregon Youth Study observer impressions of parent monitoring and discipline (Capaldi & Patterson, 1989). Reliability (Cronbach’s alpha) for effective behavior management was .87. Two items, “adults are warm and caring toward students” and “adults praise students for specific behaviors,” measure positive attention and are rated on the same 5-point scale. The correlation between the two positive attention items was .43 (p < .001).
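For reference, Cronbach's alpha for a set of rating items is computed from the item variances and the variance of the total score; the sketch below uses hypothetical ratings on seven 5-point items and is not taken from the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of observations (rows) x scale items (columns)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point ratings on the seven behavior-management items.
ratings = np.array([
    [4, 4, 3, 5, 4, 4, 3],
    [2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 4, 5, 5, 4],
    [3, 3, 4, 3, 3, 4, 3],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```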

Student behavior

Observers tallied each time a student was noncompliant, engaged in potentially dangerous behavior, or was verbally or physically aggressive toward a peer. Code definitions and inter-rater agreement for the student behavior codes are in Table 2. The average inter-rater agreement for student behavior was 82%. A composite score for negative peer interactions was computed by adding the rate per minute of verbal aggression, potentially dangerous behavior, and physical aggression. The inter-rater reliability ICC for negative peer interactions was .90. Overall student positive behavior was measured with eight items from the observer ratings on cooperation and prosocial behaviors. This measure was adapted from the Classroom Atmosphere Rating (Greenberg & Wehby, 1995). Reliability (Cronbach’s alpha) for student positive behavior was .91.

School environment features

The school environment ratings measured indicators of problems, such as damage to school property and graffiti, and of positive recognition, such as display of student work and student recognition (Embry et al., 1996; Rutter et al., 1979). Damage to school property included broken windows, broken equipment, holes in walls, and doors missing from lockers. To measure graffiti, the number of walls with graffiti was counted and the extent of graffiti on each wall was measured by estimating whether one-quarter, one-half, three-quarters, or the full wall was covered with graffiti. The number of walls and display cases with student work and with student recognition was counted. The extent to which clear rules and expectations were posted in the school common areas was also assessed (Schoolwide Evaluation Tool [SET]; Horner et al., 2004; Sugai, Lewis-Palmer, Todd, & Horner, 2001).

Other school factors

Observers also estimated the number of students and adults who were present during each observation, to get a ratio of staff to students for each observation. Demographic information of each participating school was collected from the Oregon Department of Education Web site (Oregon Department of Education, 2009) when baseline data were collected. The Oregon Department of Education data include percentage of students eligible for free and reduced lunch (reflecting percentage of students from low-income families) and number of middle-school students enrolled (reflecting school size).

Results

Analytic Procedures

First, the univariate distributions were examined to ensure an accurate interpretation of the measures of central tendency and variability. Scales that exhibited moderate positive skew and kurtosis (observed adult criticism and student problem behavior) underwent square root transformations to better approximate normal distributions. Next, the extent to which staff practices and student behavior differed by common area setting was tested: a one-way analysis of variance was run for each behavior of interest, and Scheffé post hoc comparisons were conducted to further examine significant omnibus setting effects. Finally, the extent to which staff practices, the school environment, and student behavior differed by school was also tested using analysis of variance.
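A minimal sketch of these preliminary steps follows, assuming a flat observation-level file and hypothetical column names (the article does not specify the software used for this stage); the Scheffé follow-ups are noted in a comment rather than coded.

```python
import numpy as np
import pandas as pd
from scipy import stats

obs = pd.read_csv("common_area_observations.csv")  # hypothetical file, one row per observation

# Square-root transform the positively skewed scales (staff criticism, student problem behavior).
for col in ["staff_criticism_rpm", "problem_behavior_rpm"]:
    obs[col + "_sqrt"] = np.sqrt(obs[col])

# One-way ANOVA: does staff effective behavior management differ across common-area settings?
groups = [g["effective_mgmt"].values for _, g in obs.groupby("setting")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"Setting effect on effective behavior management: F = {f_stat:.2f}, p = {p_value:.3f}")
# Significant omnibus effects would then be followed up with Scheffe contrasts (not shown here).
```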

To appropriately account for the nested nature of the data, multilevel models with observations clustered by school were run in Mplus, estimating random intercept coefficients (Hedeker, Gibbons, & Flay, 1994; Nich & Carroll, 1997). For each predictive model, an unconditional model was first run to determine the proportion of the variation in the outcome measure that was explained at the between-school level. The within-school-level predictors were staff behaviors, which were observed multiple times in different school common areas. The between-school-level predictors included global ratings of the school environment and school demographic factors. The first model, predicting student problem behavior, included staff effective behavior management, staff criticism, and staff-to-student ratio as within-school-level predictors and extensiveness of graffiti, damaged school property, number of students enrolled, and percentage of students receiving free or reduced lunch as between-school-level predictors. The second model predicted student positive behavior. For this model, staff effective behavior management, positive attention, and staff-to-student ratio were within-school-level predictors, and display of student work, display of rules/expectations, number of students enrolled, and percentage of students receiving free or reduced lunch were the between-school-level predictors.
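The study fit these models in Mplus. As a rough illustration only, the sketch below fits an analogous two-level random-intercept model in Python with statsmodels; all variable and file names are hypothetical, and the school-level predictors are simply entered as covariates that are constant within each school.

```python
import pandas as pd
import statsmodels.formula.api as smf

obs = pd.read_csv("common_area_observations.csv")  # hypothetical file, one row per observation

# Unconditional model: partitions outcome variance into between- and within-school components.
null = smf.mixedlm("problem_behavior_sqrt ~ 1", obs, groups=obs["school_id"]).fit()
between_var = null.cov_re.iloc[0, 0]
icc = between_var / (between_var + null.scale)
print(f"Proportion of variance between schools: {icc:.3f}")

# Conditional model: within-school staff practices plus school-level environment and demographics.
model = smf.mixedlm(
    "problem_behavior_sqrt ~ effective_mgmt + staff_criticism_sqrt + staff_student_ratio"
    " + graffiti_extent + property_damage + enrollment + pct_free_reduced_lunch",
    obs, groups=obs["school_id"],
).fit()
print(model.summary())
```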

Four variables were excluded from consideration in the predictive models. Observation of staff actively connecting with students was excluded to prevent multicollinearity. Actively connecting with students is considered a component of effective behavior management and was significantly associated with that construct (r = .50, p < .001). The display of student recognition was excluded because it predominantly captured recognition for sports achievements, such as display cases of trophies won in the past, deviating from the original intention of the measure. Last, observed staff praise and reward and observed use of tangible punitive consequences had such low rates that they were excluded from the models (mean [M] = .01, standard deviation [SD] = .17 and M = .01, SD = .05, respectively).

Observed Staff and Student Behavior in Different Common Areas

Analyses of variance showed that levels of effective behavior management, criticism, and positive attention by staff differed across common areas. Student problem behavior and positive behavior also differed by common area. As shown in Table 3, effective behavior management was greater in the lunchroom than in the hallways and outdoor areas. Positive attention was also greater in the lunchroom than in the hallways. Staff criticized students more in the hallways than in the lunchroom and entryways/exits. The staff-to-student ratio was poorest in the lunchroom and outdoor areas. Student problem behavior was highest and positive behavior was lowest in outdoor areas.

Table 3

Differences in Staff Practices and Student Behavior by Common Area

Differences in Staff Practices, School Environment, and Student Behavior by School

The positive staff behaviors (effective behavior management and positive attention) significantly differed by school, but criticism did not. Student physical aggression and potentially dangerous behavior differed by school, but verbal aggression did not. Student positive behavior also differed by school. Damaged school property significantly differed by school, but extensiveness of graffiti did not. Display of student work, student recognition, and rules/expectations significantly differed by school. The analyses of variance depicting school-level differences in staff practices, school environment, and student behavior are shown in Table 4.

Table 4

Differences in Staff Practices, School Environment, and Student Behavior by School

Illustrative Example of Different Student Behaviors by School Common Area for Two Schools

To illustrate how student behavior may differ by common school area, Figure 1 shows an example from School A and School B. Observed student behaviors are shown as rates per minute. In School A, most of the problem behavior occurred in outdoor areas and in the hallway. In outdoor areas, physical aggression occurred approximately once every 2½ minutes, potentially dangerous behavior approximately once every 2 minutes, and verbal aggression approximately once every 4 minutes. For School B, each problem behavior occurred less often than once every 5 minutes, and problem behavior was more evenly distributed across the different common areas of the school. Note that, in School B, students were not in a gym during break times. This graphed information is presented to schools to help in developing school-wide behavior support plans. For example, in School A the recommendation to the behavior support team was to focus behavior support efforts first in outdoor areas and then in the hallways, whereas in School B increasing positive incentives school-wide was recommended to prevent minor behavior problems throughout the school.
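The conversion used in this illustration is simply the reciprocal of the observed rate; for example, a rate of .40 per minute corresponds to roughly one event every 2½ minutes:

```latex
\text{minutes between events} = \frac{1}{\text{rate per minute}},
\qquad \text{e.g., } \frac{1}{0.40\,\text{per minute}} = 2.5\ \text{minutes}.
```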

Figure 1

Example of student problem behaviors in common areas of Schools A and B.

Multilevel Model Predicting Student Problem Behavior

The unconditional multilevel model estimated that 7.6% of the variance in student problem behavior was at the school level and 92.4% was at the individual observation level (within school). Lower use of effective behavior management practices and higher rates of criticism of students by school staff members significantly predicted student problem behavior within schools. Extensiveness of graffiti and percentage of students receiving free or reduced lunch predicted student problem behavior between schools. This model accounted for 6.1% of the within-school variance and 71.9% of the between-school variance. These results are depicted in Table 5.
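These proportions follow the usual multilevel-modeling conventions, which the article does not spell out: the unconditional model's variance components yield the intraclass correlation, and the reduction in each variance component from the unconditional to the conditional model gives the proportion of variance explained at that level.

```latex
\rho_{\mathrm{ICC}} = \frac{\sigma^2_{\mathrm{between}}}{\sigma^2_{\mathrm{between}} + \sigma^2_{\mathrm{within}}},
\qquad
R^2_{\mathrm{within}} = \frac{\sigma^2_{\mathrm{within,\,null}} - \sigma^2_{\mathrm{within,\,model}}}{\sigma^2_{\mathrm{within,\,null}}},
\qquad
R^2_{\mathrm{between}} = \frac{\sigma^2_{\mathrm{between,\,null}} - \sigma^2_{\mathrm{between,\,model}}}{\sigma^2_{\mathrm{between,\,null}}}.
```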

Table 5

Model of Staff Practices and School Environment Predicting Student Problem Behavior

Multilevel Model Predicting Student Positive Behavior

The unconditional model estimated 5.0% of the variance in student positive behavior between schools and 95.0% of the variance within schools. Effective behavior management, positive staff attention, and lower student-to-staff ratio predicted positive student behavior within school. The model accounted for 29.1% of the within-school variance. None of the school environment variables predicted positive student behavior. Lower percentage of students receiving free or reduced lunch predicted positive student behavior. This model accounted for 40.7% of the between-school variance (see Table 6).

Table 6

Model of Staff Practices and School Environment Predicting Student Positive Behavior

Discussion

The results indicate that schools, and common areas within schools, differ systematically in both the level of staff practices aimed at providing positive behavior support and the level of student behavior. The variability in staff practices was significantly related to student behavior. These results provide further support for the importance of effective behavior management practices by staff and, at the same time, indicate the value of direct observation data for studying school-wide positive behavior support practices.

Differences were found for staff practices and student behavior in the different common areas across schools. Student-to-staff ratios were poorest in the lunch room and in outdoor areas. Although student-to-staff ratios were poor in the lunch rooms, the use of effective behavior management was greater in lunchrooms than in outdoor areas. In outdoor areas that had poor student-to-staff ratios and lower levels of effective management, student problem behavior was higher and positive behavior was lower. These differences indicate that, even when student-to-staff ratios are poor, staff use of effective behavior management practices may deter problem behavior from occurring and promote positive student behaviors. In addition, in areas in which effective behavior management practices are low, problem behaviors are likely to occur. Support for this hypothesis is shown in the model of student problem behavior (see Table 5), which shows that lower use of effective behavior management was associated with student problem behaviors, whereas student-to-staff ratio was not. The model of student positive behavior (see Table 6) shows that effective behavior management, positive attention, and better student-to-staff ratios were associated with student positive behavior.

Overall, low rates of approval/praise and tangible reinforcers were also found across schools. The average rate of approval/praise from staff to students in common areas was .05 per minute, or about once every 20 minutes. The average rate of tangible reinforcers was .01 per minute, or about once every 100 minutes. These observed variables are expected to increase at postintervention data collection in middle schools using PBIS (Metzler et al., 2001; Oswald et al., 2005). The observed paucity of positive reinforcement in these middle schools is unfortunate given its demonstrated salience for improving behavior in school settings, particularly when combined with strategies for clearly defining and teaching specific desired behaviors (e.g., Kartub et al., 2000).

For the school environment variables, extensiveness of graffiti and percentage of students receiving free or reduced lunch were associated with student problem behavior. It is important to note that one cannot infer that the counts of damaged school property measured damage caused by students (vandalism). A measure of vandalism as reported by students has also been collected for this RCT and will be used in future analyses. None of the expected school-level environmental variables were associated with student positive behavior, except a lower percentage of students receiving free or reduced lunch. The measure of displayed rules and expectations had a low base rate; few schools had specific rules and expectations posted throughout the common areas of the school. Given that these are baseline data, it is possible that increased rates will be found in some schools’ postintervention data, which may alter the associations between this variable and student behavior. One of the PBIS strategies for teaching expectations of safety, responsibility, and respect is to publicly display the expectations throughout the school (Sprague & Golly, 2004). It is likely that intervention schools will better define school rules and expectations and increase their posting of specific rules.

The figure presenting data from the two schools illustrates the different rates of student problem behavior in each school’s common areas. We note that the overall rates of student problem behavior, averaged across the different common areas, differ only slightly (.48 per minute for School A, or about once every 2 minutes, and .36 per minute for School B, or about once every 2¾ minutes). What is more important in this illustration, however, is the difference in rates of problem behavior across the different settings for School A. In School A, student problem behavior occurred in outdoor areas more than once per minute, and in the hallways it occurred approximately once every 1½ minutes. Such information can be useful for schools in forming their plans for promoting positive behavior. It would be advantageous for School A to focus its efforts on the outdoor areas and hallways, whereas School B could focus on embedding strategies throughout the school common areas. Presenting data-based feedback such as this to school staff is a critical feature of PBIS and helps schools define where problems tend to occur and develop a plan to decrease them (Metzler et al., 2001; Sprague et al., 2001).

Adequate reliability was achieved for the individual observation behavioral codes, with good reliability for the composites of interest: staff praise/reward and negative behavior among students. Also, good inter-item reliability was found for the observer rating scales of effective behavior management and student positive behavior. Moreover, both the direct observation and observer rating measures showed sensitivity to differences. This is apparent from the differences found between common area settings and differences found between schools on these measures. Significant differences between schools were found on staff use of effective behavior management practices and positive attention. Also, school differences were found on features of the school environment, such as the amount of damage to school property and the extent to which schools displayed student work, student recognition, or specific rules and expectations. Differences between schools were also found for student physical aggression, potentially dangerous behavior, and positive behavior. Such sensitivity shows promise for these measures to detect change given an intervention (Snyder et al., 2006).

Limitations

The number of schools in this study is too small to generalize our results to middle-school staff behavior, middle-school environments, or student behavior in middle schools generally, and this investigation does not make such claims. The number of baseline observations (a total of 411 common area observations), however, is ample for examining the reliability and sensitivity of the measures and for using multilevel models to examine concurrent relationships of staff behavior and the school environment with student behavior.

Another important limitation to note is that the multilevel models show concurrent associations and are not longitudinal prediction models. Given this limitation, interpretations of causality cannot be made. For example, staff criticism was among the within-school variables associated with student behavior problems. The data do not show that staff criticism caused higher rates of problem behaviors. It is possible that high rates of problem behavior caused increases in staff use of criticism. The data show that rates of staff criticism and problem behavior tend to co-occur.

The environment assessment used in this study for measuring the display of student rewards included rewards for sports teams as well as rewards for student academic achievement or community service. Although there is no way to separate these different types of rewards in this data set, doing so in future studies is recommended. In discussions with our assessment team, it was determined that rewards for sports teams, such as display cases holding many years of sports trophies, were the most prevalent form of student reward displayed in the middle schools. The observers noted that few displays recognized academic, artistic, or community service achievements or other forms of valued school contribution.

Last, observation data in a group RCT such as this take time to collect and format for analysis. This delay is counterproductive to providing data feedback in a timely manner so that it will be useful for schools. It may not be feasible for schools to collect observation data without additional supportive resources. Finding systematic and efficient ways for collecting and summarizing repeated observation data for schools is challenging, yet necessary for providing formative data to help schools make data-based decisions for promoting positive student behavior and a school climate that is conducive to student learning and success.

Future Directions for this RCT of PBIS

In this RCT of PBIS, observation data are being collected 1 and 2 years after the establishment of the intervention in half of the study schools. Multiple pre- and postintervention observation data points will allow random coefficient analyses (Singer & Willett, 2003) of differences in pre- and postintervention intercepts and slopes for staff practices, school environment variables, and student behavior. Mediation models (Baron & Kenny, 1986; Judd & Kenny, 1981) will be used to test the extent to which changes in staff practices and school environments are functionally related to changes in student behavior. Given the demonstrated sensitivity of the observation data, we expect to be able to detect changes in the observed variables if they occur.
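As a non-authoritative sketch of the Baron and Kenny (1986) mediation logic referenced here, applied to hypothetical school-level change scores (all variable and file names are illustrative only; modern analyses would add a bootstrap test of the indirect effect):

```python
import pandas as pd
import statsmodels.formula.api as smf

d = pd.read_csv("school_level_change_scores.csv")  # hypothetical school-level data

# Step 1: total effect of intervention assignment (X) on student behavior change (Y).
c_total = smf.ols("student_behavior_change ~ intervention", d).fit()
# Step 2: effect of intervention on the proposed mediator, staff practice change (M).
a_path = smf.ols("staff_practice_change ~ intervention", d).fit()
# Step 3: effect of the mediator on the outcome, controlling for intervention.
bc_paths = smf.ols("student_behavior_change ~ intervention + staff_practice_change", d).fit()

indirect = a_path.params["intervention"] * bc_paths.params["staff_practice_change"]
print(f"Indirect (mediated) effect estimate: {indirect:.3f}")
```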

Implications for Practice

This article demonstrates a reliable and sensitive system for observing staff practices, school environment features, and student behaviors in school areas in which problems tend to occur. The observation system was developed for use in an RCT of PBIS, and can be used in other studies evaluating whole-school interventions for preventing problem behaviors. Data-based feedback for individual schools is illustrated and can be used in the context of evaluation studies as well as by practitioners who are providing support for schools that are developing plans for improving student behavior. Such information can be used to pinpoint what type of problem behaviors are occurring and where they typically take place. Such information can also illustrate what staff practices may need improving to achieve desired student outcomes, such as monitoring, displaying clear expectations, and providing positive feedback to students.

Acknowledgments

This research was supported by grants R01-DA019037 and 1-P30-DA018760 from the National Institute on Drug Abuse.

Contributor Information

Julie C. Rusby, Oregon Research Institute.

Ryann Crowley, Oregon Research Institute.

Jeffrey Sprague, University of Oregon.

Anthony Biglan, Oregon Research Institute.

References

  • Alspaugh JW. Achievement loss associated with the transition to middle school and high school. Journal of Educational Research. 1998;92:20–25.
  • Astor RA, Meyer HA, Pitner RO. Elementary and middle school students’ perceptions of violence-prone school sub-contexts. Elementary School Journal. 2001;101:511–528.
  • Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology. 1986;51:1173–1182.
  • Biglan A, Brennan PA, Foster SL, Holder HD, Miller TL, Cunningham PB, et al. Helping adolescents at risk: Prevention of multiple problem behaviors. New York: Guilford; 2004.
  • Biglan A, Smolkowski K. Intervention effects on adolescent drug use and critical influences on the development of problem behavior. In: Kandel DB, editor. Stages and pathways of drug involvement: Examining the gateway hypothesis. New York: Cambridge University; 2002. pp. 158–183.
  • Björkqvist K, Lagerspetz KMJ, Kaukiainen A. Do girls manipulate and boys fight? Developmental trends in regard to direct and indirect aggression. Aggressive Behavior. 1992;18:117–127.
  • Capaldi DM, Patterson GR. Psychometric properties of fourteen latent constructs from the Oregon Youth Study. New York: Springer–Verlag; 1989.
  • Clark R, Dogan RR, Jr, Akbar NJ. Youth and parental correlates of externalizing symptoms, adaptive functioning, and academic performance: An exploratory study in preadolescent blacks. Journal of Black Psychology. 2003;29:210–229.
  • Cohen J, McCabe EM, Michelli NM, Pickeral T. School climate: Research, policy, teacher education and practice. Teacher College Record. 2009;111:180–213.
  • Cushing LS, Horner RH, Barrier H. Validation and congruent validity of a direct observation tool to assess student social climate. Journal of Positive Behavior Interventions. 2003;5:225–237.
  • Dishion TJ, Dodge KA. Deviant peer contagion in interventions and programs: An ecological framework for understanding influence mechanisms. In: Dodge KA, Dishion TJ, Lansford JE, editors. Deviant peer influences in programs for youth: Problems and solutions. New York: Guilford; 2006. pp. 14–43.
  • Dodge KA, Sherrill MR. Deviant peer group effects in youth mental health interventions. In: Lansford JE, Dodge KA, Dishion TJ, editors. Deviant peer influences in programs for youth: Problems and solutions. New York: Guilford; 2006. pp. 97–121.
  • Eccles JS, Lord SE, Roeser RW, Barber BL, Jozefowicz DMH. The association of school transitions in early adolescence with developmental trajectories through high school. In: Schulenberg J, Maggs JL, Hurrelmann K, editors. Health risks and developmental transitions during adolescence. New York: Cambridge University Press; 1997. pp. 283–320.
  • Embry DD, Flannery D, Vazsonyi A, Powell K, Atha H. PeaceBuilders: A theoretically driven, school-based model for early violence prevention. American Journal of Preventive Medicine. 1996;12:91–100.
  • Gottfredson DC, Gottfredson GD, Hybl LG. Managing adolescent behavior: A multiyear, multischool study. American Educational Research Journal. 1993;30:179–215.
  • Gottfredson GD, Gottfredson DC, Payne A, Gottfredson NC. School climate predictors of school disorder: Results from national delinquency prevention in school. Journal of Research in Crime and Delinquency. 2005;42:421–444.
  • Greenberg MT, Wehby JH. Classroom Atmosphere Rating Scale. Fast Track: Conduct Problems Prevention Research Group; 1995. Unpublished manual.
  • Grossman DC, Neckerman HJ, Koepsell TD, Liu P, Asher KN, Beland K, et al. Effectiveness of a violence prevention curriculum among children in elementary school. Journal of the American Medical Association. 1997;277:1605–1611.
  • Hardy CL, Bukowski WM, Sippola LK. Stability and change in peer relationships during the transition to middle-level school. Journal of Early Adolescence. 2002;22:117–142.
  • Hedeker D, Gibbons RD, Flay BR. Random-effects regression models for clustered data with an example from smoking prevention research. Journal of Consulting and Clinical Psychology. 1994;62:757–765.
  • Horner RH, Sugai G, Anderson CM. Examining the evidence base for school-wide positive behavior support. Focus on Exceptional Children. 2010;42(8):1–14.
  • Horner RH, Todd A, Lewis-Palmer T, Irvin L, Sugai G, Boland J. The school-wide evaluation tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Intervention. 2004;6:3–12.
  • Irvin LK, Horner RH, Ingram K, Todd AW, Sugai G, Sampson N, et al. Using office discipline referral data for decision-making about student behavior in elementary and middle schools: An empirical investigation of validity. Journal of Positive Behavior Interventions. 2006;8:10–23.
  • Irvin LK, Tobin TJ, Sprague JR, Sugai G, Vincent CG. Validity of office discipline referral measures as indices of school-wide behavioral status and effects of school-wide behavioral interventions. Journal of Positive Behavior Interventions. 2004;6:131–147.
  • Judd CM, Kenny DA. Estimating the effect of social interventions. New York: Cambridge University Press; 1981.
  • Kartub DT, Taylor-Green S, March RE, Horner RH. Reducing hallway noise: A systems approach. Journal of Positive Behavior Interventions. 2000;2:179–182.
  • Kellam SG, Anthony JC. Targeting early antecedents to prevent tobacco smoking: Findings from an epidemiologically based randomized field trial. American Journal of Public Health. 1998;88:1490–1495.
  • Kellam SG, Mayer LS, Rebok GW, Hawkins WE. Effects of improving achievement on aggressive behavior and of improving aggressive behavior on achievement through two preventive interventions: An investigation of causal paths. In: Dohrenwend BP, editor. Adversity, stress, and psychopathology. New York: Oxford University Press; 1998. pp. 486–505.
  • Larson R, Moneta G, Richards MH, Wilson S. Continuity, stability, and change in daily emotional experience across adolescence. Child Development. 2002;73:1151–1165.
  • Lewis TJ, Sugai G, Colvin G. Reducing problem behavior through a school-wide system of effective behavioral support: Investigation of a school wide social skills training program and contextual interventions. School Psychology Review. 1998;27:446–459.
  • Lockwood D. Violence among middle school and high school students: Analysis and implications for prevention. National Institute of Justice Series. Washington, DC: National Institute of Justice, Office of Justice Programs, U.S. Department of Justice; 1997.
  • Mayer GR. Preventing antisocial behavior in the schools. Journal of Applied Behavior Analysis. 1995;28:467–478.
  • Metzler CW, Biglan A, Rusby JC, Sprague JR. Evaluation of a comprehensive behavior management program to improve school-wide positive behavior support. Education and Treatment of Children. 2001;24:448–479.
  • Nansel TR, Overpeck M, Pilla RS, Ruan J, Simons-Morton B, Scheidt P. Bullying behaviors among US youth: Prevalence and association with psychosocial adjustment. Journal of the American Medical Association. 2001;285:2094–2100.
  • Nich C, Carroll K. Now you see it, now you don’t: A comparison of traditional versus random-effects regression models in the analysis of longitudinal follow-up data from a clinical trial. Journal of Consulting and Clinical Psychology. 1997;65:252–261.
  • O’Donnell J, Hawkins JD, Catalano RF, Abbott RD, Day LE. Seattle Social Development Project: Preventing delinquency among low-income children. Prevention Researcher. 1997;4:7–9.
  • Oregon Department of Education. School and district report cards. 2009 Retrieved from http://www.ode.state.or.us/data/reportcard/reports.aspx.
  • Oswald K, Safran S, Johanson G. Preventing trouble: Making schools safer places using positive behavior supports. Education and Treatment of Children. 2005;28:265–278.
  • Petrosino A, Guckenburg S, DeVoe J, Hanson T. What characteristics of bullying, bullying victims, and schools are associated with increased reporting of bullying to school officials? (Issues & Answers Report, REL 2010-No. 092) Washington, DC: U.S. Department of Education, Institute of Education Sciences; 2010. Retrieved from http://ies.ed.gov/ncee/edlabs.
  • Richards MH, Miller BV, O’Donnell PC, Wasserman MS, Colder C. Parental monitoring mediates the effects of age and sex on problem behaviors among African American urban young adolescents. Journal of Youth and Adolescence. 2004;33:221–233.
  • Roeser RW, Eccles JS. Adolescents’ perceptions of middle school: Relation to longitudinal changes in academic and psychological adjustment. Journal of Research on Adolescence. 1998;8:123–158.
  • Roeser RW, Eccles JS, Sameroff AJ. Academic and emotional functioning in early adolescence: Longitudinal relations, patterns, and prediction by experience in middle school. Development and Psychopathology. 1998;10:321–352.
  • Rusby JC, Forrester KK, Biglan A, Metzler CW. Relationships between peer harassment and adolescent problem behaviors. Journal of Early Adolescence. 2005;25:453–477.
  • Rusby JC, Foster EM, Taylor TK. Associations of classroom structure and behavior management practices with at-risk student behavior in first grade. In: Molina DH, editor. School psychology: 21st century issues and challenges. Hauppauge, NY: Nova Science Publishers, Inc; 2008.
  • Rusby JC, Taylor T, Milchak C. Assessing school settings: Interactions of students and teachers (ASSIST) observation system. Eugene, OR: Oregon Research Institute; 2001. Unpublished manual.
  • Rutter M, Maughan B, Mortimore P, Ouston J, Smith A. Fifteen thousand hours: Secondary schools and their effect on children. Cambridge: Harvard University Press; 1979.
  • Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin. 1979;86:420–428.
  • Singer JD, Willett JB. Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press; 2003.
  • Snyder J, Reid J, Stoolmiller M, Howe G, Brown H, Dagen G, et al. The role of behavior observation in measurement systems for randomized prevention trials. Prevention Science. 2006;7:43–56.
  • Sprague J, Golly A. Best behavior: Building positive behavior support in schools. Longmont, CO: Sopris West Educational Services; 2004.
  • Sprague J, Horner R. School-wide positive behavioral support. In: Jimerson SR, Furlong MJ, editors. Handbook of school violence and school safety: From research to practice. Mahwah, NJ: Lawrence Erlbaum Associates; 2007.
  • Sprague J, Nishioka V. Skills for success: A three-tiered approach to positive behavior supports. Impact. 2004;16(3):16–17. 35.
  • Sprague J, Walker H, Golly A, White K, Myers DR, Shannon T. Translating research into effective practice: The effects of a universal staff and student intervention on indicators of discipline and school safety. Education and Treatment of Children. 2001;24:495–511.
  • Stoolmiller M. Antisocial behavior, delinquent peer association, and unsupervised wandering for boys: Growth and change from childhood to early adolescence. Multivariate Behavioral Research. 1994;29:263–288.
  • Sugai G, Horner R. The evolution of discipline practices: School-wide positive behavior supports. Child and Family Behavior Therapy. 2002;24:23–50.
  • Sugai G, Lewis-Palmer T, Todd A, Horner R. School-wide Evaluation Tool version 2.1, June 2005. Eugene, OR: Educational and Community Supports, University of Oregon; 2001.
  • Walker HM, Colvin G, Ramsey E. Antisocial behavior in school: Strategies and best practices. Pacific Grove, CA: Brooks/Cole; 1995.
  • Walker HM, Ramsey E, Gresham FM. Antisocial behavior in school: Evidence-based practices. 2nd ed. Belmont, CA: Wadsworth/Thomson Learning; 2004.
  • Way N, Reddy R, Rhodes J. Students’ perceptions of school climate during the middle school years: Associations with trajectories of psychological and behavioral adjustment. American Journal of Community Psychology. 2007;40:194–213.
  • Wilson DB, Gottfredson DC, Najaka SS. School-based prevention of problem behaviors: A meta-analysis. Journal of Quantitative Criminology. 2001;17:247–272.

School Cafeteria Food Essay


Have you ever tasted school cafeteria food? I don’t think you would want to. In school storybooks, do characters ever say that the food at the school cafeteria tasted good? Nope. Why is this? Cafeteria food is often cheap, bought in bulk, high in calories, low in nutrition, and microwaved. Student polls and opinions bear this out. This leads to a suggestion: healthier, tastier foods and a better, more advanced lunch system should be implemented. First of all, students aren’t motivated to eat unhealthy, unappetizing food. If you observe students buying lunch in the cafeteria, you don’t often see them choosing foods like these: burritos (which are just beans wrapped in tortillas) or “burgers” (meat slapped on two…

A simple salad bar and healthier, tastier foods (such as soup, sandwiches, pasta, etc.) would be the key to flourishing and happy or happier students. On the other hand, you may be thinking, “$3.75-$4.75 is cheap for a school lunch”, “Who would pay for it?”, or “If it’s good enough for school standards, then it MUST be healthy”. $3.75-$4.75 is not cheap with the economy failing and families having to stretch their dollars. About a year ago, the lunch was $2.50. If a student bought a $4.75 lunch every day for a month, about thirty days, the total would be $142.50. Multiply that by twelve months and it is $1,710 a YEAR. Compared to the $900 a year with the $2.50 meal, lunch prices ARE pretty darn expensive. Secondly, if enhanced foods and lunch system were implemented, some might say it would take a huge amount of money to actually start the project. However, if tax dollars are enough to pay for billion dollar roads, then it’s enough to start a respectable lunch system. A portion of the tax dollars could be used instead of it all going to highway and road construction. Finally, the food may fit school standards, but it may not be such a good choice. Most people eat 2,000 to 2,500 calories a day, depending on their needs. How healthy could a tortilla with beans, or a piece of meat slapped between two slices of bread, be? Not to mention the amounts of saturated fat, trans fat, and salt! If you checked the
