Open access

Developing and Assessing Academic and Professional Writing Skills


Edited By Susanne Göpferich and Imke Neumann

Academic literacy used to be considered a complex set of skills that develop automatically as a by-product of academic socialization. Since the Bologna Reform with its shorter degree programmes, however, it has been realized that these skills need to be fostered actively. Simultaneously, writing skills development at all levels of education has been faced with the challenge of increasingly multilingual and multicultural groups of pupils and students. This book addresses the questions of how both academic and professional writing skills can be fostered under these conditions and how the development of writing skills can be measured.

Assessing writing in vocational education and training schools: Results from an intervention study

Liana Konstantinidou, Joachim Hoefele and Otto Kruse (Zurich University of Applied Sciences, Switzerland)



English: In this paper, we describe the assessment of writing in vocational education and training (VET) schools within the scope of an intervention study carried out in the German-speaking part of Switzerland. In literacy education within VET schools, the diversity of students’ linguistic backgrounds is not sufficiently taken into consideration. To address this gap, a teaching approach based on process-oriented writing instruction incorporating both German-as-L1 and German-as-L2 approaches was developed and evaluated. The paper focuses on the instruments used for the assessment of students’ writing competence at three points in time and describes the writing test as well as the scoring procedures applied. Reflections on the quality of the writing competence scale (Cronbach’s alpha: .82–.85) and some of the main results, showing a significant increase in writing competence in the experimental group compared to the control group, are presented alongside a discussion of the potential value of our assessment procedures.

German (translated): This contribution deals with the assessment of writing competence within the scope of an intervention study at vocational education and training schools in German-speaking Switzerland. General education classes at these schools include a high proportion of learners with multilingual backgrounds and unconventional language biographies, which is, however, given little consideration in the fostering of language and writing competence. For this reason, a concept of process-oriented writing pedagogy was developed and evaluated that combines first- and second-language instruction with the fostering of writing and language skills. The article concentrates on a presentation of the instruments used to assess and score writing competence, including a description of the writing tests and the scoring procedure, as well as on reflections on the quality of the writing competence scale (Cronbach’s alpha: .82–.85). The main results, which show a significant increase in the writing competence of the experimental group compared to the control group immediately after the intervention, are presented before the article closes with a discussion of the results.

1    Introduction

Literacy development in vocational education and training (VET) schools is a field that has only recently attracted research (Becker-Mrotzek/Kusch/Wehnert 2006; Efing 2008), in spite of the high value placed on the dual education system in German-speaking countries. In Switzerland, more than 65% of all students annually enter one of the 2- to 4-year VET programmes, in which they spend between one and two days1 a week in school and the remainder at work (SBFI 2015: 4 ff.). In today’s digital and knowledge-based society, reading and writing skills are becoming increasingly important in VET and in professional life (Müller 2003; Jakobs 2008). In 2010, the Swiss Business Federation published the results of a survey of 771 Swiss enterprises of various sizes representing numerous business sectors. Participating businesses reported that apprentices showed a general lack of mathematical and, especially, language skills at the beginning of their training. This concerned not only apprentices with a migration background, but also native VET students (economiesuisse 2010: 3 ff.).

For the majority of VET students, reading and writing are complex cognitive and linguistic challenges that they do not always master successfully (Nodari 2002: 11). Although studies focusing on VET schools are not available, we must assume that a sizeable number of VET students, especially those in apprenticeships that are less demanding regarding school achievement, come close to what Shaughnessy (1977) called “basic writers”. They are little acquainted with the formal requirements and conventions of Swiss standard German and unaccustomed to expressing their ideas in written form. They are not overly motivated to cope with language-related tasks and have often been unsuccessful writers in their previous educational experience. As scholastic achievement is one of the main selection criteria for the apprenticeships that students choose, we must also assume that those students gaining access to apprenticeships that are more demanding with respect to language skills usually have a higher proficiency in German.

It is more than motivation and the kind of apprenticeship, however, that explains variation in language skills. Many VET students come from families with a modest level of education and/or a migration background. Adolescents with an immigrant background have usually grown up in multilingual environments, using several languages simultaneously and learning German as a second or third language. An additional challenge is that both native and migrant-background students must cope with diglossia in Switzerland: there is a large gap between the written language and the local dialects, whose lexis, grammar and idioms differ markedly from Swiss standard German (Ammon/Bickel/Ebner 2004; Dürscheid/Giger 2010; Dürscheid/Elspass/Ziegler 2011). Today, local Swiss dialects are also preferred in informal written communication such as SMS and e-mail. As local Swiss dialects are not standardized with respect to grammar and spelling, young people are not accustomed to regulated language usage and are not overly motivated to comply with the norms of Swiss standard German (Sieber 2013). In spite of the linguistic diversity in VET schools in German-speaking Switzerland, the teaching of German/literacy still follows approaches designed for monolingual groups of native speakers, and teachers are trained to teach classes consisting purely of mother-tongue students (Belke 2001: 2), without accounting for their pervasive multilingualism.

At Swiss VET schools, the teaching of language and communication is integrated into a multidisciplinary subject called ABU “allgemeinbildender Unterricht”, which can be translated as “general education” in contrast to vocational education subjects (BBT 2006). ABU allows teachers to focus on a wide range of subjects of general interest such as economics (e.g., dealing with money), civics (e.g., voting), or law (e.g., signing a contract). The teaching of German/literacy is integrated into these fields of content. Even though school curricula include instructions on the relationship between language/literacy teaching and the teaching of general education content, teachers are quite free to create their own teaching units into which they can integrate language instruction and the teaching of writing.

The assessment procedures reported on in this paper are part of an intervention study in which a new way of teaching writing in VET schools was introduced. The basic philosophy behind the teaching method we suggested was a process approach, largely following the instructional model developed in American secondary and higher education in the 1970s (cf. Pritchard/Honeycutt 2006) and later adopted in other countries (cf. Ruhmann/Kruse 2014). This approach was integrated with an L2 approach to writing to address the lack of essential linguistic skills in Swiss standard German that holds true for the majority of students (both native and non-native).

To explain the decisions that led to the construction of our measurement techniques, we will briefly review the literacy education situation in Swiss vocational schools and describe the nature of the intervention we conducted. This is followed by a description of the writing test and the scoring procedures. We then introduce the writing competence scale and detail some of the major results of the intervention. Finally, we show that the changes in the teaching of writing resulting from our intervention are well reflected in the writing competence scale, thus demonstrating it to be a valid instrument for the assessment of writing skills.

2    Process-oriented writing between L1 and L2 in vocational schools

In our project, we were looking for ways to introduce a process approach to writing in VET schools and add approaches from the teaching of German as an L2 in order to more effectively respond to the multilingual realities of the student population.

The process approach revolutionized the teaching of writing in educational contexts in the United States after Emig’s (1971) study The Composing Process of Twelfth Graders. It offered a new perspective on writing by drawing attention away from the idealized prescriptions on what good texts are and directing it to the thinking process of the writer instead. Writing has since been seen as a way of generating text through several interrelated activities which are not processed in a linear way, as in earlier models of the writing process, but recursively. This means that writers tend to rethink their ideas repeatedly and proceed in several loops before they decide on the final form of the text. This paradigm shift was the starting point for research looking at what really happens during writing and is still seen as one of the cornerstones of writing pedagogy (Anson 2014; Ruhmann/Kruse 2014).

From the 1980s on, the process approach was accompanied by cognitive models of writing, initiated by Hayes & Flower (1980), Flower & Hayes (1980; 1981), Bereiter (1980), Bereiter & Scardamalia (1987), and others, that view writing as a series of cognitive activities such as idea generation, structuring ideas, planning, proposing ideas, and translating ideas into word strings. This approach placed a strong emphasis on problem-solving activities and writing strategies. Recently, several new elements have been absorbed into the process approach, the most important being peer feedback, collaborative writing and reflective techniques (Pritchard/Honeycutt 2006; Ruhmann/Kruse 2014).

Unfortunately, the shift from product to process has resulted in an increased reluctance towards the teaching of language in the writing class. In cognitive process models (in the tradition of Hayes/Flower 1980), linguistic activity is viewed as a means of translating thought into text but not as a part of thinking, idea generation and meaning making. All essential processes are thought to be cognitive or computational in nature, so that there is no particular need to pay attention to the teaching of language for writers. As a result, language support has almost disappeared from the teaching of writing. If language support is given to writers, it is likely to happen in courses offered specifically for second-language writers who are seen as needing language skills (see, for instance, the work of Swales 1990; 2004; Hyland 2000; 2011).

Recently, it has become clear that language instruction has to find its way back into the teaching of writing in L1 classes (Steinhoff 2007; Pohl 2007; Locke 2010; Myhill 2010; 2012; Feilke 2012; 2014; Anson 2014). Hyland (2011: 24) expresses this need most emphatically when he claims that

“teachers of writing clearly need to be teachers of language, as it is an ability to exercise linguistic choices in the ways they treat and organize their topics for particular readers […]. A knowledge of grammar, focusing on how students can modify meanings in distinct and recognizable ways becomes central to learning to write.”

Hyland uses the term grammar in a broad sense here, referring to the linguistic means necessary for text production. In the case of VET schools, his suggestion seems particularly relevant, as students with multilingual backgrounds may lack the essential linguistic means for their task of learning, for example, business communication. In our approach, we drew on teaching units from German as an L2 to support the development of students’ language skills, and provided the students with a variety of focused linguistic elements that they could use when formulating or revising text. Materials (e.g., reading assignments) to prepare writing tasks were also handed out to the students to instruct them on the writing processes/routines, which also include linguistic forms and rhetorical elements. We followed the idea of Feilke & Lehnen (2012) that learning to write means building routines when using rhetorical and structural elements for expressing ideas rather than simply ‘running’ cognitive functions as Hayes & Flower (1980) suggest. For this approach, it is important to focus on key linguistic elements that are needed to solve a task, and allow students to develop routines during writing and revising.

The approach we developed was based on the following guidelines:

    As a basis, we chose a procedure from the process tradition (Anson 2014; Ruhmann/Kruse 2014) and deconstructed the writing process into distinct actions that can be connected individually in the development of a unique strategy.

    We placed a strong emphasis on social activities and collaborative learning to make writing a rewarding group enterprise.

    We took care to create authentic prompts which connected writing to real-life situations of the writers.

    From second-language learning, we added the principle of preparation (“Vorentlastung”) in which reading activities were presented before the actual writing task.

    We followed a scaffolding strategy (Gibbons 2002) by accompanying the writing process with support for the linguistic tasks that had to be solved.

    We connected oral and written activities to create synergies between both modes of text production and reception.

We developed this teaching strategy together with the regular teachers, thereby allowing them to integrate the new curriculum into the requirements of their existing teaching plans and their individual teaching styles. The teachers were introduced to the basic philosophy of our research approach and then co-constructed the new teaching units for all stages of the curriculum. Altogether, we developed three major teaching units which covered 16 teaching hours (the equivalent of 33% of the overall time available for ABU teaching in a semester). The teachers’ experience with and knowledge of the target group were very important for the implementation of our concept into teaching practice.

The three basic units were constructed as open scenarios in which writing fulfilled a core function, but was seen as a means of reaching the goals of an overarching activity. There were three such scenarios that shared a common general structure.

In the first scenario, students were asked to produce portraits of each other – to be printed in a brochure along with pictures – that would be presented to their families and at their workplaces. In the second scenario, students were asked to write a letter of complaint responding to an excessive bill from their mobile phone provider. In the third scenario, students were requested to write a letter to a real company that they would actually visit on a class excursion. These letters were written individually and then the best from each class was selected to be sent to the company. The best letter from all the classes was awarded a prize of 500 Swiss francs, which could be spent during the excursion.

Each of these tasks required many classroom hours to prepare, carry out and evaluate. The main recurring elements were:

    presenting the task

    discussing the problems involved

    providing materials to solve the problems (techniques, preparatory texts, search operations, etc.)

    integrating ‘focus on language’ exercises into the writing process which raise awareness of language skills and their importance for the subsequent writing or prewriting phase

    initiating reflection on text procedures that take place in writing activities

    generating and structuring ideas, draft writing

    peer feedback with subsequent revision

    writing the final version

There was frequent alternation between individual and group work, and each paper went through several cycles of consideration before students settled on the final version. In this way, participants were led through stages of generating ideas, planning, discussing and revising several times.2

3    Study design and results

The intervention study took place in 18 different classes in three VET schools in Switzerland (N = 318). Nine of these classes received regular ABU teaching (control group) while nine classes received a special treatment of process-based teaching of writing (experimental group). Teaching in the control group followed the usual principles of general education classes in VET schools connected with several writing assignments which were graded and aimed at supporting writing development. The general study design can be described as a pre-test/post-test/control group design with follow-up. Experimental and control groups were compared at the beginning of the intervention, at the end of the intervention, and after a follow-up period of 3 months. The test procedures and assessments were not part of the teaching assignments, but were carried out on separate occasions and in an identical manner for both groups. Testing took place in August 2013 (pre-test), in February 2014 (post-test), and in June 2014 (follow-up test).

Because of school circumstances (practical and administrative issues), classes (natural groups) rather than students were randomly assigned to the experimental and control group respectively (quasi-experimental study). In order to avoid selection effects or major differences between the intervention and the comparison group that might explain differences (or lack of differences) in the post-test and follow-up comparisons, the project team created a control group that matched the experimental group with respect to the apprenticeships included in the sampling. In this manner, six different apprenticeships (hairdressers, information/documentation assistants, lab assistants, logisticians, multi-skilled mechanics and painters) from one group of classes were assigned to the experimental group, and those from their parallel classes at the same school were assigned to the control group.3 Furthermore, variables that might confound the results, such as gender, immigration and linguistic background, were taken into consideration (Bortz/Döring 2006: 524 ff.).

As mentioned above, the participating teachers of the experimental group were introduced to process-oriented teaching of writing in both L1 and L2 situations and prepared to participate in the intervention study (as mediators). In cooperation with them, a new writing curriculum including three extended writing tasks based on the scenarios mentioned above was developed and tested against a traditional curriculum. The main research question concerned differences between the students in the experimental group and the control group regarding their writing competence immediately after and four months after the intervention. It was assumed that immediately following the intervention (and four months later) students in the experimental group would show better writing skills than students in the control group and that they would organize, plan and revise their writing better. Students in the experimental group were also expected to have more positive attitudes towards writing and be more self-confident, self-regulated and motivated writers.4

3.1   The writing test

There are several possible reference points for assessing writing skills at VET schools (Weigle 2002). A basic requirement of educational research is to arrive at methodologically sound estimations of students’ abilities at all levels of educational systems. Very little, however, is known about the level of literacy skills to be expected from students in VET schools, in which students from very heterogeneous professional fields (ranging from multi-skilled mechanics and information/documentation professionals to hairdressers and logisticians) are educated and where the teaching of writing is not as clearly defined as in primary education.

A more reliable reference point for the assessment of writing is the teaching situation and the literacy curriculum running in the background, no matter how ill-defined it may be. Connecting assessment with teaching may also lead to better instruction. In turn, better instruction can provide information regarding the validity of scales with respect to their sensitivity to the kind of change our intervention leads to. In order to estimate the impact of the new teaching strategy, we developed assessment tools that reflect both the particular level of the vocational students and the nature of the intervention.

The writing test was developed in three stages: design, operationalization and administration (Bachmann/Palmer 1996, quoted in Weigle 2002: 77). During the design stage, the general curriculum (BBT 2006) and the curricula of the participating schools were studied in order to determine what kind of texts students are required to write in the first year of VET. Furthermore, a focus-group discussion with VET students allowed us to identify their writing needs in everyday environments outside school. We expected this procedure to lead to authentic writing prompts connected to the students’ real-world concerns. We selected a persuasive writing/argumentation task (Becker-Mrotzek/Böttcher 2012: 218 ff.) which required literacy skills that came closest to the kind we intended to measure. Students were asked to compose formal letters (motivational letters) in which they had to argue for a cause and convince the intended audience to support it.

The writing test consisted of three different authentic prompts that were specified in cooperation with the teachers in the operationalization stage. The selection of authentic prompts was intended to arouse the interest of the test-takers (interactiveness). The prompts were as follows: (1) at the pre-test stage, students were to write a letter to the school management in order to win a school contest for a four-day stay for the whole class in the French-speaking part of Switzerland as a prize; (2) in the post-test, the recipient of the motivational letter was the education office, which finances vocational students’ sojourns in Ticino; and (3) in the follow-up test, the letter had to be directed to the students’ workplace mentor, who had to be persuaded to grant a paid leave for the participation in a school sojourn in the Italian-speaking part of Switzerland. Formal letters reflect the multi-dimensional nature of writing and are conventionalized enough to allow for the development of clear rating criteria, which contribute to the validity of the instrument (see Section 3.2).

The writing test was presented as a booklet of eight pages including prewriting pages (planning and draft pages) as well as the criteria for the text evaluation. As is common in test situations at VET schools, all texts were written by hand directly in the booklet provided. To integrate process-oriented writing elements into the assessment, as in the National Assessment of Educational Progress (NAEP 2007), students were given a substantial amount of time for completing the task and were invited to undertake planning and revising activities before starting to write (e.g., On this page you can make notes and plan your writing.). The writing time of each student was recorded.5

To prepare the main survey, the writing tasks were tested in a pilot study (administration stage) with 118 VET students from the same apprenticeships as those participating in the main study. The results of the pilot study (empirical item investigation, factor and reliability analysis, inter-rater reliability) led to changes and adaptations in the wording of the writing tasks and in the scoring criteria.6 Additionally, in order to confirm the validity of the measurement instrument, we asked the VET teachers to verify whether pilot study test scores (see Section 3.2) corresponded to the students’ general achievement in writing (Weigle 2002: 49). This result was very encouraging since teachers confirmed that test scores corresponded to the writing achievement for more than 80% of the students.

The writing test was administered by trained test administrators. A test administrator manual based on previous international studies (PISA, PIRLS, ICCS) described their roles and responsibilities and included instructions on the distribution of the student testing instruments according to student tracking forms, the supervision of the testing sessions and the recording of students’ participation (Schulz/Ainley/Fraillon 2011: 91).

3.2   Scoring procedures

The development of scoring criteria also formed part of the operationalization stage of the writing test’s development. For the scoring procedure, a code book was developed. It was based on previous German studies, including DESI (Harsch et al. 2007) and VERA (code book available online at IQB n.y.), the work of Becker-Mrotzek & Böttcher (2012) and Nussbaumer & Sieber’s (1994) Zurich Analysis Grid.

As the main scoring criteria, six rating scales (or subscales) were chosen, which were assigned to three main dimensions:

    Linguistic Competence: subscales Correctness and Style

    Genre Competence: subscales Formal Conventions and Structure/Coherence

    Pragmatic Competence: subscales Content and Communicative Impact

A limited number of scoring criteria (here: six) is suggested by Becker-Mrotzek & Böttcher (2012: 128) and Baurmann (2008: 133) because they can easily be adapted to writing tasks and the objectives of evaluation, which increases the practicality of the scoring procedures (Weigle 2002: 138).

Each text was rated as a whole using each of the six subscales. For the possible scores on each subscale, a range from 0 to 4 points was chosen with 4 points indicating complete mastery and 0, no mastery. Very short texts (under 50 words) were also rated with 0 (see Tables 1, 2 and 3). Mastery levels were defined a priori and are described in the tables. This allowed us to make inferences about test takers’ writing competence on an absolute scale level and not only in relation to other test takers (Weigle 2002: 125). The Writing Competence score is the sum of the six subscale scores and amounts to a maximum of 24 points. Raters used the full range of scores for all six rating scales except for the Formal Conventions scale, where none of the students was given zero points.
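The scoring arithmetic described above can be sketched as follows; the function name, the example ratings and the handling of the 50-word threshold are our illustration, not the study’s actual implementation:

```python
# Six subscales, each scored 0-4, summed to a Writing Competence
# score with a maximum of 24 points; texts under 50 words score 0.
SUBSCALES = ["Correctness", "Style", "Formal Conventions",
             "Structure/Coherence", "Content", "Communicative Impact"]

def writing_competence(ratings: dict, word_count: int) -> int:
    """Sum the six 0-4 subscale ratings into a 0-24 total score."""
    if word_count < 50:          # very short texts are rated 0 overall
        return 0
    assert set(ratings) == set(SUBSCALES)
    assert all(0 <= r <= 4 for r in ratings.values())
    return sum(ratings.values())

example = {s: 3 for s in SUBSCALES}   # a mid-range text, 3 on every subscale
print(writing_competence(example, word_count=180))  # -> 18
```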

The overall scoring strategy may be called ‘analytic’ as we decomposed the construct of Writing Competence into several distinct rating scales (Weigle 2002: 109 ff.). Each of the six rating scales themselves, however, may be characterized as a combination of analytic and holistic scoring, as some of them demand a holistic judgment while others are based on distinct criteria or the number of errors. While, for example, the Correctness and Formal Conventions rating scales are quite analytic and include error counting, the Communicative Impact rating scale is holistic, asking the rater for a judgment on the extent to which the text would convince the recipient.

Table 1:   ‘Linguistic Competence’ scoring criteria and descriptors (based on the DESI and VERA code books)



Table 2:   ‘Genre Competence’ scoring criteria and descriptors (based on the DESI and VERA code books)



Table 3:   ‘Pragmatic Competence’ scoring criteria and descriptors (based on the DESI and VERA code books) 7



To ensure reliability, all tests were scored independently by two raters. Test scoring took place after the study was completed. The three raters participating in the scoring procedures received intensive training in which each scoring criterion was discussed in detail and more than 30 randomly selected texts were scored jointly. The aim of the training was to achieve high agreement between the raters regarding their understanding of the scoring criteria. After training, texts were assigned to the three raters by VET class. A main rater was appointed for every class, but test versions (pre-test, post-test, follow-up test) and experimental/control conditions were permuted and assigned randomly to avoid rater bias.

While independent scoring was used in the pilot study,8 we preferred the method of consensus scoring for the main survey. In accordance with this method, raters score independently in a first step but subsequently have the opportunity to discuss their results on every criterion in rater teams (main rater A and rater B) in order to arrive at a consensus (Robinson 2000; NAEP 2008). In our study, in case of disagreement, the results of the main rater were counted in the project data set. In any case, situations where raters did not agree were rare (see the results of inter-rater reliability below).

The degree of agreement between the two raters provides a measure of the reliability of the scoring process (Schulz/Ainley/Fraillon 2011: 96). To measure inter-rater reliability, Cronbach’s alpha coefficient was calculated, which provides an estimate of the consistency of ratings across multiple judges (Stemler 2004). Because of the consensus scoring, a high inter-rater reliability was expected. This expectation was confirmed by the results, where Cronbach’s alpha ranged from .98 to 1.0 for all subscales, and the overall Writing Competence scale showed an almost optimal inter-rater reliability (Cronbach’s Alpha: .997).
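As a rough sketch of this computation (with invented ratings, treating the two raters as ‘items’ in the standard Cronbach’s alpha formula):

```python
import statistics

def cronbach_alpha(ratings):
    """Cronbach's alpha across raters: k/(k-1) * (1 - sum of per-rater
    variances / variance of the per-text rating sums).
    `ratings` is a list of per-rater score lists over the same texts."""
    k = len(ratings)                                  # number of raters
    totals = [sum(col) for col in zip(*ratings)]      # per-text sum of ratings
    item_vars = sum(statistics.pvariance(r) for r in ratings)
    total_var = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Two raters scoring the same five texts on the 0-24 scale (invented data):
rater_a = [12, 18, 7, 21, 15]
rater_b = [13, 18, 8, 20, 15]
print(round(cronbach_alpha([rater_a, rater_b]), 3))   # -> 0.993
```

Near-identical rating columns, as produced by consensus scoring, drive the coefficient towards 1, which is why values close to the reported .997 are expected here.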

Examples of low, medium and high writing competence of VET students are presented below. The three texts were produced by students from the same VET class and were collected in the post-test phase.





Fig. 1:   Examples of low, medium and high writing competence of VET students

3.3    The writing competence scale

As mentioned above, the writing competence scale was defined as the sum of the six rating scales and thus ranges from 0 to 24 points. To check the one-dimensionality of the scale, an exploratory factor analysis (principal component method with varimax rotation) was conducted for each of the test administrations (t0, t1 and t2) with the six rating scales as single items. The suitability of the data for this analysis was confirmed by the results of the KMO measure (Kaiser-Meyer-Olkin Measure of Sampling Adequacy: approx. .82 for all three administrations) and Bartlett’s test (p < .001). Based on Kaiser’s rule, which drops all factors with eigenvalues under 1.0 (Bühner 2006: 200), a single factor explaining 57 to 60% of the total variance across the three administrations was selected. The consistency of the constructed scale was confirmed by a reliability analysis, in which subjects with missing values on one of the items/rating subscales were excluded. The result of this analysis (Cronbach’s Alpha: .82–.85 for the three administrations) indicates a highly reliable scale.
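The factor-extraction step can be illustrated with synthetic data; the one-factor data-generating model and all numbers below are our assumption, not the study’s data:

```python
import numpy as np

def kaiser_factors(scores):
    """Eigenvalues of the correlation matrix of item scores; Kaiser's
    rule keeps factors with eigenvalue > 1.0.  `scores` is an
    (n_subjects x n_items) array, e.g. the six subscale ratings."""
    corr = np.corrcoef(scores, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]     # descending eigenvalues
    kept = eig[eig > 1.0]
    explained = kept.sum() / len(eig)                 # share of variance explained
    return kept, explained

# Synthetic data: six subscales driven by one common 'competence' factor.
rng = np.random.default_rng(0)
g = rng.normal(size=(100, 1))                         # latent factor, 100 subjects
scores = g + 0.8 * rng.normal(size=(100, 6))          # six noisy indicators
kept, explained = kaiser_factors(scores)
print(len(kept), round(explained, 2))                 # one factor retained
```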

Tables 4 to 6 show the main results of the factor and reliability analyses. Five of the six items/rating subscales have very good psychometric properties and load highly on the main factor, which supports the interpretation of the sum of scores as the Writing Competence scale. The subscales Communicative Impact and Structure/Coherence are the most representative of the overall assessment score.

Table 4:   Writing Competence scale t0 (pre-test)


Both the low correlation between the Formal Conventions score and the Writing Competence score (corrected item-total correlation: .26–.35) and the rise in Cronbach’s Alpha when the Formal Conventions subscale is deleted from the analysis (.87–.88) indicate that the Formal Conventions score does not discriminate sufficiently between writers with high and low writing competence (Bortz/Döring 2006: 219). In other words, it is possible for participants who master the formal conventions of the genre to receive low scores on the Writing Competence scale and vice versa. Similar results were obtained in the pilot study. In spite of this, we decided to keep the item because the formal conventions of letters belong to the curriculum of general education classes in VET schools and students are intensively trained to comply with them. In addition, formal conventions, as an element of genre knowledge, are an important component of writing competence according to theoretical writing models (e.g., Swales 1990; Russell 1997; Devitt 2004).
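The corrected item-total correlation used here correlates each subscale with the sum of the remaining five, so the item under scrutiny does not inflate its own criterion. A minimal sketch, with a hypothetical function name and illustrative data of our own:

```python
import numpy as np

def corrected_item_total(scores, item):
    """Correlate one item with the sum of the *other* items, so the item
    being checked does not contribute to its own criterion."""
    scores = np.asarray(scores, dtype=float)
    rest = np.delete(scores, item, axis=1).sum(axis=1)
    return np.corrcoef(scores[:, item], rest)[0, 1]

# Hypothetical ratings: three subscales for four texts.
ratings = [[1, 2, 3], [2, 3, 4], [3, 4, 5], [0, 1, 2]]
print(round(corrected_item_total(ratings, 0), 2))  # → 1.0
```

A value as low as the .26–.35 reported for Formal Conventions signals that the item tracks the rest of the scale only weakly.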

Table 5:   Writing Competence scale t1 (post-test)


Table 6:   Writing Competence scale t2 (follow-up test)



Having confirmed the quality of the scale, we turn to the distribution of scores across the three test administrations. Table 7 shows descriptive statistics for the Writing Competence scale for both groups (experimental and control) combined. In the pre-test, VET students had an average score of 12.09 (SD: 3.7). Their performance was slightly better in the post-test and follow-up test, while score variation remained almost equally high at all three time points (SD: 3.7–3.9). In the pre-test, VET students’ scores ranged from 1 to 24; in the post- and follow-up tests, none of the students achieved fewer than 3 points. In the follow-up test, no students achieved the maximum score. Skewness and kurtosis give an impression of the scale’s distribution. While in the pre-test the Writing Competence scale is slightly heavy-tailed, in the post- and follow-up tests its distribution barely differs from the normal distribution (see Table 7).

Table 7:   Writing Competence Scale – descriptive statistics for experimental and control groups combined


For a comparison of scores on the Writing Competence scale between the experimental and the control groups over time, a mixed between-within subjects analysis of variance was conducted. The results show a significant main effect for time, F (2, 222) = 16.005, p = .000, η2 partial = .07, with both groups of students showing an increase in Writing Competence scores across the three time points (see Fig. 2). The main effect for group was also significant, F (1, 222) = 23.169, p = .000, η2 partial = .10, indicating a difference in Writing Competence scores between the experimental and the control groups. The effect size is moderate (see Fig. 2).


Fig. 2:   Intervention effects on Writing Competence

A one-way repeated measures ANOVA conducted for each group shows significant differences in the Writing Competence scale values of the experimental group over time (F (2, 118) = 20.956, p = .000) and no change in the Writing Competence of the control group (F (2, 103) = 2.567, p > .05). Follow-up comparisons (Bonferroni) for the experimental group show a significant increase in scores not only between the first (M: 12.90, SD: 3.8) and the second (M: 14.25, SD: 3.60) time point, but also between the first and the third (M: 14.50, SD: 3.58). This suggests that the didactic concept had a positive impact on VET students’ writing competence and provides evidence of long-term retention of the learning effect.
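Bonferroni follow-ups of this kind amount to paired t-tests whose p-values are multiplied by the number of comparisons. The sketch below uses SciPy on simulated scores; the data are artificial and merely mimic the reported group means, not the study’s raw data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical scores for one group of 60 students at three time points,
# with gains of roughly 1.3 and 1.6 points over the pre-test.
t0 = rng.normal(12.9, 3.8, 60)
t1 = t0 + rng.normal(1.3, 2.0, 60)
t2 = t0 + rng.normal(1.6, 2.0, 60)

pairs = {"t0 vs t1": (t0, t1), "t0 vs t2": (t0, t2), "t1 vs t2": (t1, t2)}
for label, (a, b) in pairs.items():
    t, p = stats.ttest_rel(a, b)
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
    print(label, round(p_adj, 4))
```

The correction simply caps each adjusted p-value at 1 after multiplying by the three comparisons made.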

Fig. 2 also shows that the experimental and control groups were not comparable regarding their pre-treatment writing ability (EG M: 12.90, SD: 3.88 vs. CG M: 11.50, SD: 3.54). For this reason, but also because pre-treatment ability can be a strong or even the sole predictor of post-treatment ability (Vanhove 2015), we used the pre-test scores as a covariate. The results of the analysis of covariance (ANCOVA) indicate a strong relationship between the pre-intervention and post-intervention scores on Writing Competence (F (1, 222) = 228.632, p = .000, η2 partial = .51). However, the group effect remains significant (F (1, 222) = 25.200, p = .000, η2 partial = .10).9
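An ANCOVA of this kind can be framed as a comparison of two regression models, with and without the group term. The sketch below computes the F statistic for the group effect from the residual sums of squares; the helper function and the data are our own illustration, not the study’s procedure or scores:

```python
import numpy as np

def group_F(pre, post, group):
    """F statistic for a group effect on post-test scores with the
    pre-test score as covariate, via least-squares model comparison."""
    pre, post, group = (np.asarray(a, dtype=float) for a in (pre, post, group))
    n = len(post)
    full = np.column_stack([np.ones(n), pre, group])  # covariate + group
    reduced = np.column_stack([np.ones(n), pre])      # covariate only
    rss = lambda X: np.sum((post - X @ np.linalg.lstsq(X, post, rcond=None)[0]) ** 2)
    df_resid = n - full.shape[1]
    return (rss(reduced) - rss(full)) / (rss(full) / df_resid)

# Hypothetical data: 60 students, a group effect of 1.5 points on post-test.
rng = np.random.default_rng(1)
pre = rng.normal(12, 3.8, 60)
group = np.repeat([0, 1], 30)
post = 0.7 * pre + 1.5 * group + rng.normal(0, 1.0, 60)
print(round(group_F(pre, post, group), 1))
```

Because the pre-test explains most of the post-test variance, removing it first isolates the treatment effect, which is why the group F here (and in the study) remains significant after the covariate is controlled.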

The quality of a writing test can also be measured by its impact on the individuals involved (especially test-takers and teachers) and the immediate educational system (Weigle 2002: 53). The dissemination of the study results was, therefore, particularly important. After the study, we gave individual feedback to the participating students on the development of their writing competence over time. Additionally, we organized a conference with the participating teachers, stakeholders and experts in order to evaluate the project. The results were beneficial to all involved and led to recommendations for a revision of the writing curricula in VET schools.

4    Conclusion and recommendations

Issues of assessment can teach us many things about our subjects and the educational contexts. The reliability scores of our measurement procedures show that the kind of literacy we aimed to study appeared to be a consistent construct when relevant scales were developed, as was the case with our six subscales of Correctness, Style, Formal Conventions, Structure/Coherence, Content and Communicative Impact, which can be assigned to the three competence fields Linguistic Competence, Genre Competence and Pragmatic Competence.

Factor analysis has also shown that these scales may be seen as dimensions of a strong main factor, which we interpret as ‘general writing competence’ and which gives the measured construct consistency and internal validity. The validity of our scales is also supported by the results regarding the impact of the intervention. The change in the teaching of writing was reflected in higher scores for writing competence. We may, therefore, conclude with confidence that the scales developed are sensitive to different modes of teaching and enable differentiation among writing performances even at a very basic skills level.

Respecting the specificity of literacy with regard to educational and linguistic contexts is an essential prerequisite for successful measurement. Adaptation to context must be reflected in scale definitions and scoring procedures. Although we believe that the competence fields (Linguistic Competence, Genre Competence and Pragmatic Competence) can be reproduced in many contexts, their measurement must be adapted not only to the achievement level, but also to the task dimension of the literacy field in question with respect to its communicative, textual and educational characteristics. The more advanced students are, the easier it should be to develop scales with enough variability to discriminate between levels of ability, simply because texts are longer and skills more differentiated. What we were able to show in this study, however, is that discrimination is also possible at very basic competence levels.

From its very beginnings, our study was not primarily aimed at demonstrating that process orientation is superior to conventional approaches to the teaching of writing; rather, it was designed to address a very practical question of teaching in a field that is practically untouched by research. Teaching writing in VET schools is often regarded as a field of pedagogy where deficits from earlier education prove to be obstacles to substantial progress, and where teachers often resign themselves to the many mistakes their students make even in very short texts. What we have shown is that progress is possible and that a close look at literacy development in this group reveals subtle differences in the kinds of skills that literacy comprises at this early stage and in this particular field of education.

Our research has been highly appreciated by both teachers and administrators. In addition to our cooperative approach (creating the writing assignments together with the teachers), one of the main reasons for this acceptance was an assessment methodology that resembles examination situations in school (the test situation) and uses rating scales which can pick up relevant dimensions in any kind of text production. The scales allow teachers to see where deficits in literacy development are located and which types of linguistic, genre-related and pragmatic competence are missing. For this reason, our measurement approach facilitates the improvement of educational practice and the introduction of change in the teaching of writing on the basis of evidence which teachers consider consistent and well-justified.


Ammon, Ulrich / Bickel, Hans / Ebner, Jakob, 2004: Variantenwörterbuch des Deutschen: Die Standardsprache in Österreich, der Schweiz und Deutschland sowie in Liechtenstein, Luxemburg, Ostbelgien und Südtirol. Berlin: Walter de Gruyter.

Anson, Chris M., 2014: “Writing, language, and literacy.” In: Tate, Gary / Rupiper Taggart, Amy / Schick, Kurt / Hessler, Brooke H. (eds): A Guide to Composition Pedagogies. New York: Oxford University Press, 3–26.

Bachmann, Lyle F. / Palmer, Adrian S., 1996: Language Testing in Practice. Oxford: Oxford University Press.

Baurmann, Jürgen, 2008: Schreiben. Überarbeiten. Beurteilen. Ein Arbeitsbuch zur Schreibdidaktik. 3rd ed. Seelze-Velber: Kallmeyer.

BBT (Bundesamt für Berufsbildung und Technologie) (eds), 2006: Berufliche Grundbildung: Rahmenlehrplan für den allgemeinbildenden Unterricht. Bern: BBT.

Becker-Mrotzek, Michael / Böttcher, Ingrid, 2012: Schreibkompetenz entwickeln und beurteilen. 4th ed. Berlin: Cornelsen Verlag.

Becker-Mrotzek, Michael / Kusch, Erhard / Wehnert, Bernd, 2006: Leseförderung in der Berufsbildung. Duisburg: Gilles & Francke Verlag.

Bereiter, Carl / Scardamalia, Marlene, 1987: The Psychology of Written Composition. Hillsdale (NJ): Lawrence Erlbaum.

Bereiter, Carl, 1980: “Development in writing.” In: Gregg, Lee W. / Steinberg, Erwin R. (eds): Cognitive Processes in Writing. Hillsdale (NJ): Erlbaum, 73–93.

Belke, Gerlind, 2001: Mehrsprachigkeit im Deutschunterricht. Sprachspiele, Spracherwerb, Sprachvermittlung. 2nd ed. Baltmannsweiler: Schneider Verlag Hohengehren.

Bühner, Markus, 2006: Einführung in die Test- und Fragebogenkonstruktion. 2nd ed. München: Pearson Studium.

Bortz, Jürgen / Döring, Nicola, 2006: Forschungsmethoden und Evaluation für Human- und Sozialwissenschaftler. 4th ed. Heidelberg: Springer.

Devitt, Amy J., 2004: Writing Genres. Carbondale: Southern Illinois Univ. Press.

Dürscheid, Christa / Giger, Nadio, 2010: “Variation in the case system of German – linguistic analysis and optimality theory.” In: Lenz, Alexandra N. / Plewnia, Albrecht (eds): Grammar between Norm and Variation. Frankfurt: Lang, 167–192.

Dürscheid, Christa / Elspass, Stephan / Ziegler, Arne, 2011: “Grammatische Variabilität im Gebrauchsstandard – das Projekt „Variantengrammatik des Standarddeutschen“.” In: Konopka, Marek / Kubczak, Jacquelin / Mair, Christian / Štícha, František / Waßner, Ulrich H. (eds): Grammar & Corpora/Grammatik und Korpora 2009. Tübingen: Narr, 123–140.

Economiesuisse, 2010: Volksschule: Fokus auf das Wesentliche. Zürich: economiesuisse.

Efing, Christian, 2008: „‚Aber was halt schon schwer war, war, wo wir es selber schreiben sollten.’ Defizite und Förderbedarf in der Schreibkompetenz hessischer Berufsschüler.“ In: Jakobs, Eva-Maria / Lehnen, Katrin (eds): Berufliches Schreiben. Ausbildung, Training, Coaching. Frankfurt/M. etc.: Peter Lang, 17–34.

Emig, Janet, 1971: The Composing Process of Twelfth Graders. Urbana (IL): NCTE.

Feilke, Helmut / Lehnen, Katrin (eds), 2012: Schreib- und Textroutinen. Theorie, Erwerb und didaktisch-mediale Modellierung. Frankfurt/M. etc.: Peter Lang.

Feilke, Helmuth, 2012: „Was sind Textroutinen? Zur Theorie und Methodik des Forschungsfeldes.“ In: Feilke, Helmuth / Lehnen, Katrin (eds): Schreib- und Textroutinen. Theorie, Erwerb und didaktisch-mediale Modellierung. Frankfurt/M. etc.: Peter Lang, 1–31.

Feilke, Helmuth, 2014: „Argumente für eine Didaktik der Textprozeduren.“ In: Bachmann, Thomas / Feilke, Helmut (eds): Werkzeuge des Schreibens. Beiträge zu einer Didaktik der Textprozeduren. Stuttgart: Fillibach bei Klett, 11–34.

Flower, Linda / Hayes, John R., 1980: “The dynamics of composing: Making plans and juggling constraints.” In: Gregg, Lee, W. / Steinberg, Erwin. R. (eds.): Cognitive Processes in Writing. Hillsdale (NJ): Lawrence Erlbaum, 31–50.

Flower, Linda / Hayes, John R., 1981: “A cognitive process theory of writing.” College Composition and Communication 32, 365–387.

Gibbons, Pauline, 2002: Scaffolding Language, Scaffolding Learning. Teaching Second Language Learners in the Mainstream Classroom. Portsmouth (N.H.): Heinemann.

Harsch, Claudia / Neumann, Astrid / Lehmann, Rainer / Schröder, Konrad, 2007: „Schreibfähigkeit.“ In: Beck, Bärbel / Klieme, Eckhard (eds): Sprachliche Kompetenzen. Konzepte und Messung, DESI Studie. Weinheim, Basel: Beltz, 42–62.

Hayes, John R. / Flower, Linda, 1980: “Writing as problem solving.” Visible Language 14, 288–299.

Hoefele, Joachim / Konstantinidou, Liana (in prep.): „Förderung der allgemeinen Schreibkompetenz an Berufsschulen. Prozessorientierte Schreibdidaktik zwischen DaM und DaZ.“ In: Kreyer, Rolf / Güldenring, Barbara / Schaub, Steffen (eds): Angewandte Linguistik in Schule und Hochschule. Neue Wege für Sprachunterricht und Ausbildung. Frankfurt/M.: Peter Lang.

Hyland, Ken, 2000: Disciplinary Discourses: Social Interactions in Academic Writing. Harlow (England): Pearson Education.

Hyland, Ken, 2011: “Learning to write: Issues in theory, research, and pedagogy.” In: Manchón, Rosa M. (ed.): Learning-to-write and Writing-to-learn in an Additional Language. Amsterdam: John Benjamins, 17–36.

IQB, n.y.: „Beispielaufgaben Deutsch Sek 1.“ (last accessed: 14 August, 2015).

Jakobs, Eva-Maria, 2008: „Berufliches Schreiben: Ausbildung, Training, Coaching. Überblick zum Gegenstand.“ In: Jakobs, Eva-Maria / Lehnen, Katrin (eds): Berufliches Schreiben. Ausbildung, Training, Coaching. Frankfurt/M.: Peter Lang, 1–16.

Locke, Terry (ed.), 2010: Beyond the Grammar Wars. A Resource for Teachers and Students on Developing Language Knowledge in the English/Literacy Classroom. New York: Routledge.

Müller, Annette, 2003: Deutsch als Zweitsprache in der Berufsausbildung. Sprachsoziologische Überlegungen, pädagogische Positionen und drei Bausteine zur Sprachförderung. Berlin: Artà.

Myhill, Debra, 2010: “Ways of knowing. Grammar as a tool for developing writing.” In: Locke, Terry (ed.): Beyond the Grammar Wars: A Resource for Teachers and Students. New York: Routledge, 129–148.

Myhill, Debra, 2012: “The ordeal of deliberate choice. Metalinguistic development in secondary writers.” In: Berninger, Virginia Wise (ed.): Past, Present, and Future Contributions of Cognitive Writing Research to Cognitive Psychology. New York: Psychology Press, 247–273.

NAEP (National Assessment of Educational Progress), 2008: “NAEP Scoring. Writing Scoring Specifications.” (last accessed 15 September, 2015).

NAGB (National Assessment Governing Board), 2007: “Writing Framework and Specifications for the 2007 National Assessment of Educational Progress.” (last accessed 14 August, 2015).

Nodari, Claudio, 2002: „Was heisst eigentlich Sprachkompetenz?“ In: Schweizerisches Institut für Berufspädagogik (ed.): Barriere Sprachkompetenz. Dokumentation zur Impulstagung vom 2. November 2001 im Volkshaus Zürich. Zollikofen: SIBP, 9–14.

Nussbaumer, Markus / Sieber, Peter, 1994: „Texte analysieren mit dem Zürcher Textanalyseraster.“ In: Sieber, Peter (ed.): Sprachfähigkeiten – Besser als ihr Ruf und nötiger denn je! Aarau etc.: Verlag Sauerländer, 141–186.

Pohl, Thorsten, 2007: Studien zur Ontogenese wissenschaftlichen Schreibens. Tübingen: Niemeyer.

Pritchard, Ruie J. / Honeycutt, Ronald L., 2006: “The process approach to writing instruction.” In: MacArthur, Charles / Graham, Steve / Fitzgerald, Jill (eds): Handbook of Writing Research. New York, etc.: The Guilford Press, 275–291.

Robinson, Deborah Wilburn, 2000: “Building consensus scoring on the scoring of students’ writing: A comparison of teacher scores versus native informants’ scores.” French Review 73(4), 667–688.

Ruhmann, Gabriela / Kruse, Otto, 2014: „Prozessorientierte Schreibdidaktik: Grundlagen, Arbeitsformen, Perspektiven.“ In: Dreyfürst, Stephanie / Sennewald, Nadja (eds): Schreiben. Grundlagentexte zur Theorie, Didaktik und Beratung. Opladen: Barbara Budrich, 15–34.

Russell, David R., 1997: “Rethinking genre in school and society: An activity theory analysis.” Written Communication 14, 504–554.

Schulz, Wolfram / Ainley, John / Fraillon, Julian (eds.), 2011: ICCS 2009 Technical Report. Amsterdam: IEA.

Shaughnessy, Mina, 1977: Errors and Expectations. New York: Oxford University Press.

Sieber, Peter, 2013: „Probleme und Chancen der Diglossie – Einstellungen zu Mundarten und Hochdeutsch in der Deutschschweiz.“ In: Eriksson, Brigit / Luginbühl, Martin / Tuor, Nadine (eds): Sprechen und Zuhören – gefragte Kompetenzen? Überzeugungen zur Mündlichkeit in Schule und Beruf. Bern: hep, 106–136.

SBFI (Staatssekretariat für Bildung, Forschung und Innovation), 2015: „Berufsbildung in der Schweiz – Fakten und Zahlen.“ (last accessed: 05 April, 2015).

Steinhoff, Thorsten, 2007: Wissenschaftliche Textkompetenz. Tübingen: Niemeyer.

Stemler, Steven E., 2004: “A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability.” Practical Assessment, Research & Evaluation, 9(4). (last accessed: 07 April, 2015).

Swales, John M., 1990: Genre Analysis: English in Academic and Research Settings. Cambridge: Cambridge Univ. Press.

Swales, John M., 2004: Research Genres. Explorations and Applications. Cambridge: Cambridge Univ. Press.

Swisseducation, 2015: “Vocational education and training.” (last accessed: 14 August, 2015).

Vanhove, Jan, 2015: “Analyzing randomized controlled interventions: Three notes for applied linguists.” Studies in Second Language Learning and Teaching 5(1), 135–152.

Weigle, Sara Cushing, 2002: Assessing Writing. (Cambridge Language Assessment Series). Cambridge: Cambridge Univ. Press.

1       VET can also be completed at a full-time vocational school. In the French-speaking and Italian-speaking parts of Switzerland, the proportion of full-time vocational schools is higher than in German-speaking Switzerland. For further information about VET in Switzerland, see Swisseducation (2015).

2       For further information on the intervention and the teaching materials used, see Hoefele/Konstantinidou (in prep.).

3       In the school year of the intervention, there was no parallel class for the information/documentation assistants. For this reason, a vocational class of lab assistants was assigned as the control group. According to school coordinators, these two vocational classes are comparable regarding school achievement and motivation.

4       Results regarding attitudes and other psychological aspects are available but are not the focus of this article.

5       The recording of writing times, as well as the assessment of planning and revising activities, were intended as an attempt to assess some writing process elements. These elements, however, were not taken into consideration when assessing the quality of the final written products.

6       Adaptations of the scoring criteria mostly concerned the Correctness and Style subscales.

7       The evaluation of the content is task-dependent. The description in the table is rather general compared with the description in the project code book.

8       Inter-rater reliability results from the pilot study (Cronbach’s alpha: .93) showed high agreement between the raters on the Writing Competence scale. Only for the subscales Correctness and Style was Cronbach’s alpha under .70. This led to adaptations and clarifications of the subscales for the main survey.

9       For further results, see Hoefele/Konstantinidou (in prep.).