                          CHAPTER FIVE

               SEASONING YOUR OWN SPAGHETTI SAUCE
                AN OVERVIEW OF METHODS AND MODELS
                      BY CAROLYNE W. ARNOLD

There were three principal charges to the National Assessment Team
(NATs). We were to introduce site participants to the wide variety
of assessment methods available, both qualitative and quantitative;
assist them in devising assessment strategies and procedures
custom-tailored to the needs and specifications of their particular
campuses; and provide practical training in how to design
assessment instruments, collect and analyze data, interpret
results, and report findings. Our task, then, was not to create a
unified, standardized assessment plan to be adopted by all seven
women's studies programs. Instead, we were to provide participants
with an array of strategies they could adapt to the specific
questions each different institution intended to pose about student
learning. As one gourmand among us put it memorably, assessment
means "seasoning your own spaghetti sauce." We assessment experts
were to introduce the seasonings available in the spice cabinet.

As outside consultants, we appropriately represented a variety of
areas of expertise and a range of experience using different kinds
of instruments, from interviews to questionnaires; from dialogic
journals to focus groups; from ethnographic observations to
statistical comparisons. Two of us were trained in quantitative
assessment, two had special strengths in developmental models, one
had focused her research on an ethnographic model, five had used a
range of qualitative approaches, and all were familiar with key
concepts in women's studies. Among the six of us, we had more than
a century's worth of hands-on experience in assessment and
evaluation on campuses, in policy centers, or at the national
level.

Throughout our training sessions, we emphasized the importance of
developing an assessment plan that would be appropriate for each of
the institutions and that would reveal the greatest information
about the questions each women's studies program had posed about
student learning in their classes. Each of us recognized that not
all assessment techniques were appropriate for all things or all
institutions. We also felt challenged to find a way to develop an
emerging assessment methodology that would be commensurate with
feminist theory, pedagogy, and practice.

Overarching all the models and methods we presented were two familiar
notions of evaluation: formative evaluation and summative evaluation.
Formative evaluation is oriented toward
gathering data that will allow decision makers to make informed
choices about improving an existing program. Ideally, formative
evaluation clarifies goals and objectives and prompts program
revisions during the course of the assessment, allowing for
mid-course corrections. There is wide latitude in selecting
methodologies, types and sources of data, study subjects, and means
of data gathering. The primary aim is to generate data
that present comprehensive information about designated problems,
issues, and questions from the perspectives of students, faculty
members, alumni, administrators, and others.

Summative evaluation, by contrast, is a process by which data are
gathered to determine the effectiveness and continued viability of
a program. Findings and results are used to prove the value and
worth of a program, to justify its need, or to make a go or no-go
decision about whether to keep the program. Whichever approach it
chose, each site was encouraged to select the model that best fit its
special circumstances, to define program goals in areas that
affected student learning, and to derive student learning
objectives based upon these goals. What follows are brief glimpses
of some of the possible assessment methods we offered to
participating institutions.

                      FEMINIST ETHNOGRAPHY

Mary Kay Thompson Tetreault, whose earlier work examined
developmental phases for faculty and curriculum development, is
dean of the School of Human Development and Community Service at
California State University-Fullerton. In collaboration with
Frances Maher, she has done extensive work in developing feminist
ethnography as an assessment method. Tetreault's research has focused
increasingly on direct observation of classroom dynamics and culture.

Feminist ethnography is an approach that seeks to immerse
researchers in the culture of each institution--and, within that
context, each classroom--to enable them to decipher the meanings that
participants make of events. The researchers seek continually to be
conscious of the different positions and points of view presented
in transcript and interview data by the study subjects--the
professors, the students, and themselves. In other words, this
methodology seeks to juxtapose and display the perspectives of the
researchers, the informants, and the reader by putting them into
explicit relationships with one another.[2]

According to the authors, in feminist ethnography sources of data
may be transcripts or observations of class discussions and
classrooms or interviews with teachers and students. Data are
analyzed line by line, which allows interpretations of patterns of
what gets said, by whom, and in what sequence statements are made.
Using such a technique reveals the interactions of different
student viewpoints and permits an analysis that incorporates the
influence of gender, race, age, and other differentials that affect
the process of knowledge construction. Transcripts show the role of
the participants' "situated knowledge" or "positionality" in
weaving together and balancing different versions of feminism and
"partial truths."

            ILLUMINATIVE AND PARTICIPATORY EVALUATION

Another of the NATs--Joan Poliner Shapiro, associate dean of the
College of Education at Temple University--has researched and
written about illuminative and participatory approaches to
assessment.[3] In earlier writings Shapiro focused primarily on
illuminative evaluation as an approach for assessing women's
studies programs and projects.[4] As Shapiro explains, illuminative
evaluation is an alternative model, one of the first to address
some, though not all, of the criticisms and
questions raised by feminist and nontraditional educational
evaluations.

The illuminative model serves as an example of a nontraditional
approach to measuring the success or failure of innovative projects
and encompasses phenomenological and ethnographic modes as well.
Illuminative evaluation is so broad-based that it utilizes not only
the techniques of participant observation, interviews, and analysis
of documents in the form of a case study but also, where
appropriate, incorporates questionnaires and other quantifiable
instruments. The advantage of illuminative evaluation is that both
qualitative and quantitative methods can be combined to "illuminate
the subject."

Illuminative evaluation, as a strategy, makes no claim to perfect
objectivity. The evaluation is not supposed to be value-free.
Illuminative evaluation also is a form of assessment that can be
called goal-free and, thus, is particularly useful for evaluating
new programs when long-term effects cannot be anticipated.[5] The
illuminative approach also has a number of qualities that would
seemingly make it well suited for assessing women's studies
programs. Before the strategy is employed, however, both its
strengths and weaknesses as a methodology should be fully explored
to ascertain whether or not it is appropriate for a given setting.

Like illuminative evaluation, participatory evaluation is
interactive in its approach. This technique contains both
qualitative and quantitative methodologies and is well suited for
measurements of subtle and not-so-subtle changes. Participatory
evaluation allows the evaluator to be a knowledgeable insider and
no longer confines the assessor to the impartial outsider role.
Many believe this type of relationship engenders greater trust
between the evaluator and those being evaluated. It tries to avoid
the subject/object split, enabling
those participants whose voices may not typically be heard to
speak.

Qualitative methods such as participant observation, textual
analysis, and interviews with participating individuals supply the
data upon which findings are based. Similarly, quantitative
measures such as enrollment figures and other numerical data are
accepted sources of information and are readily incorporated into
the assessment process. In Shapiro's participatory approach,
assessment and evaluation become interchangeable. In participatory
evaluation, the focus is on the improvement of aspects of the
program or project over time, as opposed to an emphasis on a final
judgment. In this model, the process of the evaluation is extremely
important. To put it another way, formative evaluation is stressed
more than summative evaluation. 

Shapiro also has written in the area of feminist pedagogy. In a
paper with Carroll Smith-Rosenberg, she explores student learning
in an introductory women's studies ethics course by turning to the
voices of the students themselves as they speak through the pages
of their journals.[6] In another paper, she uses case-study
analysis as a way to enable nonfeminist and feminist students, in
a women's studies classroom, to better hear and understand each
other.[7] In all of her writings, feminist pedagogy and assessment
tend to overlap. For Shapiro, journals and case studies not only
can be used as effective pedagogical approaches; they also have the
potential to be used as appropriate and meaningful techniques for
the assessment of student learning.

                    PORTFOLIOS AND ASSESSMENT

Pat Hutchings of the American Association for Higher Education has
done pioneering work in developing and promoting the portfolio as
a means of collecting student work over time. She is especially
adept at elaborating aspects of the model in simple, direct terms
that forcefully present its strengths and weaknesses. Many questions
can be asked of, and many uses derived from, this file of information
on student learning.

According to Hutchings, portfolio assessment is "a collection of
student work done over time. Beyond that, the rule is variety--and
appropriately so. One finds portfolios used for student advising on
one hand, for program evaluation on the other; some portfolios
include only written work, others a broader array of 'products'.
Some are kept by students, some by department advisors, some by
offices of assessment...."[8]

Distinctive features of portfolio assessment, as listed in
Hutchings's article, are as follows:
* Unlike many methods, portfolios tell you not only where the
     students end up but also how they got there.
* Portfolios put explicit emphasis on human judgment and meaning-
     making.
* Because they prompt (even demand) conversation, portfolios lend
     themselves to use.
* Portfolios are less subject to misuse than apparently simpler,
     single-score methods.
* Most importantly, portfolios can be educational for students.

The variety rule extends to methods of data collection as well. For
example, a portfolio may include course assignments, research
papers, an audio-tape of a presentation, materials from a group
project, and other types of work. Portfolios reveal not only
outcomes--that is, what students know and can do at the end of
their studies--but they get "behind outcomes." As Hutchings
describes it, "They reveal learning over time and are participatory
in nature. They invite conversation and debate between, for
instance, the student, the department advisor, a faculty member
from a support area, the director of student services, and perhaps
others from external audiences." On the other hand, the
disadvantages are that "they're bulky, time consuming, difficult to
make sense of, maybe not what the legislature had in mind, and they
are in an early, unproven stage of development."[9]

       COLLABORATIVE LEARNING AND WOMEN'S WAYS OF KNOWING

Jill Mattuck Tarule, dean of the College of Education and Social
Services at the University of Vermont and one of the authors of
Women's Ways of Knowing, encouraged participants to use
collaborative learning models in their final designs. Collaborative
learning shares a number of traits with women's studies. Both are
learner-focused and process-focused. They each see the learner as
constructing meaning and learning as embedded in context. Both
also see negotiating power lines as a fundamental task. While
collaborative learning does not necessarily involve a concept of
positionality, it challenges students to work together in small
groups, examining a particular perspective or creating a salient
critique. Women's studies argues strongly that knowledge is
situated--made partial because of the position of the knower to the
known, especially as that is affected by race, class, gender,
sexuality, and other markers. Research on cooperative and
collaborative learning groups suggests that students, as they
struggle to construct knowledge together, become conscious of their
distinct differences rooted in their own experiences. Collaborative
learning reflects an intentionality about offering a social
critique that also distinguishes women's studies.

Tarule also advocated tapping as assessment resources common
classroom practices that are especially rich in revealing how learning
occurs over time. Among her suggestions were the dialogic journal,
periodic free-writes, student journals, and student
self-assessments. The dialogic journal is kept in the classroom and
is designed to capture a communal record of the learning process in
the class over the semester. Periodic free-writes, like the
dialogic journal, might already be a regular part of a course and
yet also become data read later for assessment purposes. Student
journals are a variation of the two writing techniques. Some are
structured with a very definite set of questions to answer; others
are more free-flowing and spontaneous.

Finally, one might incorporate self-assessments into the fabric of
a given course with an eye to using them later to answer some
questions about course content, student learning, or class
dynamics. Students can be asked early on to do a free-write on why
they took the course or what key questions they hope the course
will answer. In the middle of the course, students can be asked to
write on what they are thinking about the course at that point.
Finally, they can do a self-assessment at the end. Another
variation of the self-assessment is to design it as a progress
report in which the students write a letter to the professor every
two weeks which the professor answers. In a course at Wellesley
College, students wrote letters about why they took a specific
course and then mailed them to their parents. Ultimately, all these
sources of data can be examined through many different lenses. Such
a data base could even become part of a longitudinal study of
student learning.
 
Offering women's studies faculty members another lens through which
to consider how students learn, Tarule presented the main contrasts
between separate and connected knowing as they emerged in Women's
Ways of Knowing.[10] Separate knowing, more typically associated
with men and more valued as a way of knowing in the traditional
academy, seeks to construct truth--to prove, disprove, and
convince. Connected knowing, which emerged from the authors'
research as more typical of women, seeks to construct meaning, to
understand and be understood. While separate knowing is
adversarial, connected knowing is collaborative. Where the
discourse of separate knowing is logical and abstract, that of
connected knowing is narrative and contextual. Instead of
detachment and distance in relation to the known, the connected
knower seeks attachment and closeness. While feelings are seen to
cloud thought for the separate knower, feelings illuminate it for
the connected knower. Objectivity for the former is achieved by
adhering to impersonal and universal standards; objectivity for the
latter is achieved by adopting the other's perspective. While the
separate knower is narrowing and discriminating, the connected
knower is expansive and inclusive. For the separate knower, there
is the risk of alienation and absence of care. For the connected
knower, there is the risk of loss of identity and autonomy.

Tarule emphasized that separate and connected knowing are gender-
related, not gender-specific. She urged faculty members to narrate
with students how they do or do not go back and forth between the
two ways of knowing and under what circumstances and in what
contexts they choose one approach over the other. By being involved
in the students' narratives of their learning processes, one can
hold the complexity in tension.

         CREATING PROGRAM GOALS AND LEARNING OBJECTIVES

In discussing the relation between goals and measures, Mary Kay
Thompson Tetreault posed two questions: How do we begin to
conceptualize what we want to know? What are the things that
shape what we want to know? Tetreault thinks it is valuable to
probe our own personal histories as students and as teachers to
trigger our thinking. What do we care passionately about? How were
we silenced? What is the relation of our own history to that of the
new generations of students? She also urged women's studies faculty
members to look at their program goals and imagine what their
students might say. How would the collection of data change? Are
the questions the right questions for students? As they formulated
goals, Tetreault reminded faculty members to be conscious of their
own partial perspectives and ways to enhance that partiality with
a variety of other people's perspectives.


As a specialist in the field of public health and as the assessment
expert whose research relied predominantly on quantitative methods, I
was to function as a resource for those interested in developing more
statistically oriented instruments.
One of the most prevalent misconceptions is that quantitative and
qualitative evaluation are necessarily in opposition. In fact, they
complement one another: used together, the two can reveal different
and illuminating information about a given subject. Most campuses in
our project ended up using a combination
of quantitative and qualitative methods to assess student learning.
The method finally chosen depends on the question, issue, or
problem you want to examine or solve.

In creating instruments used in quantitative analysis, it is
important to use language that can be translated into measurable
outcomes. "Recognize," "describe," and "identify," for instance,
are easier to measure than "appreciate," "engage," or "foster."
Similarly, skills or learning behaviors you can see and observe
lend themselves more easily to having a numerical value attached to
them. In the preliminary goal statements each women's studies
program prepared, a number of problems stood out which are common
when first setting up an assessment plan:

* Participants were unclear about the conceptual distinction
     between goals and objectives. They did not separate the concept
     (the goal) from the measurement of it (the objective).
* There often was a blurring of program/departmental/institutional
     goals, faculty/instructional objectives, and student learning
     objectives.
* The language used to formulate and describe program goals did not
     convey the appropriate meaning of the concept (direction,
     aspiration, expectations, ideals, and purposes of the
     programs).
* The language used to define student learning objectives was vague
     or ambiguous. Learning objectives did not identify and specify
     measurable end-products--that is, "outcomes."

To help programs avoid such conundrums, I sought to train
participants in five areas:

* how to distinguish between program goals and student learning
     objectives 
* how to conceptualize, formulate, and state goals in appropriate
     language
* how to derive student learning objectives from program goals
* how to translate student learning objectives into outcomes
* how to translate outcomes into measurable (quantifiable)
     indicators of student learning. 

A program goal actually is no more than a generalized statement
expressing a program's expectations, a "timeless statement of
aspiration." Program goals should be stated in terms that are
clear, specific, and measurable. They also should express consensus
about what the program aims to do.

Developing program goals is useful in any assessment project
because they establish the program's rationale, framework, and
parameters. They also serve as the philosophical justification for
the program's existence. Many in our project found that program
goals gave their program focus and direction. For assessment
purposes, they also serve as a monitor to gauge a program's
accomplishments. In the case of the "Courage to Question" project,
we used the creation of program goals as a way to determine
specific areas in women's studies programs that involved the
knowledge base, critical skills, personal growth, and pedagogy. In
doing so, program goals ultimately can reflect the educational
needs of both students and the institution. They also permit a
program to focus on what students should know and be able to do.

As an assessment project on student learning takes shape, it also
is important to define learning objectives clearly. Such objectives
describe an outcome (intellectual or personal change) that students
should be able to demonstrate or achieve as the result of an
instructional activity or a formal or informal learning experience.
These outcomes are observable and measurable. Learning objectives
are useful because they specify what students are expected to be
able to do expressly as a result of instruction. Ultimately, they
form the basis for evaluating the success or failure of
instructional efforts. They also supply valuable feedback to both
the students and the instructor. In addition, they serve as a
vehicle for developing program goals, curriculum design, teaching
plans, instructional activities, and assessment activities. Because
they summarize the intended outcomes of the course, they can
communicate to colleagues and others the intent of a course clearly
and succinctly.

It is critical that women's studies programs define their program
goals, and thus their direction, and spend time thinking about
exactly what and how they want students to learn. By formulating
program goals and articulating learning objectives in terms of
demonstrable, measurable outcomes, faculty members can measure the
success or failure of their instructional efforts.

                           CONCLUSION 

As my colleague Joan Shapiro describes it, assessment is really
just paying attention and listening, but it is also interactive and
ongoing. There is no shortage of choices among assessment
instruments available to help do just that. Two factors, however,
are especially important to consider in designing an assessment
plan: choosing multiple and appropriate measures that will produce
the information you need to know and choosing methods that
complement the theoretical underpinning of the academic discipline
or issue you are investigating.

Drawing on a common procedure for planning qualitative assessment
designs, Tarule described a matrix that all but one of the seven
participating programs adopted in creating their final assessment
design. The matrix allows one to assess a particular class or an
entire project. One dimension lists a set of intentions or goals
horizontally across the top; listed vertically down the side are the
methods or sources relied upon to discover information about those
goals. (See page 102.) The matrix invites
multiplicity, giving several perspectives from which to view a
subject area--echoing Shapiro's notion of the importance of
triangulation. It also invites the use of information already
embedded in what we do. Using unobtrusive measures that are integral
to the work you are doing is always preferable to a bulky,
complicated, and often expensive external measurement. A
chemist once said that being a good chemist is like being a good
cook: You need to know what you want to do, what materials you will
need, and how to mix them properly. Assessment shares that culinary
analogy. The National Assessment Team for "The Courage to Question"
offered an array of spices for faculty members to choose from as
they were posing questions about student learning in women's
studies classes. Ultimately, the campuses themselves seasoned the
meal according to their own tastes.

             SAMPLE PRE-ASSESSMENT QUESTIONS TO POSE

                      ESTABLISHING CONTEXT

A. Context of the institutional environment:
Persons, Groups, and Institutions Affected and Involved
* Who will be involved in the planning and implementation of the
     assessment?
     Students, WS faculty members, other faculty members,
     colleagues, administrators, alumni, employers, professional
     associations, regents, trustees, government officials,
     politicians?
* Who will be the subjects of the assessment?
* Who will see the results of the measurement?
* Who will be affected by the results of the assessment?
* How will the results affect the various constituencies?

Type and Process of Assessment
* Did the request for assessment originate internally or from the
     outside?
* Is this a "one-time-only" assessment or is it part of an
     established procedure?
* Do you need to obtain standardized results?
* Do you have special resources available to assist in the
     planning, development, conduct, and analysis of the assessment?
* What is the institutional attitude/climate regarding the WS
     program: supportive, apathetic, hostile?
* How do these responses to women's studies manifest themselves?
* Are there any potential or anticipated problems, negative
     consequences, or pitfalls to avoid from the assessment itself or
     from its results?
* Is it important to ensure the anonymity, confidentiality, or
     security of the subjects, process, or results of the
     assessment?

B. Context of the instructional setting:
Size
* How many students will take the course?
* Will students be grouped or sectioned in some way?
* If so, by what criteria?
* What is the estimated total yearly enrollment for the course?
* Is there a limit on class size?
* If so, who sets the limit?
* Is there a waiting list to take the course?
* Who sets the criteria that determine which students will be
     admitted?

Environment
* In what type of institution will the course be taught?
* What is the size of the institution?
* Where is the institution located?
* In what type of location will classes be held?
* What type of classroom facilities are available?

Time
* Which period of calendar time will the course cover?
* How much total time is available for class sessions?
* How long are single class sessions?
* How many class sessions are there?
* How often do class sessions meet?
* Is there opportunity to hold extra class sessions, formal or
     informal?
* How much out-of-class work can be expected of the student?
* What are the other concurrent time demands of students? (other
     courses, work, families, etc.)
* What is the assignment and exam schedule of the course? Of other
     courses?
* What time of day/evening will class sessions be held?
* How much time is available for office hours, advising, tutoring?

Resources
* How much time is available for new course development? Is there
     an incentive?
* How much time is available for the development of materials prior
     to or during the implementation of new courses?
* How much time is available for course preparation?
* How much money and other resources are available for materials
     development, guest lectures, etc.?
* What information sources or references are available? In what
     form? Where are they located?
* What human resources may be drawn upon and to what extent?
* What types of supplies and equipment are available? How
     accessible are they?
* Who will teach the course?

Precedents
* Are there any precedents or conventions to which the instructor
     must adhere (grading system, pass/fail system,
     competency-based system, team teaching approaches, method of
     student selection to take course, etc.)?
 
C. Students as context:
Demographics
* What is the average age, age range of the student body, WS
     program, class?
* What races, ethnicities, intra-ethnicities, nationalities,
     languages?
* What is the sex representation?
* What is the economic, social, class representation of the student
     body, WS program, class?
* What is the mix of political and ideological beliefs?
* Is there a prevailing value system?
* What is the marital status representation?
* Do students typically have children? What are the ages of their
     children?
* Where do students typically reside--city, suburb, on campus, off
     campus?
* Are students typically employed--full-time, part-time, on campus,
     off campus, work study?
* What is the mix of types of jobs students hold?
* Are the students distinguished by any special physical handicaps
     or assets involving health, endurance, mobility, agility,
     vision, hearing, etc.?

Entry Level
* What is the general educational level of the student body, WS
     program, class? Immediately out of high school, transfers from
     community colleges, adult returners?
* What is the general ability level (aptitude) of the students,
     e.g. advanced placement, honors program, remedial, etc.?
* What preparation do students have in the course subject content?
* Have students taken WS courses before?

Background, Prerequisites, Motivations
* Is there anything special about the racial, ethnic, age, sex,
     social, cultural, economic, political background, level of
     educational attainment, places of residence of the student
     body, WS program, class?
* Do students tend to have serious biases or prejudices regarding
     the subject matter, instructor, teaching methods, etc.?
* What background characteristics do students have in common?
* Why are students taking the course?
* Is it required or an elective?
* What do students expect to get out of the course?
* How would you describe the level of motivation, interest?
* What types of rewards--immediate or long range--do students
     expect to gain from taking the course?
* What types of roles--personal and professional--are students
     likely to assume upon graduation? Will they take more
     courses, begin a family, or do both? What is the percentage of
     students who will assume these roles and at what stage in
     their lives?
* Under what circumstances (family life, personal life, career
     life) will students likely use what they will learn in the
     course?



1. Frances Maher and Mary Kay Thompson Tetreault, "Doing Feminist
Ethnography: Lessons from Feminist Classrooms," International Journal
of Qualitative Studies in Education 6 (January 1993). The article
addresses methodological issues faced in ethnographic studies of
feminist teachers in different types of colleges and institutional
settings.
2. Maher and Tetreault, 1990.
3. Shapiro's article--"Participatory Evaluation: Towards a
Transformation of Assessment of Women's Studies Programs and
Projects," Educational Evaluation and Policy Analysis 10 (Fall
1988): 191-99--is a thorough discussion of the evolution of these
two models and their usefulness in assessing women's studies
programs.
4. Shapiro, Secor, and Butchart, 1983; Shapiro and Reed, 1984;
Shapiro and Reed, 1988.
5. Shapiro, 1988.
6. Shapiro and Smith-Rosenberg, 1989.
7. Shapiro, 1990.
8. Pat Hutchings, "Learning Over Time: Portfolio Assessment," AAHE
Bulletin 42 (April 1990).
9. Ibid.
10. Mary Field Belenky, Blythe McVicker Clinchy, Nancy Rule
Goldberger, and Jill Mattuck Tarule, Women's Ways of Knowing: The
Development of Self, Voice, and Mind (New York: Basic Books, 1986).
11. Based on Belenky, Clinchy, Goldberger, and Tarule, Women's Ways
of Knowing, 1986; and Peter Elbow, Writing Without Teachers (New
York: Cambridge University Press, 1973). (With thanks to Hilarie
Davis for her suggestions.)