
Glossary of Assessment Terms

Assessment:
The systematic collection, review, and use of information about educational programs and courses, undertaken to improve both student learning and instructional delivery.
Bloom's Taxonomy of Cognitive Objectives:
A hierarchy of six cognitive levels (knowledge, comprehension, application, analysis, synthesis, and evaluation) arranged in order of increasing complexity.
College:
A division within the university composed of departments or schools offering courses and majors leading to a degree in specific areas.
Concentration:
A concentration consists of an approved list of requirements, attached to a major, that provide depth and breadth in a specific area of the major being studied.
Course Embedded Assessment:
The process of reviewing materials generated in the classroom. In addition to providing a basis for grading students, such materials allow faculty to evaluate approaches to instruction and course design.
Curriculum Maps:
Tools that can be used at any stage in the curriculum cycle, whether developing, reviewing, or revising curriculum. They provide a picture, a graphical description, or a synopsis of curriculum components that can be used to encourage dialogue and help faculty ensure that learning experiences are aligned and lead to the achievement of program learning outcomes.
Data:
Data is information, often in the form of facts or figures, gathered from sources such as databases or surveys and used in program review to make calculations or draw conclusions. Data may be quantitative or qualitative.
Degree:
A degree is a student's ultimate goal when attending the university. A degree may be specified (BSAE, MSAE, DPT, EDD) or non-specified (BS, MS, PHD); at Drexel, certificates (CERT, PBC, PMC) are also considered degrees because they are goals toward which students work, although they are not traditional degrees.
Department:
A division within a college offering courses and majors leading to a degree in specific areas.
Direct Measures of Learning:
Students display knowledge and skills as they respond directly to the instrument itself. Examples might include: objective tests, essays, presentations, and classroom assignments.
External Assessment:
Use of criteria (rubric) or an instrument developed by an individual or organization external to the one being assessed. This kind of assessment is usually summative, quantitative, and often high-stakes, such as the SAT or GRE exams.
Formative Evaluation:
Improvement-oriented assessment. The use of a broad range of instruments and procedures during a course of instruction or during a period of organizational operations in order to facilitate mid-course adjustments.
Goals for Learning:
Goals are used to express intended results in general terms. The term goal is used to describe broad learning concepts, for example: clear communication, problem solving, and ethical awareness.
Indirect Measures of Learning:
Students are asked to reflect on their learning rather than to demonstrate it. Examples include: exit surveys, student interviews, and alumni surveys.
Institutional Effectiveness:
The measure of what an institution actually achieves.
Institutional Level Assessment:
Institution-level assessment is aimed at understanding and improving student learning across the institution.
Learning Outcomes:
Observable behaviors or actions on the part of students that demonstrate that the intended learning objective has occurred. Learning outcomes occur at both the program and course levels.
Major:
A major consists of an approved list of requirements that must be completed in order for a student to earn a degree. All students working toward a degree, as defined above, have a major.
Measurements:
The design of strategies, techniques, and instruments for collecting feedback data that show the extent to which students demonstrate the desired behaviors.
Methods of Assessment:
Techniques or instruments used in assessment.
Minor:
A minor is an approved group of courses, usually 24 credits, developed to provide students with an opportunity to explore areas outside of the major. At Drexel, minors are only available to undergraduate students who have a major.
Mission Statement:
A mission statement explains why the organization exists and what it hopes to achieve in the future. It articulates the organization's essential nature, its values, and its work.
Modifications/Improvement Plans:
Recommended actions or changes for improving student learning, service delivery, etc. that respond to the respective findings of the measurement evaluation.
Objectives for Learning:
Objectives are used to express intended results in precise terms. Further, objectives are more specific as to what needs to be assessed and thus are a more accurate guide in selecting appropriate assessment tools. Example: Graduates in Speech Communication will be able to interpret non-verbal behavior and to support arguments with credible evidence.
Performance Assessment:
The process of using student activities or products, as opposed to tests or surveys, to evaluate students' knowledge, skills, and development. Methods include: essays, oral presentations, exhibitions, performances, and demonstrations. Examples include: reflective journals (daily/weekly); capstone experiences; demonstrations of student work (e.g. acting in a theatrical production, playing an instrument, observing a student teaching a lesson); products of student work (e.g. Art students produce paintings/drawings, Journalism students write newspaper articles, Geography students create maps, Computer Science students generate computer programs, etc.).
Portfolio:
An accumulation of evidence about individual proficiencies, especially in relation to learning standards. Examples include but are not limited to: Samples of student work including projects, journals, exams, papers, presentations, videos of speeches and performances.
Program:
To graduate from the university, a student must complete a program, which is made up of a specific group of courses denoted as a major (or majors) leading to a corresponding degree (or degrees). Major and program are often used interchangeably; both can refer to the group of requirements that must be completed to earn a degree.
Program Assessment:
Uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value added, and used for improvement or for accountability.  Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem?  This might be assessed through a capstone project, by combining performance data from multiple senior level courses, collecting ratings from internship employers, etc.
Quantitative Methods of Assessment:
Methods that rely on numerical scores or ratings. Examples:   Surveys,   Inventories,   Institutional/departmental   data,   departmental/course-level exams (locally constructed, standardized, etc.).
Qualitative Methods of Assessment:
Methods that rely on descriptions rather than numbers. Examples: Ethnographic field studies, logs, journals, participant observation, and open-ended questions on interviews and surveys.
Reliability:
Reliable measures are measures that produce consistent responses over time.
Reflective Essays:
Generally brief (five- to ten-minute) essays on topics related to identified learning outcomes, although they may be longer when assigned as homework. Students are asked to reflect on a selected issue. Content analysis is used to analyze the results.
Rubrics:
Written and shared criteria for judging performance that indicate the qualities by which levels of performance can be differentiated and that anchor judgments about the degree of achievement.
Student Outcomes Assessment:
The act of assembling, analyzing and using both quantitative and qualitative evidence of teaching and learning outcomes, in order to examine their alignment with stated purposes and educational objectives and to provide meaningful feedback that will stimulate self-renewal.
Summative Evaluation:
Accountability-oriented assessment. The use of data assembled at the end of a particular sequence of activities, to provide a macro view of teaching, learning, and institutional effectiveness.
Teaching-Improvement Loop:
Teaching, learning, outcomes assessment, and improvement may be defined as elements of a feedback loop in which teaching influences learning, and the assessment of learning outcomes is used to improve teaching and learning.
Trends:
Current tendencies or movements in a particular direction.
Validity:
As applied to a test, validity refers to a judgment concerning how well the test does in fact measure what it purports to measure.

References:

Adapted from the Assessment Glossary compiled by the American Public University System, 2005. http://www.apus.edu/Learning-Outcomes-Assessment/Resources/Glossary/Assessment-Glossary

Maki, P. L. (2004). Assessing for Learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus Publishing.

Palomba, C. A. & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass Publishers.

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass Publishers.

State University of New York at Potsdam