Archived Assessment Report
| Program | ENGL Communications Gen Ed |
| Assessment Reporter | [email protected] |
| Theme | Practicing Community |
| Review Year | 2024-2025 - Final Report |
| Learning Outcome (or Gen Ed Essential Skill) | Focus Area |
|---|---|
| 2b. Critical Thinking: Evidence Acquisition | Does the student include a range of sources relevant to their research question? |
| 2c. Critical Thinking: Evidence Evaluation | How well do students evaluate sources to incorporate into the final deliverable? |
| 3b. Information and Digital Literacy: Digital Literacy | Does the student effectively communicate, compute, create, and design in digital environments? |
| Learning Outcome (or Gen Ed Essential Skill) | Description of Assessment Tool | Population or Courses Assessed | Hypothetical Analysis/Target |
|---|---|---|---|
| 2b. Critical Thinking: Evidence Acquisition | Students will create an informative research report, which may include multimedia elements. | ENGL 1110, ENGL 1110P | Our analysis will focus on whether certain types of assignments allow students to better demonstrate their acquisition of these essentials than others. Part of the data we're collecting is also the genre and medium of the assigned project. |
| 2c. Critical Thinking: Evidence Evaluation | An informative research report which may contain multimedia elements | ENGL 1110, ENGL 1110P | Our analysis will focus on whether certain types of assignments allow students to better demonstrate their acquisition of these essentials than others. Part of the data we're collecting is also the genre and medium of the assigned project. |
| 3b. Information and Digital Literacy: Digital Literacy | An informative research report which may contain multimedia elements | ENGL 1110, ENGL 1110P | Our analysis will focus on whether certain types of assignments allow students to better demonstrate their acquisition of these essentials than others. Part of the data we're collecting is also the genre and medium of the assigned project. |
| Learning Outcome (or Gen Ed Essential Skill) | Summary of Results | Reflection on Focus Area | Interpretation of Results |
|---|---|---|---|
| 2b. Critical Thinking: Evidence Acquisition | For English 1110: 46 students did not demonstrate proficiency (17.42% of participating students) while 218 students achieved proficiency (82.58% of participating students). For English 1110P: 21 students did not demonstrate proficiency (45.65% of participating students), while 25 students demonstrated proficiency (54.35% of participating students). | Because there was no consistency in naming the activity, it is impossible to determine whether genre and medium affected students' ability to demonstrate acquisition of skills. | The overall data for spring term reveals that participating students are succeeding in demonstrating competency in evidence acquisition. However, our participation for English 1110 exceeded participation among our English 1110P courses. For example, our assessment pool for English 1110 included only 265 students out of 603 total students enrolled in the course. Thus, we are missing a majority of the student population enrolled in this course. Similarly, our participation for English 1110P included just 46 out of 111 total students enrolled; again, the majority of students enrolled in this course did not participate in the assessment. This suggests that our data may not be as accurate as it could be because it includes only a small sampling of students. To improve the accuracy of our data collection, we should consider broadening our participation to include all faculty. |
| 2c. Critical Thinking: Evidence Evaluation | For English 1110: 46 students did not demonstrate proficiency (17.42% of participating students) while 218 students achieved proficiency (82.64% of participating students). For English 1110P: 21 students did not demonstrate proficiency (45.65% of participating students), while 25 students demonstrated proficiency (54.35% of participating students). | Because there was no consistency in naming the activity that was assessed, it is impossible to determine whether genre and medium affected students' ability to demonstrate acquisition of skills. | The overall data for spring term reveals that participating students are succeeding in demonstrating competency in evidence evaluation. However, our participation for English 1110 exceeded participation among our English 1110P courses. For example, our assessment pool for English 1110 included only 265 students out of 603 total students enrolled in the course. Thus, we are missing a majority of the student population enrolled in this course. Similarly, our participation for English 1110P included just 46 out of 111 total students enrolled; again, the majority of students enrolled in this course did not participate in the assessment. This suggests that our data may not be as accurate as it could be because it includes only a small sampling of students. To improve the accuracy of our data collection, we should consider broadening our participation to include all faculty. |
| 3b. Information and Digital Literacy: Digital Literacy | For English 1110: 46 students did not demonstrate proficiency (17.42% of participating students) while 219 students achieved proficiency (82.64% of participating students). For English 1110P: 21 students did not demonstrate proficiency (45.65% of participating students), while 25 students demonstrated proficiency (54.35% of participating students). | Because there was no consistency in naming the activity that faculty assessed, it is impossible to determine whether genre and medium affected students' ability to demonstrate acquisition of skills. | The overall data for spring term reveals that participating students are succeeding in demonstrating competency in digital literacy. However, our participation for English 1110 exceeded participation among our English 1110P courses. For example, our assessment pool for English 1110 included only 265 students out of 603 total students enrolled in the course. Thus, we are missing a majority of the student population enrolled in this course. Similarly, our participation for English 1110P included just 46 out of 111 total students enrolled; again, the majority of students enrolled in this course did not participate in the assessment. This suggests that our data may not be as accurate as it could be because it includes only a small sampling of students. To improve the accuracy of our data collection, we should consider broadening our participation to include all faculty. |
| 2b. Critical Thinking: Evidence Acquisition | |
|---|---|
| Describe the change that was implemented. | In this round of data collection, we learned we need to refine our data collection process and methodology to yield more relevant and consistent data for analysis. Thus, in our next round of data collection, we will require all faculty teaching ENGL 1110 and 1110P to participate in the assessment, and we will work to ensure that all faculty assess the same activity (final research project). We plan to move forward with more consistent and timely communication of the requirements for assessment, and we will work with faculty to embed the assessment rubric in their Brightspace courses. |
| Type of Change | |
| Change in Assessment Approach or Tools? | Yes, these changes will require a change in our assessment approach because we are moving from a small pool of faculty participation to broad faculty participation. Doing so necessitates more transparent communication about the importance of assessment, along with how assessment should be completed. We will initiate regular communication with our department members about what courses and activities will be assessed, as well as what tools we will use to conduct the assessment. We will also organize a “norming” session at a future English department meeting so that we can better train faculty in using the assessment rubric to evaluate student proficiency in each category. Finally, we will ensure that faculty across course sections use a consistent selection process and naming structure for the activity assessed to ensure greater validity and reliability in our assessment. |
| What data motivated the change? | The motivation for these changes comes down to the number of students who participated in the assessment compared to the number of students enrolled in these courses. There is a significant discrepancy between these numbers, and we posit that greater participation would yield more relevant data. Beyond that, we noticed inconsistent activities were being assessed, so we want to provide faculty with more transparent guidance on how to complete the assessment, along with what activity to assess. |
| Hypothesis about the effect the change will have? | We posit that these changes will provide us with more consistent, relevant, and accurate data about student proficiency in the areas assessed. We anticipate that increased participation will affect our proficiency rates, but we are not sure how. We also expect that our results will show greater consistency between students who demonstrate proficiency in this area and students who pass the class. We expect that the proposed changes, namely more consistency among faculty in using the rubric to score projects, will yield more reliable results. We anticipate that students who fail to demonstrate proficiency in one or more of the GenEd assessment criteria will be more likely to fail the course (as opposed to data showing that students who failed the assessment passed the course). |
| 2c. Critical Thinking: Evidence Evaluation | |
|---|---|
| Describe the change that was implemented. | In this round of data collection, we learned we need to refine our data collection process and methodology to yield more relevant and consistent data for analysis. Thus, in our next round of data collection, we will require all faculty teaching ENGL 1110 and 1110P to participate in the assessment, and we will work to ensure that all faculty assess the same activity (final research project). We plan to move forward with more consistent and timely communication of the requirements for assessment, and we will work with faculty to embed the assessment rubric in their Brightspace courses. To close the gap on student proficiency related to evidence evaluation, the FYC Committee will share resources related to teaching research processes, specifically gathering and evaluating sources. Committee members will share strategies that they will use in their courses and disseminate them to other faculty. |
| Type of Change | |
| Change in Assessment Approach or Tools? | Yes, these changes will require a change in our assessment approach because we are moving from a small pool of faculty participation to broad faculty participation. Doing so necessitates more transparent communication about the importance of assessment, along with how assessment should be completed. We will initiate regular communication with our department members about what courses and activities will be assessed, as well as what tools we will use to conduct the assessment. We will also organize a “norming” session at a future English department meeting so that we can better train faculty in using the assessment rubric to evaluate student proficiency in each category. Finally, we will ensure that faculty across course sections use a consistent selection process and naming structure for the activity assessed to ensure greater validity and reliability in our assessment. |
| What data motivated the change? | The motivation for these changes comes down to the number of students who participated in the assessment compared to the number of students enrolled in these courses. There is a significant discrepancy between these numbers, and we posit that greater participation would yield more relevant data. Beyond that, we noticed inconsistent activities were being assessed, so we want to provide faculty with more transparent guidance on how to complete the assessment, along with what activity to assess. Although more than 80% of our students demonstrated proficiency in this area, we strive to close the proficiency gap as much as possible. We want to research and incorporate proven pedagogical approaches for teaching research processes at the college level to help students demonstrate competency in this critically important life skill. |
| Hypothesis about the effect the change will have? | We posit that these changes will provide us with more consistent, relevant, and accurate data about student proficiency in the areas assessed. We anticipate that increased participation will affect our proficiency rates, but we are not sure how. We also expect that our results will show greater consistency between students who demonstrate proficiency in this area and students who pass the class. We expect that the proposed changes, namely more consistency among faculty in using the rubric to score projects, will yield more reliable results. We anticipate that students who fail to demonstrate proficiency in one or more of the GenEd assessment criteria will be more likely to fail the course (as opposed to data showing that students who failed the assessment passed the course). |
| 3b. Information and Digital Literacy: Digital Literacy | |
|---|---|
| Describe the change that was implemented. | In this round of data collection, we learned we need to refine our data collection process and methodology to yield more relevant and consistent data for analysis. Thus, in our next round of data collection, we will require all faculty teaching ENGL 1110 and 1110P to participate in the assessment, and we will work to ensure that all faculty assess the same activity (final research project). We plan to move forward with more consistent and timely communication of the requirements for assessment, and we will work with faculty to embed the assessment rubric in their Brightspace courses. To close the gap on student proficiency related to digital literacy, we will encourage instructors to include online source acquisition and evaluation, citation mechanics, formatting, document design, and genre conventions in their teaching approach. We will guide instructors in assessing digital literacy in traditional research papers versus in multimodal research projects. For example, if an instructor requires students to produce a traditional research paper, then they can assess students’ accuracy in formatting the paper and citing sources; they can also assess students’ proficiency in including a range of credible online sources. If an instructor allows for multimodal research projects, we will encourage them to assess students’ understanding of genre conventions and application of citation practices common within that genre and medium. We noticed that not all instructors felt confident in what to assess with this particular criterion, so we will provide guidance through a norming session and follow-up discussion. |
| Type of Change | |
| Change in Assessment Approach or Tools? | Yes, these changes will require a change in our assessment approach because we are moving from a small pool of faculty participation to broad faculty participation. Doing so necessitates more transparent communication about the importance of assessment, along with how assessment should be completed. We will initiate regular communication with our department members about what courses and activities will be assessed, as well as what tools we will use to conduct the assessment. We will also organize a “norming” session at a future English department meeting so that we can better train faculty in using the assessment rubric to evaluate student proficiency in each category. Finally, we will ensure that faculty across course sections use a consistent selection process and naming structure for the activity assessed to ensure greater validity and reliability in our assessment. |
| What data motivated the change? | The motivation for these changes comes down to the number of students who participated in the assessment compared to the number of students enrolled in these courses. There is a significant discrepancy between these numbers, and we posit that greater participation would yield more relevant data. Beyond that, we noticed inconsistent activities were being assessed, so we want to provide faculty with more transparent guidance on how to complete the assessment, along with what activity to assess. Although more than 80% of our students demonstrated proficiency in this area, we strive to close the proficiency gap as much as possible. We heard from several faculty that they were not clear on what to assess with this criterion, so we will work to develop more guidance through a norming session, examples, and follow-up discussions. |
| Hypothesis about the effect the change will have? | We posit that these changes will provide us with more consistent, relevant, and accurate data about student proficiency in the areas assessed. We anticipate that increased participation will affect our proficiency rates, but we are not sure how. We also expect that our results will show greater consistency between students who demonstrate proficiency in this area and students who pass the class. We expect that the proposed changes, namely more consistency among faculty in using the rubric to score projects, will yield more reliable results. We anticipate that students who fail to demonstrate proficiency in one or more of the GenEd assessment criteria will be more likely to fail the course (as opposed to data showing that students who failed the assessment passed the course). Finally, we anticipate that providing more robust guidance for faculty will increase participation in our assessment. |
| Learning Outcome (or Gen Ed Essential Skill) | Description of Assessment Tool | Population or Courses Assessed |
|---|---|---|
| 2b. Critical Thinking: Evidence Acquisition | Faculty will use students' final research projects to conduct the assessment in this round. Faculty will use the same rubric we developed for the round 1 assessment to score these activities. | English 1110, English 1110P |
| 2c. Critical Thinking: Evidence Evaluation | Faculty will use students' final research projects to conduct the assessment in this round. Faculty will use the same rubric we developed for the round 1 assessment to score these activities. | English 1110, English 1110P |
| 3b. Information and Digital Literacy: Digital Literacy | Faculty will use students' final research projects to conduct the assessment in this round. Faculty will use the same rubric we developed for the round 1 assessment to score these activities. | English 1110, English 1110P |
| Learning Outcome (or Gen Ed Essential Skill) | Summary of Second Round Results | Interpretation of Results, Pre- and Post-Change | Follow-up questions, possible next steps |
|---|---|---|---|
| 2b. Critical Thinking: Evidence Acquisition | For English 1110: 105 students did not demonstrate proficiency (14.8% of participating students) while 604 students achieved proficiency (85.1% of participating students). For English 1110P: 17 students did not demonstrate proficiency (47.2% of participating students) while 19 students achieved proficiency (52.7% of participating students). | Because there was no consistency in naming the activity, it is impossible to determine whether genre and medium affected students' ability to demonstrate acquisition of skills. The overall data for round 2 reveals that participating students are succeeding in demonstrating competency in evidence acquisition, but our English 1110P students seem to be struggling more than our English 1110 students to achieve proficiency. | How can we design an assessment tool that yields more accurate and consistent results? What activities would yield the most reliable assessment results? How can we motivate participation among faculty to ensure all courses are assessed? What additional support and guidance can we provide to encourage faculty participation? In our next assessment cycle, should we assess these same courses, or should we select other GenEd courses? Possible next steps: Identify a cohort of committed faculty who will teach English 1110P to shape pedagogical approaches, best practices, and course curricula and to participate in assessment. Continue to narrow achievement gaps in our assessment by examining our teaching approaches to evidence acquisition, evidence evaluation, and digital literacy. Our department just began requiring faculty to use a Template Shell for our online English 1110 and English 1110P courses, which we hope will result in less variability in our course curricula and increase student proficiency. |
| 2c. Critical Thinking: Evidence Evaluation | For English 1110: 105 students did not demonstrate proficiency (14.8% of participating students) while 604 students achieved proficiency (85.1% of participating students). For English 1110P: 17 students did not demonstrate proficiency (47.2% of participating students) while 19 students achieved proficiency (52.7% of participating students). | Because there was no consistency in naming the activity, it is impossible to determine whether genre and medium affected students' ability to demonstrate acquisition of skills. The overall data for round 2 reveals that participating students are succeeding in demonstrating competency in evidence evaluation, but our English 1110P students seem to be struggling more than our English 1110 students to achieve proficiency. | How can we design an assessment tool that yields more accurate and consistent results? What activities would yield the most reliable assessment results? How can we motivate participation among faculty to ensure all courses are assessed? What additional support and guidance can we provide to encourage faculty participation? In our next assessment cycle, should we assess these same courses, or should we select other GenEd courses? Possible next steps: Identify a cohort of committed faculty who will teach English 1110P to shape pedagogical approaches, best practices, and course curricula and to participate in assessment. Continue to narrow achievement gaps in our assessment by examining our teaching approaches to evidence acquisition, evidence evaluation, and digital literacy. Our department began requiring faculty to use a Template Shell for our online English 1110 and English 1110P courses, which we hope will result in less variability in our course curricula and increase student proficiency. |
| 3b. Information and Digital Literacy: Digital Literacy | For English 1110: 140 students did not demonstrate proficiency (19.4% of participating students) while 580 students achieved proficiency (80.5% of participating students). For English 1110P: 17 students did not demonstrate proficiency (47.2% of participating students) while 19 students achieved proficiency (52.7% of participating students). | Because there was no consistency in naming the activity, it is impossible to determine whether genre and medium affected students' ability to demonstrate acquisition of skills. The overall data for round 2 reveals that participating students are succeeding in demonstrating competency in digital literacy, but our English 1110P students seem to be struggling more than our English 1110 students to achieve proficiency. In addition, we noticed more English 1110 students included in this data set: a total of 720 students instead of 709 for the other two criteria assessed. We are not sure why additional students were included in data for this criterion. | Why does the data set for this criterion include 11 additional students not included in data for the other criteria we assessed? How can we design an assessment tool that yields more accurate and consistent results? What activities would yield the most reliable assessment results? In our next assessment cycle, should we assess these same courses, or should we select other GenEd courses? Possible next steps: Identify or create an assessment activity that can be used across course sections. Continue to narrow achievement gaps in our assessment by examining our teaching approaches to evidence acquisition, evidence evaluation, and digital literacy. Our department just began requiring faculty to use a Template Shell for our online English 1110 and English 1110P courses, which we hope will result in less variability in our course curricula and increase student proficiency. |
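The proficiency percentages in the tables above are simple ratios of proficient students to total participants. As an illustrative aid only, the minimal Python sketch below (not part of our assessment tooling) recomputes those rates from the Round 2 ENGL 1110 counts reported above and makes the 709 versus 720 participant discrepancy easy to see.

```python
# Illustrative cross-check of the Round 2 ENGL 1110 counts reported above.
# Counts are copied from the table; each percentage is proficient / total.
# The report appears to truncate decimals, so the last digit may differ
# slightly from the published figures.

round2_engl_1110 = {
    "2b. Evidence Acquisition": {"not_proficient": 105, "proficient": 604},
    "2c. Evidence Evaluation":  {"not_proficient": 105, "proficient": 604},
    "3b. Digital Literacy":     {"not_proficient": 140, "proficient": 580},
}

for skill, counts in round2_engl_1110.items():
    total = counts["not_proficient"] + counts["proficient"]
    rate = 100 * counts["proficient"] / total
    print(f"{skill}: {total} participants, {rate:.1f}% proficient")

# 2b and 2c total 709 participants, while 3b totals 720: the 11-student
# discrepancy flagged in the interpretation column above.
```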
Describe any change in student achievement observed as part of this assessment process, and what led to those changes.
As our template shells were only piloted starting in fall 2025, we have not yet gathered data to determine whether our attempts to bring our curriculum into alignment have affected student achievement. We are hopeful that this alignment will lead to positive results in student achievement, but we will not have that data until the end of the term.
We did notice, however, gains in English 1110 student achievement for evidence acquisition and evidence evaluation between rounds 1 and 2, along with slight decreases in student achievement in digital literacy.
We also noticed that our pool of participation for English 1110P dropped between rounds 1 and 2, which resulted in lower student achievement rates across the board.
This suggests that a broader pool of participation among faculty and students leads to more robust and accurate data sets, so we will strive to cultivate increased interest and participation in our department’s assessment practices.
Describe long-term changes in the program(s) that the assessment process led to, and what motivated those changes?
This assessment process led to the creation and implementation of template shells for our online sections of English 1110 and English 1110P because it pointed to inconsistencies in our curriculum and helped us identify areas where, as a department, we could strengthen our approach to teaching these core skills.
In our redesigned template shells for English 1110, we revised curriculum so that students are presented with a range of credible sources related to topics in writing studies; students gain understanding of the characteristics of reliable sources using model texts. They then use this knowledge, and a more narrowed research focus, to acquire additional sources to incorporate into their research process.
In addition, our process inspired our Associate Dean, Chair, and Writing Program Director to work together in designing more structured training and support for our part-time faculty—the majority of faculty who teach these classes—which allows department leaders and instructors to discuss curriculum, best practices, and teaching strategies.
Finally, we learned that we could gather more meaningful, consistent, and reliable data by requiring faculty to participate in assessment. This not only supports us in building a culture of assessment within our department but also ensures that we gain a more accurate picture of our courses, students, and programs.
What did you learn about the teaching and learning of "Practicing Community" in your programs?
The ability to acquire and evaluate sources of information to determine their credibility, reliability, and trustworthiness is critical in today's world, enabling students to identify misinformation and propaganda and to make informed decisions using verifiable information. Simultaneously, digital literacy—the ability to analyze and produce texts in digital environments—is an important 21st century skill that allows students to engage effectively in online communities and, hopefully, to become active members in creating and preserving a more just and democratic society.
We learned that our English 1110 students are doing well in these areas, and, for the most part, they are proficient in being able to gather and evaluate evidence and to produce rhetorically effective digital texts.
We also learned that with increased communication and guidance around our assessment, we see improved engagement and participation among faculty, which is important for us as we consider what practicing community means for our department.
Describe any external factors affecting the program or affecting assessment of the program.
Historically, our department hasn't had a strong culture or understanding of assessment, and our assessment practices have been limited by the bandwidth of individuals. Conducting a reliable and meaningful assessment requires collaboration, guidance, and practice, all of which require substantial time and effort. It also requires consistent coaching and transparent communication from those at the helm of CNM's assessment practices, something our department has struggled with in the past.
To implement truly meaningful and widely accessible assessments, more time needs to be made available. Unfortunately, many of our faculty are part-time, contingent labor with multiple jobs and responsibilities, and many of our full-time faculty are working exhausting hours just to keep up with their demanding teaching loads, which require hours of grading each week. These factors challenge us to devote the time needed for assessment, but we remain committed and are optimistic about our next assessment cycle.