Test Scoring

Hours: Monday-Friday, 7:45 a.m.-4:30 p.m.
Contact: UW-Test Scoring, opscan@uww.edu, x4864 or 1149
Drop-off Location: Technology Support Center Helpdesk, Andersen Library 2000
Helpdesk Hours: Mon.-Thurs., 8 a.m. to 9 p.m.; Fri., 8 a.m. to 4:30 p.m.


Deadline: For those planning to request teacher evaluation forms, prep requests for all courses (3-week, 8-week, or 16-week) are due two weeks prior to the final day of class.

The scanning of true/false exams, multiple-choice exams, and surveys on machine-readable answer sheets is a service provided by iCIT. Results are sent to the instructor's campus email account. Note: iCIT will not print results, as they can be printed by the instructor; if you need assistance printing, contact the Technology Support Center Helpdesk at x4357. Instructors may also request that individual results be sent to student email accounts. Please allow a 24-hour turnaround time for exam scanning services.

Test Request Forms and answer sheets are available in departmental offices, or you may use the following link:

Instructions

Instructors requesting the scanning service must submit the following:

  1. Test Request Form
    Please submit a completed form with each scanning request.
  2. Answer Key
    Answer keys must be completed using a No. 2 pencil.
    The Name Field and Identification Field must be completed:
    • Last Name: print the instructor's last name and the word "key".
    • Social Security Number: print the course and section number with no blanks, left justified.
  3. Weighting Key (if applicable)
  4. Answer Sheets
    Answer sheets must be completed using a No. 2 pencil.
  5. Student Information
    The Name Field and Identification Field must be completed:
    • Last Name: print the first eight letters of the last name.
    • First Name: print the first name.
    • Social Security Number: use the UW-Whitewater student ID number. (DO NOT USE A SOCIAL SECURITY NUMBER)
Note: Students must indicate their student ID# on the answer sheet in order for their scores to be sent via email. The instructor's email address must be indicated on the request sheet for scores to be emailed to students.
Results
Emailed results are available as four text files or as four worksheets in a Microsoft Excel workbook.

Frequently Asked Questions

I have a question concerning the 24-hour turnaround. I heard one professor got the results by email before he got back to his office! How come?

It is entirely possible that on a very slow day you may get the results that fast. We process all tests as quickly as possible, but the workload is extremely variable. To be fair to everyone, we process requests in first-come, first-served order. Please allow for a 24-hour turnaround, and keep in mind that the 24 hours refer to working time (roughly 8 working hours per business day), not clock time. If you have any special needs, be sure to let us know.

I am concerned about errors in test scoring affecting my academic records. If for example I put down the wrong ID# and it is someone else's will that affect my records?

The test scoring process in no way updates any student records. Test scoring is a self-contained system that does not update or change any other data. We simply generate the scores for that particular test and provide them to the instructor. The instructor or the department keeps the running tally for the final grade and submits that to records. If you have any concerns, contact your instructor or the department office.

Where can I drop off the exams after work?

You can drop off your tests at the Technology Support Center Helpdesk in Andersen Library Room 2000 until 6:00 p.m. Monday through Thursday, and until 4:30 p.m. on Fridays during the school year.

Whenever the Technology Support Center Helpdesk is closed, including weekends, arrangements have been made with the Library circulation desk for after-hours drop-offs. You are encouraged to call x4864 to let us know about an after-hours drop-off so we can be sure it is picked up and processed the next business day.

On my test I have four bonus questions, and I give credit only if the student gets the question right. How should I handle this?

Another instructor wanted a way to handle something like this. What we came up with was to run the bonus questions as a separate test. Then, with the two output files in Excel, it was easy to add the bonus scores to the test scores, as in the sketch below.
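Here is a minimal sketch of that merge in Python, assuming the pandas library is available. The file names and the "Student ID"/"Score" column headers are hypothetical; adjust them to match the Excel reports you actually receive from Test Scoring.

```python
import pandas as pd

# Hypothetical file names for the two Excel reports from Test Scoring.
main = pd.read_excel("main_test_scores.xlsx")
bonus = pd.read_excel("bonus_scores.xlsx")

# Join on the student ID so each student's bonus points line up with the
# right main-test score; students with no bonus sheet get 0 bonus points.
combined = main.merge(bonus, on="Student ID", how="left",
                      suffixes=("_main", "_bonus"))
combined["Score_bonus"] = combined["Score_bonus"].fillna(0)
combined["Total"] = combined["Score_main"] + combined["Score_bonus"]

combined.to_excel("combined_scores.xlsx", index=False)
```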

I have already given the exam, and decided that one of the questions should be dropped. Is this going to be a problem?

No. Just leave that question blank on the answer key and make a note in the special instructions that the question or questions are to be omitted so we know that the omission is intentional. If you decided to drop a question after you've given the tests back to the students, we will usually need to rerun the results. You can request this by calling x4864. We usually do not need to rescan the tests.

I have already given the exam, and decided that one of the questions has two correct answers. What should I do?

Include that information in the special instructions or call Test Scoring at x4864. The test will be run twice with the two different answers. You will get two reports, preferably in Excel format, and you will need to take the better of the two scores for each student, as in the sketch below. If more than two answers are correct, or if any answer at all is to count as correct, you will probably need to hand-score that question. Having the output in Excel will simplify the process.
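Here is a minimal sketch of taking the better of the two runs in Python, again assuming pandas; the file names and the "Student ID"/"Score" column headers are hypothetical and should be adjusted to match the actual reports.

```python
import pandas as pd

# Hypothetical file names: one run per accepted answer.
run_a = pd.read_excel("run_answer_a.xlsx")
run_b = pd.read_excel("run_answer_b.xlsx")

# Match students across the two runs, then keep the higher score.
merged = run_a.merge(run_b, on="Student ID", suffixes=("_a", "_b"))
merged["Final Score"] = merged[["Score_a", "Score_b"]].max(axis=1)

merged.to_excel("final_scores.xlsx", index=False)
```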

Why are we now dropping off tests at the Technology Support Center Helpdesk?

We moved the drop-off location to the Technology Support Center Helpdesk to improve customer service; the Helpdesk area is more publicly accessible. We appreciate your understanding and cooperation as we make this transition. All test scoring operations and the campus mail address for Test Scoring remain in McGraw Hall 208; only the drop-off location has changed.

I would like a hard copy print of my test results. How can I get that?

We no longer provide hard copy printouts. However, you may request Notepad file output on the request form, and you will receive four text files as attachments in your email. You can print these yourself; they will be exactly like the standard reports we provided in the past. If you have any trouble printing them, check that the print setup is set to "landscape" and the font size to 8. Contact Test Scoring at x4864 or the Technology Support Center Helpdesk at x4357 if you need help with this.

Could you explain how to use the weight key?

The weight key is used to vary the value of each question from 1 to 5 points. A typical situation where you might use it is a test with both true/false and multiple-choice questions, where you want the former worth 2 points and the latter worth 5. On the weight key, you would fill in B for each true/false question and E for each multiple-choice question (A=1, B=2, C=3, D=4, E=5). You can vary the value of any question as you choose, and all five values could be used on one test, though students may appreciate it if you keep the scheme simple. A worked example follows this answer. Test Scoring also has sample tests on file showing how weight keys work and how they affect scores; call x4864 if you would like to see one.
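Here is a small self-contained Python sketch of how a weight key affects scoring. The letter-to-point mapping (A=1 through E=5) comes from the description above; the example answer key, weight key, and student responses are hypothetical.

```python
# Weight-key letters map to point values, per the description above.
WEIGHTS = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

answer_key = ["A", "C", "B", "D"]   # hypothetical correct answers
weight_key = ["B", "E", "B", "E"]   # T/F worth 2 points, multiple choice worth 5

def score(responses):
    """Sum the point value of every question the student answered correctly."""
    return sum(WEIGHTS[w]
               for resp, correct, w in zip(responses, answer_key, weight_key)
               if resp == correct)

# A student who misses only question 3 earns 2 + 5 + 0 + 5 = 12
# out of a possible 14 points.
print(score(["A", "C", "A", "D"]))  # prints 12
```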

It's Thursday, and I don't need the results until the following Tuesday. Should I let you know?

Yes. That is especially appreciated during busy times such as finals, or when we are short-handed due to illness or vacation.

Can you hold off emailing to the students until I have a chance to check the results?

Yes, Test Scoring can hold the results and email them up to one week later. Just be sure to note that in the special instructions on the request sheet, and call or email Test Scoring when you are ready for the results to be sent to the students. Normally, this will not be possible after one week.

I don't care for all the analysis in the reports. What can I do to get just what I want?

If you have been getting the output in Notepad format, try asking for it in Excel; then you can highlight whatever you want and either print that or save it into a new Excel file. Call x4864 if you need tips on working with Excel for your needs. The Technology Support Center Helpdesk can also help you with technical questions.

One of the questions is being omitted, but I would like to see how the students responded to this question.

No problem. Just leave that question blank on the answer key; you will still get the analysis of how the question was answered. There have been several cases where a professor wanted to use a question on the exam to take a poll without having it scored.

The students would like to get their results by email. What should they do?

Students will receive their scores by email only if the professor requests it and the students fill in their seven-digit student ID numbers. A student who does not want the results emailed for some reason can leave the ID field blank.

What is the new policy on returning tests by campus mail?

Tests are now being sent back by campus mail immediately after processing. If an instructor needs to have the scan sheets back sooner, she may indicate that on the request form and arrangements will be made.

I am concerned about the use of Social Security numbers on the tests.

Since the new student IDs were introduced in the fall of 1999, it is no longer necessary to use Social Security numbers (SSNs) on any tests. Any instructions that still say to use the SSN are outdated and need to be corrected. Please disregard the words "Social Security Number" in the identification box on the answer sheets and use only the student ID#, if requested or desired. In short, you should NEVER use an SSN on tests.

Could you explain how I am to interpret the item analysis that comes with the test result report?

For a basic explanation of item analysis and statistics concepts, see the Statistics Explained page.


Explanation of Item Statistics

The following description of the test item statistics produced by this program is rather general in nature. For more specific information concerning any particular statistic, consult any good statistics or tests and measurements textbook.

The column headed Item gives the question number. The columns headed A, B, C, D, E, and Omitted indicate the percentage and number of students who responded accordingly. These can be used to determine the pattern of responses for multiple-choice items.

The difficulty index of a test item indicates the proportion of students who responded correctly to the item. For example, if the difficulty of an item were 65, this would indicate that 65 percent of the students answered the item correctly. The higher the difficulty index, the easier the item. A classroom test covering related subject matter should contain items with a fairly wide range of difficulty values. However, items with indices at or below the chance level (25 or lower for an item with 4 alternatives; 20 or lower for an item with 5 alternatives) are undesirable. Equally undesirable are extremely easy items with difficulties approaching 100, as they merely add a constant to the scores. Test reliability and validity will be maximized if most item difficulties are somewhat easier than halfway between the chance level and 100. Under ordinary circumstances, then, a test consisting of items with 4 alternatives should contain many items with difficulties in the 60-85 range, with the remainder scattered between 25 and 100. Tests consisting of items with 2 alternatives (true/false items) should have difficulties between 50 and 100, with a concentration in the 75-90 range.
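For reference, the difficulty index is just the percentage of students answering the item correctly:

$$\text{difficulty} = 100 \times \frac{n_{\text{correct}}}{n_{\text{total}}}$$

so, for example, an item answered correctly by 52 of 80 students has a difficulty of 65.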

The point-biserial correlation coefficient measures the relationship between the score on an item and the score on the test. The value of this statistic ranges between -100 and +100. A high positive value indicates that those who answered the item correctly also received higher scores on the test than those who answered it incorrectly; a high negative value indicates the reverse; and a near-zero value indicates little relationship between the item score and the test score. It is desirable to retain items with high positive correlation coefficients and to eliminate those with near-zero or negative values. As a rough guide, items with negative or near-zero (10 or less) correlations should be eliminated or substantially revised, and those with low positive (10-30) correlations should be studied to determine how they might be improved.
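For reference, a standard form of the point-biserial coefficient (stated here on the usual -1 to +1 scale; the values in the report are this quantity multiplied by 100) is

$$r_{pb} = \frac{\bar{X}_p - \bar{X}_q}{s_X}\,\sqrt{p\,q}$$

where $\bar{X}_p$ and $\bar{X}_q$ are the mean test scores of students who answered the item correctly and incorrectly, $s_X$ is the standard deviation of all test scores, and $p$ and $q$ are the proportions of students answering the item correctly and incorrectly.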

The Kuder-Richardson internal consistency formula number 20 (KR-20) has been used to compute the reliability estimate provided in this analysis. A reliability coefficient of this type gives an indication of the extent to which individuals taking the test again would receive the same scores. Values of the Kuder-Richardson reliability estimate range between 0.000 and 1.000; a value close to +1.000 indicates that the test exhibits a high degree of reliability. Estimates should be interpreted cautiously if large numbers of students are unable to complete the test within the allotted time. For a typical 50-minute classroom examination covering related subject matter, a reliability coefficient of at least .75 is desirable. Reliability can be improved through item revision based upon the item analysis data computed in this program. Lengthening the test (when this is practical) will also increase reliability, particularly in the case of short examinations.
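For reference, the KR-20 formula is

$$KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,q_i}{\sigma_X^2}\right)$$

where $k$ is the number of items, $p_i$ is the proportion of students answering item $i$ correctly, $q_i = 1 - p_i$, and $\sigma_X^2$ is the variance of the total test scores. The formula also shows why lengthening the test helps: with comparable items, the total-score variance typically grows faster than the sum of the item variances, pushing the coefficient toward 1.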

The standard error of measurement is an estimate of the probable extent of error in test scores. It is interpreted in the same manner as a standard deviation. A standard error of measurement of 3.500, for example, indicates that for any particular test score, the odds are 2 to 1 that the student's true score (his or her average score on several similar tests) would not deviate from the obtained score by more than 3.500 points. The more reliable and error-free the test, the smaller the standard error of measurement. This direct application to scores makes the standard error of measurement especially useful when evaluating differences among students or assigning grades.
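For reference, the standard error of measurement is computed from the test's standard deviation and its reliability estimate:

$$SEM = s_X \sqrt{1 - r_{XX}}$$

where $s_X$ is the standard deviation of the test scores and $r_{XX}$ is the reliability coefficient (here, the KR-20 estimate described above). A perfectly reliable test ($r_{XX} = 1$) would have a standard error of measurement of zero.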