Is there a way to make up for a student's lack of time when they have the attitude and ability?

A professor wrote in a newsletter: “However, many students are taking on too much, doing more than one ‘full-time’ thing: paid work, university courses, and other responsibilities. Many students who have the attitude and ability to succeed don’t have the time and energy needed to achieve their best.”

The question that followed was whether there was a way to make up for the lack of time the student has.

      • I know some people who favor mastery grading might not agree, but I think giving endless quizzes is not fair to such students.
      • But seriously, keep the graded components low stakes (so they do not discourage) and simple, e.g., three midterms, a final exam, and weekly online HW that is formative (give multiple tries) but low in time commitment and challenging enough to keep that distributed practice going. Do not require attendance. After all, we are assessing learning, right?
      • How much students are working should not affect how they get graded – how you support students, though, surely is important.
      • I offer free one-on-one tutoring of one hour a week (not many takers – less than 10% of the class).
      • Office hours are also held outside of the 8-5 time and can be face to face or online, as per their preference.
      • I also answer course content emails quickly.
      • Following Universal Design for Learning (UDL), a framework to accommodate diverse learners, I have let students in some courses choose between a final examination and a culminating project. The former takes less time to prepare for but can be hard to ace; the latter takes more time but is easier to ace.

Multiple Chance Testing as a Gateway to Standards-Based Grading 



Autar Kaw

May 6, 2024

Traditional grading may not reflect student learning, which is a common concern. Imagine a large-enrollment class that assesses learning via three midterm tests and a final exam, each weighted at 25% of the semester grade. If a student scores 46%, 90%, 90%, and 90% on the four assessments, they will end the semester with an average of 79% and a grade of C. What are the alternatives? Some speak of using standards-based grading (SBG). So, what is SBG? 

What is SBG? 

Standards-based grading (SBG) is an alternative method of assessing students. It emphasizes evaluating their mastery of specific learning objectives or standards rather than using points or percentages for assignments and exams. In SBG, students receive feedback on their progress toward each standard and are given multiple opportunities to demonstrate their proficiency. The goal of SBG is to foster a growth mindset where students view learning as a process of improvement rather than a competition for grades. 

My reservations about adopting SBG 

I considered using SBG in my Numerical Methods class, a required junior-level course in Mechanical Engineering at the University of South Florida. The class typically enrolls 60-120 students per semester. I started watching YouTube videos, reading blogs, and analyzing journal papers on SBG. As I reviewed the many ways instructors use SBG, the idea of implementing it in my courses seemed overwhelming. Some had 30-50 standards in a course, and keeping track of each standard for every student would burden students as well as the instructor.  

Some instructors were using short quizzes for each standard. Others asked students to master pre-class work, take in-class quizzes, do online homework, and complete short projects. But what happens when students do not master a standard the first time? They can show proficiency by retaking a quiz in specific quiz sessions held during class time, office hours, final exam sessions, etc.  

How many chances does each student get to show proficiency in a standard? How does the instructor have quizzes ready to check any standard a student asks for, or is it the same quiz as the one given the first time? Does the latest proficiency level replace all previous ones, or is the highest proficiency used? As per SBG philosophy, it should be the latest score, but many choose the highest. Another tenet of SBG is equity, but what about the student who cannot come outside of class time, such as to office hours, to show proficiency because they are taking other classes, working off-campus, or caring for a loved one or children? What happens to the class time used for re-quizzing? Does it lessen the content covered in the course and reduce opportunities for active learning and student engagement? 

Also, since we still use traditional letter grades on transcripts, proficiency in meeting standards must be converted to letter grades during and at the end of the semester. Our students expect to know where they stand during the semester via a total score or letter grade. However, the grade would be complicated for the instructor to calculate and for the student to follow during the semester, as the grading system involves some combination of having shown a certain level of proficiency in each assessment category, such as pre-class work, quizzes on mandatory and secondary standards, online homework, and projects. Moreover, there is no running grade during the semester, as one must meet a certain number of standards to earn a particular grade, and enough standards have not been covered until toward the end of the semester. 

However, just because a system is imperfect does not mean one should abandon SBG. Can we adopt a system that would maintain the essence of SBG but be less daunting for a large class and less challenging for students and the instructor? 

Standards-Based Testing with a Twist 

I used a subset of SBG called standards-based testing (SBT), and within that framework, I used multiple-chance testing (MCT) on the midterms and online quizzes with some twists.  

The traditional grading system in the course comprised learning-management-system (LMS) quizzes (15%), three midterm tests (15% each), projects (10%), a concept inventory (5%), and a final exam (25%). We used MCT for the LMS quizzes and midterm tests, which together make up 60% of the grade. In addition, the final exam, a standalone grading component, also counts as another chance test. 
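As a sketch, the weighting above can be written out as a small calculation (the component names and the sample scores are mine, for illustration only):

```python
# Semester grade weights from the course, as fractions of the final grade
WEIGHTS = {
    "lms_quizzes": 0.15,
    "midterm_1": 0.15,
    "midterm_2": 0.15,
    "midterm_3": 0.15,
    "projects": 0.10,
    "concept_inventory": 0.05,
    "final_exam": 0.25,
}

def semester_grade(scores):
    """Weighted semester grade from per-component percentages (0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical student scoring 80% on every component
print(semester_grade({k: 80 for k in WEIGHTS}))  # close to 80.0
```

Note that the LMS quizzes and midterms (the MCT components) carry 0.15 + 3 × 0.15 = 0.60 of the grade, matching the 60% stated above.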

The course was divided into eight standards, each a chapter. This division clearly delineated the standard for the student.  

There were 30 LMS quizzes in the semester. Each quiz had three questions, two multiple-choice and one algorithmic, chosen from question banks I have developed for the course. The students could make as many attempts as they wished before the weekly deadline, and the LMS automatically reported the highest score. If they wanted to attempt a quiz again after the deadline, they could do so until the last day of class and recoup half of the missed grade, e.g., if they scored 6/10 before the deadline and 9/10 after the deadline, their score would be 6+(9-6)/2=7.5/10. If their score after the deadline was lower, their grade on the quiz stayed unchanged. 

The semester had three midterm tests, which checked 3, 3, and 2 standards, respectively. Checking for multiple standards in a midterm maintains the interleaving effect, where students must figure out which standard a question belongs to. Higher-order thinking exercises can also be given where one standard is a prerequisite for another. Each standard was graded out of 20 or 40 points depending on the length of the chapter. For example, Standard 1 is a 2-week-long chapter and was graded out of 40, while Standard 2 is a 1-week-long chapter and was graded out of 20. The score for each standard was reported on the graded test. Triple feedback was given to the student on each question asked – the wrong answer was pointed out, how to get to the correct answer was shown, and, more importantly, references were given to examples and problems the student could attempt to review the material. Students were encouraged to come to office hours for face-to-face or online help. 

A second-chance test was given two to three weeks after each of the three midterm tests. The student could retake any or all of the standards of the midterm test they had just taken. For example, midterm test one covered three standards. The retest was given for the three standards as separate tests of 25 minutes each (e.g., individual tests were given for Standard 1 from 11:00 to 11:25 AM, a 5-minute break, Standard 2 from 11:30 to 11:55 AM, a 5-minute break, and Standard 3 from 12 noon to 12:25 PM). A late policy was implemented: once the first student left a retest of a standard, no student arriving afterward could begin that retest. This policy was adopted to maintain the academic integrity of the retest, but we never needed to use it. We also posted the retests on the LMS so that students would not show up just to get a copy of the retest.  

The student could recoup only half of the missed points, e.g., if they scored 24/40 on the midterm test on Standard 1 and 34/40 on the retest, their score would be 24+(34-24)/2=29/40. If their score on the retest was lower, they were not penalized, and their grade stayed unchanged. If a retest for a standard was taken, the updated score was also limited to 90%. This policy was adopted to discourage high-performing students from taking the retest just to gain a few more points, as their time would be better spent learning new course topics. Although it was not my intention, this policy helped reduce grading effort. Only 60% of the possible retests were taken in the course. 
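The half-credit recoup rule used for both the quizzes and the retests can be sketched as one function (the function name and the `cap` parameter are mine; the cap applies only to midterm retests, per the policy above):

```python
def updated_score(original, retake, max_points, cap=None):
    """Half-credit recoup rule: keep the original score if the retake is
    lower; otherwise recover half of the missed points. Optionally cap
    the updated score at a fraction of max_points (e.g., 0.9 for the
    90% limit on midterm retests)."""
    if retake <= original:
        return original  # a lower retake never penalizes the student
    new = original + (retake - original) / 2
    if cap is not None:
        new = min(new, cap * max_points)
    return new

print(updated_score(6, 9, 10))             # 7.5  (quiz example from the text)
print(updated_score(24, 34, 40, cap=0.9))  # 29.0 (midterm example from the text)
```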

The final exam was a standalone category in the grade but also a proxy for a third-chance test for all eight standards. Questions from the final exam were allocated to each standard, and the scores were used as third-chance test scores. The scoring update policy was the same as for the second-chance tests. Some would argue that I should have used the final exam session to test for standards that the students wished to get retested in, but the effect of the final exam on long-term retention must not be ignored.  

Since we did not have a simple way to report updated grades to the students, we made a student-friendly Excel spreadsheet where students could enter their grades for all the quizzes and tests they had taken. The spreadsheet calculated the grade with and without the retests. The grade without the retests matched the overall grade reported on the LMS, so students who did not want to use the Excel spreadsheet still knew their minimum grade at any time in the semester. To calculate the final grade, one needs to pull the grades from the LMS and use simple spreadsheet functions, but this process can be automated for later semesters. 

Findings when comparing the course with and without MCT 

We compared the student performance and affective outcomes for the course with and without MCT. The findings reported in a journal paper indicated that implementing MCT resulted in a higher percentage of students achieving a high final exam score (15% vs. 3%), a larger proportion of ‘A’ grades (36% vs. 27%), and a more positive classroom environment in terms of participation, unity, and satisfaction. During focus groups, students appreciated the enhanced learning experience, the opportunity for retakes, and the reduced stress associated with standards-based testing. A few mentioned the issue of not knowing their ongoing grade in the course. The journal article cited below provides more details of the study’s results.  

My questions to the reader are: Would you use multiple-chance testing? How would you implement it differently? How can you maximize the advantages of MCTs and minimize the drawbacks for students and instructors? Do you have a better way of reporting grades in LMS so that the current overall grade is reflected just in time? 

References: Autar Kaw and Renee Clark, “Effects of Standards-Based Testing via Multiple-Chance Testing on Cognitive and Affective Outcomes in an Engineering Course,” International Journal of Engineering Education, Vol. 40, No. 2, pp. 303–321, 2024.

Should we get into a tizzy about students meeting deadlines?

April 7, 2024

In a recent paper on extending deadlines for student assignments, researchers point out that it is not an issue we need to sweat over.

“This study uses evidence to debunk common misconceptions about assignment extensions.”
“The extension without penalty system was used by 78% of the students, but half of them only used it once.”

My two cents: There is always a happy medium between being strict and lenient. Extending deadlines for everyone is fair, not just for those who ask or who have a reasonable excuse. Many extroverts get ahead simply because they ask: are we rewarding behavior or learning? One should also give a fixed number of unexcused deadline extensions so that students are not forced to expose private issues.

In an LMS, one can set a deadline and then an “open until” date. The two can act as the deadline and the extended deadline, respectively. I did this for a few assignments in a course many years ago, and the “open until” date simply became the deadline. Students catch on fast, and it made no difference: less than 10% of the students submitted by the original deadline. The extended deadline bugs students, though, as the “open until” date does not show up on their calendar, and they must keep track of deadlines manually – oh, the travesty.

Effect of Multiple Chance Testing on Student Performance and Perception

April 6, 2024

We just published an article in the International Journal of Engineering Education on multiple chance testing in an engineering course.

A. Kaw, R. Clark, “Effects of Standards-Based Testing via Multiple-Chance Testing on Cognitive and Affective Outcomes in an Engineering Course,” International Journal of Engineering Education, Vol. 40, (2), 2024, pp. 303-321. 

The article is behind a paywall, but you can ask me to send a preprint.

In this article, we explore the concept of standards-based grading and its potential benefits for student learning. We also discuss my reservations about adopting standards-based grading in a large-enrollment class and propose an alternative approach, standards-based testing with multiple-chance testing. The findings of our study indicate that implementing multiple-chance testing resulted in higher student performance, more ‘A’ grades, and a more positive classroom environment. Students appreciated the enhanced learning experience, the opportunity for retakes, and the reduced stress associated with standards-based testing. However, some students mentioned the issue of not knowing their ongoing overall grade in the course. I believe that this article will be of interest to readers, as it provides a practical approach to implementing standards-based grading principles in large-enrollment classes. It also raises important questions about using multiple-chance testing and its potential advantages and drawbacks for students and instructors.

Abstract: Multiple-chance testing was used to conduct standards-based testing in a blended-format numerical methods course for engineering undergraduates. The process involved giving multiple chances on tests and post-class learning management system quizzes. The effectiveness of standards-based testing was evaluated through various forms of assessment, including an analysis of cognitive and affective outcomes, and compared to a blended classroom that did not use standards-based testing. Based on a two-part final exam, a concept inventory, final course grades, a classroom environment inventory, and focus groups, the results showed that standards-based testing had overall positive effects. Standards-based testing was associated with a more significant percentage of students (15% vs. 3%) earning a high final exam score, a higher proportion of A grades (36% vs. 27%), and a better classroom environment on dimensions of involvement, cohesiveness, and satisfaction. Focus group discussions revealed that students appreciated the benefits of enhanced learning, second chances, and reduced stress with standards-based testing. The study also included an analysis of the impact of standards-based testing on underrepresented minorities, Pell Grant recipients (low socioeconomic groups), and low-GPA students, as well as an examination of test-retaking behaviors. The methodology and comprehensive results of the study are presented in this paper.

How do I solve a first-order ODE numerically in MATLAB?

The other day, a student came to ask me for help in solving a first-order ordinary differential equation using the ode45 routine of MATLAB. To use ode45, one needs to be familiar with how MATLAB expects the inputs.

Solving a first-order ODE with ode45 is straightforward.

The ODE is given as
3*dy/dx+7*y^1.2=5*x^1.1, y(2)=13.
Find the value of y(19).

Program without comments

clear all
dydx=@(x,y) (5*x^1.1-7*y^1.2)/3;
[xx,yy]=ode45(dydx,[2,19],13);
y19=yy(end)


Program with comments

clear all
% Assume that you are given a first-order differential equation
% 3*dy/dx+7*y^1.2=5*x^1.1, y(2)=13.
% Find the value of y(19).
% How would you solve it by using the ode45 MATLAB function?
% First you would need to rewrite the ODE in the dy/dx=f(x,y) form:
% dy/dx=(5*x^1.1-7*y^1.2)/3, y(2)=13
% Define a variable of your choice and write what dy/dx is.
% @(x,y) means these are the independent and dependent variables in the ODE
dydx=@(x,y) (5*x^1.1-7*y^1.2)/3;
% Look at the ode45 help in MATLAB.
% The left-hand side yy vector is where you want the values stored.
% xx is the vector of points that will be chosen by MATLAB, not you, at
% which it will provide you the values of the yy vector.
% Inputs to ode45 are the following.
% 1) dydx is the ODE in the dy/dx=f(x,y) form defined above.
% 2) [2,19] is the span of xx values. You can observe x=2 is where the
% initial condition is given and x=19 is the value at which you are
% seeking the y value. These two inputs can be variables too.
% 3) The last input is the value of the initial condition given at x=2.
% This input can be a variable too.
[xx,yy]=ode45(dydx,[2,19],13);
% Since the last entry of the yy vector is where the xx span ends, and
% that is where you want to find the value of y:
y19=yy(end)


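For readers who prefer Python, a rough equivalent (my sketch, not part of the original post) uses SciPy's solve_ivp in the same way: rewrite the ODE as dy/dx = f(x, y), give the x span and the initial condition, and read off the last entry of the solution.

```python
from scipy.integrate import solve_ivp

# Rewrite 3*dy/dx + 7*y^1.2 = 5*x^1.1 as dy/dx = f(x, y)
def dydx(x, y):
    return (5 * x**1.1 - 7 * y**1.2) / 3

# Integrate from x = 2 (where y = 13 is given) to x = 19
sol = solve_ivp(dydx, [2, 19], [13], rtol=1e-8, atol=1e-8)
y19 = sol.y[0, -1]  # last entry of the solution is y(19)
print(y19)
```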

Journal paper on use of adaptive learning in flipped classrooms published

August 28, 2023

As part of various instructional approaches, adaptive learning platforms are becoming increasingly popular. In this article, we focus specifically on the use of adaptive learning in personalized, preclass learning for a flipped classroom. In the past, data on student engagement with course content generated by adaptive learning platforms was not easily accessible. However, this data is now proving to be invaluable in gaining a deeper understanding of the learning process and improving it. Our goal is to examine the connection between interactions on adaptive learning platforms and overall student success. We also aim to identify the variables that have the greatest impact on student success. To achieve this, we conducted a comprehensive analysis of adaptive learning platform data gathered from a Numerical Methods course. Our analysis included aggregate statistics, frequency analysis, and Principal Component Analysis. We used this analysis to identify the variables that exhibited the most variability and provided the most information in the data. We then explored naturally occurring clusters of students using the Partitioning Around Medoids clustering approach. We found that overall performance in the course, as measured by the final course grade, is strongly associated with two factors: (1) the behavioral interactions of students with the adaptive platform and (2) their performance on the adaptive learning assessments. We also discovered distinct student clusters that exhibited different behaviors and success in the course. This information can be used to identify students who require more support and to design evidence-based strategies to support these students.

Yalcin, Ali, Autar Kaw, and Renee Clark. “On learning platform metrics as markers for student success in a course.” Computer Applications in Engineering Education (2023).

Asking ChatGPT to look at my student evaluations to give me advice on improving my teaching

I took the student evaluations of my course in Computational Methods taught to juniors in engineering and asked ChatGPT to summarize how I can improve my teaching.

Based on the student evaluations, here are some key points to consider for improving your teaching:

  • Adaptive Lessons: Students found the adaptive lessons to be beneficial as they encouraged independent learning and helped them overcome their fear of failure. Consider keeping or modifying this approach to continue fostering self-directed learning.
  • Course Structure: The course was praised for its structure, but some students felt that it covered too much material. Consider evaluating the course content to determine if any sections can be condensed or removed to enhance learning outcomes.
  • MATLAB Coding: Some students desired more time dedicated to MATLAB coding. Consider incorporating additional coding exercises or providing supplementary resources to further develop students’ programming skills.
  • Pre-Tests and Testing Materials: Students expressed a desire for more comprehensive pre-tests and supplemental testing materials. Consider providing multiple pre-tests and additional practice materials to help students prepare for exams effectively.
  • Real-Life Examples: Students appreciated when the course material was related to real-life examples. Continue incorporating practical applications to enhance student understanding and engagement.
  • Formula Sheet and Exam Format: Students suggested allowing the use of a formula sheet and including more applicable problems in exams rather than focusing solely on memorization. Consider revising exam formats and providing formula sheets to support students’ problem-solving skills.
  • Office Hours and Support: Students appreciated the availability of office hours and additional help sessions. Continue offering these resources to ensure students have ample opportunities to seek clarification and assistance.
  • Flipped Classroom Approach: While some students appreciated the flipped classroom approach, others found it redundant. Consider evaluating the effectiveness of this method and potentially modifying it based on student feedback.
  • Consistency in Interaction: Students mentioned inconsistencies in how you interacted with students inside and outside the class, raising concerns about fairness. Strive to maintain consistency in your interactions with students to ensure a fair and inclusive learning environment.
  • Computer Projects: Some students found the computer projects lengthy and confusing. Consider providing clearer instructions and support for these projects to enhance the learning experience.
  • Student Engagement: Students appreciated your efforts to engage them and create an enjoyable learning environment. Continue employing varied teaching methods, incorporating humor, and making connections with students to foster their interest and participation.
  • Feedback and Improvement: Students appreciated your dedication to improvement and willingness to listen to their opinions. Continue seeking student feedback and implementing changes accordingly to enhance the learning experience.
  • Clarity of Explanation: Some students found your explanations challenging to follow due to speaking too fast. Aim for clarity in your delivery, allowing students to absorb and understand the concepts effectively.
  • Feedback and Availability: Students appreciated your knowledge and availability outside of class. Continue providing clear feedback and being accessible to students, fostering their learning and growth.
  • Cultural Sensitivity: Be mindful of cultural sensitivity and ensure your teaching approach is respectful and unbiased toward all students.

Overall, the evaluations indicate that you are a knowledgeable and caring professor who puts effort into supporting student learning. By addressing the mentioned areas of improvement, you can further enhance the learning experience for your students.

A VBA function for Cohen’s effect size

While many social scientists tend to get excited about reporting p-values of less than 0.05 when comparing two groups, they tend to forget to report the effect size. One can get a low p-value just by having a very large sample size, but it is the effect size that tells you how much practical difference there is between two groups. Although there are simple formulas out there for effect size, it is better to have a formula for two groups of unequal size. I wrote a VBA program for Cohen’s effect size that you can use readily.  You can download it from here.  If you are not familiar with how to enter a VBA program, here is a quick tutorial.  You just have to save the Excel file as a .xlsm file.

An example of usage is =EffectSizeCohen(A23:A78, B24:B67), where A23:A78 has the control group numbers and B24:B67 has the experimental group numbers.

Function EffectSizeCohen(ControlGroup As Range, ExperimentalGroup As Range) As Variant
'This function finds the Cohen's effect size given the numbers from the control
'and experimental group
'ControlGroup: Excel range of numbers for control group, e.g. A120:A230
'ExperimentalGroup: Excel range of numbers for experimental group, e.g. A120:A230
'Cohen's effect size.
'See formula at

'Putting the ranges into arrays. Note that Excel stores them as two-dimensional matrices though
ControlData = ControlGroup.Value
ExperimentalData = ExperimentalGroup.Value

'Number of entries of the two groups
ControlLength = ControlGroup.Count
ExperimentalLength = ExperimentalGroup.Count

'Calculating the average and standard Deviation of control group
ControlSum = 0
For i = 1 To ControlLength
ControlSum = ControlSum + ControlData(i, 1)
Next i
ControlAvg = ControlSum / ControlLength

ControlVar = 0
For i = 1 To ControlLength
ControlVar = ControlVar + (ControlData(i, 1) - ControlAvg) ^ 2
Next i
ControlStd = (ControlVar / (ControlLength - 1)) ^ 0.5

'Calculating the average and standard deviation of experimental group
ExperimentalSum = 0
For i = 1 To ExperimentalLength
ExperimentalSum = ExperimentalSum + ExperimentalData(i, 1)
Next i
ExperimentalAvg = ExperimentalSum / ExperimentalLength
ExperimentalVar = 0
For i = 1 To ExperimentalLength
ExperimentalVar = ExperimentalVar + (ExperimentalData(i, 1) - ExperimentalAvg) ^ 2
Next i
ExperimentalStd = (ExperimentalVar / (ExperimentalLength - 1)) ^ 0.5

'Calculating the Cohen's effect size
'See formula at
StdMean = (((ControlLength - 1) * (ControlStd) ^ 2 + (ExperimentalLength - 1) * (ExperimentalStd) ^ 2) / (ControlLength + ExperimentalLength - 2)) ^ 0.5
EffectSizeCohen = (ExperimentalAvg - ControlAvg) / StdMean
End Function
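To sanity-check the output, here is a small Python equivalent (my own sketch, not part of the original post) of Cohen's d with the pooled sample standard deviation for two groups of possibly unequal size:

```python
from statistics import mean, stdev

def cohen_d(control, experimental):
    """Cohen's effect size d = (mean difference) / (pooled sample SD),
    with the pooled SD weighted by each group's degrees of freedom."""
    n1, n2 = len(control), len(experimental)
    s1, s2 = stdev(control), stdev(experimental)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(experimental) - mean(control)) / pooled

print(cohen_d([1, 2, 3, 4], [3, 4, 5, 6]))  # approximately 1.549
```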

Using Poll Everywhere at the University of South Florida

Introduction: I have been using Poll Everywhere in my Numerical Methods course for a year.  As per their website, “Poll Everywhere is a web-based audience response system that lets speakers embed live activities directly into their presentations. Using a variety of activity types, you can turn a series of slides into an effective, interactive experience. Get to know participants, gauge their knowledge of a specific topic, and capture their valuable feedback at scale.”

I use Poll Everywhere to ask mostly conceptual questions in class, implementing a think-pair active learning strategy. Students first answer a question by themselves (think); then, if the correct answer is chosen by less than 85% of the class, each student pairs up with another student for discussion (pair). The poll is retaken, and the instructor wraps up by discussing the question.
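The re-polling decision is just a threshold check; as a sketch (the function name and the sample counts are mine):

```python
def needs_pair_discussion(correct, total, threshold=0.85):
    """Re-poll after pair discussion when fewer than 85% answered correctly."""
    return correct / total < threshold

print(needs_pair_discussion(48, 60))  # True  (80% correct: pair up and re-poll)
print(needs_pair_discussion(55, 60))  # False (about 92% correct: move on)
```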

In my class, student participation is not recorded in any form, nor is it part of the grade.  This makes for quicker learning and implementation of Poll Everywhere for the instructor, while students learn the content without the pressure of a grade or of being found out.  Do students still participate? Yes. Do all students participate? No.  If you want to learn how to collect participation data and integrate it with the CANVAS Gradebook, click here.

Getting Started at USF: The Getting Started document written specifically for the USF faculty is here.  This document will help you set up the Poll Everywhere account.

Do not forget to change your username to something short, but I advise you not to use a course name or something specific.  Last names are good to use. To change the username, log in to Poll Everywhere, click on Profile, and change the default name.  I changed it to kaw.  When students in my class are participating in a poll, they simply go to a browser, enter the poll URL, and respond to the presented question.

Although you can use Poll Everywhere directly via a web browser, I use it only through PowerPoint (PPT).  This blog is thus limited to the PPT implementation.

To use it in a PPT, you have to download the Poll Everywhere add-in. You can download it here for Windows and here for Mac.  Once you execute the downloaded file, Poll Everywhere should appear as an option in the top ribbon of the PPT menu as shown in the figure below.  If it does not show up in the menu, go to File>Options>Add-ins and add it.  Microsoft has a step-by-step guide for adding an add-in.

Inserting a question in PPT: The directions to insert a poll audience question in PowerPoint are given in the YouTube video below.

For PCs

For Macs

Start small with only multiple-choice questions, and you can explore other types of questions later.  In the above video, inserting an already-made activity is shown.  The directions on how to make such an already-made activity at the Poll Everywhere website are given here.

Visual Settings: When you make an activity, such as a multiple-choice question, you need to choose “Visual settings” in the right-hand menu of the activity.  These are the choices I use to keep it simple.

I only allow website response.  Make sure the activity is unlocked.

I do not place any audience restrictions, and the participants can be identified by their chosen arbitrary screen name.

I allow a student to respond only one time and let them choose only one option in multiple-choice questions.  The second option will need to be “as many times as they like” if multiple answers need to be picked.

For more information about activity settings, go here.

Using it in Class: When you wish to ask students a question, you present the PPT in presentation mode.  Give the polling link to the students so that they can respond (it will show up on the top part of the slide – see figure below).  They can use their mobile devices to respond, and a good practice is to count down 5-4-3-2-1 when you think enough time has been given to respond. You can now click on “Responses” when you hover over the slide with your mouse.  If you want the students to retake a poll, click on More > Clear the responses. You can show the correct answer by clicking “Correctness.”

Other Resources:  The video below was highly beneficial to me in getting an overall view of Poll Everywhere.  It is an hour-long video, but you will learn faster.

There is an advanced video as well if you so want to venture.

And do not hesitate to send them an email about an issue or a question.  They were most helpful.


Balancing the social mobility index and reputation rankings

Autar Kaw
December 19, 2022

Social mobility is becoming an ever more popular criterion for evaluating university education.  However, we all know that the reputation of a university matters as well because it attracts high-caliber students, faculty, and staff, and hence, presumably, quality of education and opportunities for high-impact practices, such as research experiences, internships, cooperative education, and capstone courses.

I wanted to see how well my university – the University of South Florida (USF) – balances the two issues of social mobility and reputation.  USF is a Carnegie R1 institution, meaning that it is categorized as a university with “very high research activity.”  There are 146 R1 universities, of which 106 are public (strictly, there are 107, but one of them, the Graduate Center, CUNY, admits only graduate students).

This blog is limited to the Carnegie R1 public universities.  I gave equal weight to two rankings: the US News and World Report ranking and the Economic Mobility Index ranking.  The US News and World Report ranking mostly uses reputation in its calculations; other factors include graduation and retention rates, selectivity of the incoming class, alumni giving rate, etc., which are substantially influenced by the wealth and income of the student’s family.  The Economic Mobility Index ranking measures social mobility via out-of-pocket expenses, the salary boost due to a college degree, and the time after graduation needed to recoup the money spent on college.

The weighted ranking is not hard science, but it gives us a glimpse of where we stand.  We at the University of South Florida are No. 17 out of 106 public Carnegie Research 1 universities.  Seven of the top 10 universities belong to the University of California system, while Florida is not far behind, holding spots 11, 13, 17, 22, and 25 in this combined ranking system.
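The equal-weight combination described above can be sketched in a few lines of code.  This is only an illustration of the method, not a reproduction of the actual result: the university names and rank positions below are made-up placeholders, not the real US News or Economic Mobility Index values.

```python
# Sketch of an equal-weight combined ranking from two rank lists.
# The ranks below are illustrative placeholders only.
usnews_rank = {"Univ A": 5, "Univ B": 12, "Univ C": 3}
mobility_rank = {"Univ A": 40, "Univ B": 8, "Univ C": 30}

# Equal weight: average the two rank positions for each university.
combined_score = {
    u: 0.5 * usnews_rank[u] + 0.5 * mobility_rank[u]
    for u in usnews_rank
}

# A lower average rank is better, so sort ascending to get
# the combined ranking order.
combined_ranking = sorted(combined_score, key=combined_score.get)
```

Here “Univ B” ends up first (average rank 10) despite being behind “Univ C” on US News alone, which is exactly the balancing effect the equal weighting is meant to capture.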

University of California, Irvine 1
University of California, San Diego 2
University of Illinois Urbana-Champaign 3
University of California, Davis 4
University of California, Los Angeles 5
University of California, Santa Barbara 6
University of Texas at Austin 7
Rutgers University–New Brunswick 8
University of California, Berkeley 9
University of California, Riverside 10
University of Florida 11
University of Illinois Chicago 12
Florida State University 13
Stony Brook University 14
New Jersey Institute of Technology 15
University of North Carolina at Chapel Hill 16
University of South Florida 17
University of California, Santa Cruz 18
University at Buffalo 19
University of Washington 20
University of Connecticut 21
Florida International University 22
Arizona State University Campus Immersion 23
Binghamton University 24
University of Central Florida 25
Ohio State University 26
University of Georgia 27
University of Arizona 28
University of Michigan 29
University of Houston 30
Texas A&M University 31
Michigan State University 32
Georgia State University 33
University at Albany, SUNY 34
University of Colorado Denver 35
Wayne State University 36
University of Texas at Dallas 37
University of Maryland, Baltimore County 38
Temple University 39
George Mason University 40
North Carolina State University 41
University of Utah 42
University of Tennessee 43
University of Maryland, College Park 44
University of Minnesota 45
Indiana University Bloomington 46
University of Virginia 47
University of Alabama at Birmingham 48
University of Wisconsin–Madison 49
University of Iowa 50
University of Texas at El Paso 51
University of Massachusetts Amherst 52
Purdue University 53
University of Oregon 54
Georgia Institute of Technology 55
University of Texas at San Antonio 56
University of Pittsburgh 57
Washington State University 58
Virginia Tech 59
University of North Texas 60
Ohio University 61
University of Nevada, Las Vegas 62
University of Memphis 63
Utah State University 64
University of Hawaii at Manoa 65
Oklahoma State University–Stillwater 66
Pennsylvania State University 67
University of Texas at Arlington 68
University of Kansas 69
University of Kentucky 70
University of Oklahoma 71
University of Missouri 72
University of South Carolina 73
Clemson University 74
Oregon State University 75
University of Louisville 76
Virginia Commonwealth University 77
Old Dominion University 78
Texas Tech University 79
University of Mississippi 80
Iowa State University 81
Colorado School of Mines 82
University of Southern Mississippi 83
University of New Mexico 84
Louisiana State University 85
University of Delaware 86
University of Colorado Boulder 87
University of New Hampshire 88
West Virginia University 89
University of Cincinnati 90
University of Nevada, Reno 91
Auburn University 92
Mississippi State University 93
Colorado State University 94
University of Maine 95
University of Louisiana at Lafayette 96
University of Nebraska–Lincoln 97
University of Wisconsin–Milwaukee 98
Kansas State University 99
University of Arkansas 100
University of Alabama 101
Kent State University 102
University of Alabama in Huntsville 103
University of Montana 104
Montana State University 105
North Dakota State University 106

Autar Kaw is a professor of mechanical engineering at the University of South Florida. He is a recipient of the 2012 U.S. Professor of the Year Award (doctoral and research universities) from the Council for Advancement and Support of Education and the Carnegie Foundation for Advancement of Teaching. His primary scholarly interests are engineering education research, adaptive, blended, and flipped learning, open courseware development, composite materials mechanics, and higher education’s state and future. He has written more than 150 refereed technical papers, and his opinion editorials have appeared in the Tampa Bay Times, the Tampa Tribune, and the Chronicle Vitae.