Deficiencies in the Quality Indicator Framework Relating to General Universities
(A) Criterion 1
Metric 1.1.2. The SOP issued by NAAC on this metric is not accurately framed. It forbids the inclusion of content changes, whereas the template for this metric asks a university to state the percentage of ‘content changed’. I recommend that, over 5 years, if a minimum of 20% of the courses in a program have each undergone at least 20% content change, or 20% new courses have been added, or a mix of both in near-equal proportion, the program should be placed in the ‘syllabus revised’ category.
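The proposed rule can be sketched as a simple decision function. This is only an illustration of the recommendation above, not NAAC's formula; the function name, its inputs, and the pooling of revised and new courses against a single 20% threshold are assumptions made here for clarity.

```python
# Illustrative sketch (not an official NAAC formula) of the proposed
# 'syllabus revised' test over a 5-year window: a program qualifies if
# at least 20% of its courses each changed at least 20% of content, or
# 20% new courses were added, or a near-equal mix of both reaches that
# threshold. Revised and new courses are pooled here, since a mix "in
# near equal proportion" would also qualify under the proposal.

def syllabus_revised(total_courses: int,
                     courses_with_20pct_change: int,
                     new_courses: int) -> bool:
    qualifying = courses_with_20pct_change + new_courses
    return qualifying >= 0.2 * total_courses

print(syllabus_revised(40, 5, 4))  # 9 of 40 courses (22.5%) -> True
print(syllabus_revised(40, 3, 2))  # 5 of 40 courses (12.5%) -> False
```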
Metric 1.1.3. This metric should be removed. There is hardly any course that does not have some focus on employability, entrepreneurship or skill development. The SOP merely tries to justify the question by asking which courses directly meet the stated objective. What if a course contributes significantly but only indirectly? Is that not enough? This criterion should therefore be dropped.
Metric 1.2.1. This metric does not provide parity among universities. Say a university has 50% or fewer health sciences programs alongside engineering, management, applied sciences and other liberal education programs: it is at a major disadvantage in this metric, since health sciences regulatory councils rarely change their courses as often as NAAC expects, and the university has no mandate to add new courses in the health sciences. A university with purely engineering, management, sciences and liberal programs stands to gain, since it can undertake to review, change and add new courses on its own. NAAC should tweak the metric to create parity, or prevail upon the health sciences regulatory councils to add courses on a regular basis. The weightage of marks also needs review: as against 30 marks, this metric deserves at best 15, since 1.1.2 would also include the details of this metric.
Metric 1.2.2. NAAC should remove the CBCS option. A worthwhile CBCS is at best possible in State/Central universities, not in private institutes, which can implement only a poor hybrid of it. Substantial uniformity in curriculum, scheme, syllabus, credits and electives is a precursor to CBCS, to allow horizontal and vertical mobility of students across programs and institutes. As regards electives, the problem of health sciences regulatory councils not introducing electives on a regular basis persists, as noted under metric 1.2.1 above.
Metric 1.3.2. NAAC should remove the stipulation of 30 contact hours, or amend the definition of value-added courses at page 148 of the Manual. There are a number of skills, particularly life and soft skills, that can be sufficiently covered in fewer than 30 contact hours. Some value-added courses may well be imparted in online mode as effectively as through contact hours.
Metric 1.4.1 and Metric 1.4.2. Both these metrics are similar and should be merged into one. Institutes should also be asked to upload their feedback policies for each stakeholder, so that gaps can be identified. When merged, the weightage should also be reduced.
(B) Criterion 2
Metric 2.2.3. This metric, regarding differently-abled students, should be scrapped by NAAC, since admissions in this category are not within universities’ control. Universities can at best create facilities, advertise on media and social platforms, and grant scholarships, but cannot compel students to join. Moreover, if every institute starts pulling in such students, will there be enough of them to take admission in every institute?
Metric 2.3.2. This metric regarding ICT is hugely important. It entails an institute subscribing to an LMS, ERP, library management system and various research-related databases. It requires the management to spend a substantial amount of money creating such facilities and training personnel to handle them. At present it carries only 5 marks; ideally it should carry 20.
Metric 2.5.2 and 2.5.3. These metrics expect institutes to submit complaints and grievances by students about evaluation, and the resulting changes in marks. One fails to understand the rationale for such metrics. Does it mean more cases, more marks, and fewer cases, fewer marks? If evaluation is fair and no complaints, or very few, are received, does that amount to poor performance by an institute? And if fewer cases mean more marks, why would any university disclose the true numbers? In my view these metrics should be dropped.
(C) Criterion 3
Metric 3.2.1. Most private institutes will have to depend on the sources of grants mentioned in this metric, since government/research funding agencies have a bias towards Central and State universities and institutes of national importance. The 3 marks given for this metric should be increased to at least 10.
Metric 3.2.3. Either metrics 3.2.1 and 3.2.2 should be scrapped, or this metric should be done away with; there is near repetition of data between them. NAAC should tweak and reconstruct these metrics.
There are a number of metrics in this criterion and others which carry one mark. There is no point allocating one mark to any metric, since institutes will in any case score the full mark with a little input, as evaluation is not done in fractions. Each metric should carry at least 2 marks.
(D) Criterion 4
Metric 4.2.7. Development of e-content, which includes video lectures and lab modules for online delivery as well as for MOOC platforms, is a major initiative; the weightage should therefore be increased from 3 marks to 10.
(E) Criterion 5
Metric 5.2.1 and 5.2.2. Institutes can score optimum marks in either of the two, but not in both, since scoring top marks in both is not possible. For example, if an institute has 100 percent placements, it would score the full 15 marks in 5.2.1 but no marks in 5.2.2, and vice versa. Having both stands wisdom on its head. One metric combining the two would solve the problem; NAAC should work on it. When merged, the marks could be reduced to 20 at best.
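The conflict between the two metrics comes down to simple arithmetic: both draw on the same outgoing cohort, so their percentages cannot both approach 100. The sketch below illustrates this and a possible merged metric; the function names, the cohort figures and the combined formula are assumptions made here for illustration, not NAAC's methodology.

```python
# Illustrative sketch: metrics 5.2.1 (placement) and 5.2.2 (progression
# to higher education) are computed over the same outgoing cohort, so a
# student counted in one cannot be counted in the other.

def placement_pct(placed: int, cohort: int) -> float:
    """Percentage of the outgoing cohort placed (metric 5.2.1)."""
    return 100.0 * placed / cohort

def progression_pct(higher_studies: int, cohort: int) -> float:
    """Percentage progressing to higher education (metric 5.2.2)."""
    return 100.0 * higher_studies / cohort

def combined_outcome_pct(placed: int, higher_studies: int,
                         cohort: int) -> float:
    """A possible merged metric: share of the cohort with either outcome."""
    return 100.0 * (placed + higher_studies) / cohort

# A cohort of 200 with 100% placement leaves 0% for higher studies.
print(placement_pct(200, 200))           # 100.0
print(progression_pct(0, 200))           # 0.0

# A mixed outcome: the merged metric rewards both paths together.
print(combined_outcome_pct(120, 60, 200))  # 90.0
```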
(F) Criterion 6
Metric 6.2.1, 6.3.5 and 6.4.1. Perspective planning, performance appraisal and financial audits are extremely essential quality metrics. Two marks for each of these is a woefully low weightage; NAAC should enhance the marks in each case.
(G) Criterion 7
Criterion 7 is replete with one-mark questions. This requires a definite review by NAAC; it gives the wrong impression that the NAAC QIF team had run out of marks at the end.
Loopholes in the system
- In most cases, particularly under the ‘Curricular Aspects’ and ‘Teaching-Learning’ criteria, the correctness of an institute’s data is verified by the DVV team relying heavily on the ‘Minutes of Meetings’ of the BoS and AC. This is susceptible to institutes writing or re-writing back-dated ‘Minutes’ for the past 5 years.
- The ‘Students Satisfaction Survey’ is open to intervention by institutes.
- Proofs of events covering social extension activities of the past 5 years are not hard to reconstruct; all of these can easily be managed or stage-managed.
- Some institutes have already been picked up by NAAC, and there would be more in that category, which do not include all their programs, especially the flagging or unapproved programs, in AISHE, NIRF and the Extended Profile. This exclusion reduces reported student strength but flatters their FSR.
- The DVV team is unable to ascertain on the ground either actual faculty strength or minimum qualifications for promotion to various designations as per UGC criteria; the data in AISHE, the Extended Profile or NIRF can be cleverly matched.
- NAAC does not indicate what level of data entitles an institute to what percentage of marks. For example, it asks for the faculty-student ratio but gives no indication of what ratio would earn 100 percent marks. To whose wisdom is this decision left?
By Prof. JR Sharma, Consultant to Institutes of Eminence on Global Best Practices in Higher Education