About website data

The Quality Indicators for the Learning and Teaching (QILT) website is divided into two distinct areas:

  1. Institutions – This section allows users to compare overall results for up to six institutions.
  2. Study Area – This section allows users to select a study area, from a list of 21, and then compare results for that study area for up to six institutions.

Information provided on the QILT website is calculated at the study area level. Students doing a combined or double degree are generally counted more than once if their major courses are in different study areas.

The 21 different study areas used on the website have been compiled based on the Australian Standard Classification of Education (ASCED), which groups higher education courses, specialisations and units of study with the same or similar vocational emphasis. Click here to see a full breakdown of the 21 study areas.

All data is updated annually, but results are based on surveys from more than one year to improve the reliability of the information.

Data Year

Student experience, undergraduate level only (Student Experience Survey)

Two years of data is pooled from students who were in the first or a later year of their course in 2015 and 2016. Learner engagement results exclude “External” and “Distance” student responses.

Graduate employment outcomes, undergraduate and postgraduate coursework levels (Australian Graduate Survey (2014–2015) and Graduate Outcomes Survey (2016))

Three years of data is pooled from graduates who completed their courses in 2013–2015 and were surveyed in 2014–2016.

Graduate satisfaction, undergraduate and postgraduate coursework levels (Course Experience Questionnaire)

Two years of data is pooled from graduates who completed their courses in 2014–2015 and were surveyed in 2015–2016.

All of the data being reported on QILT comes from surveys of students and graduates. Because not everyone participates in the surveys, the results are not as accurate as if a response had been obtained from every student or graduate. Generally speaking, the more people who respond to a survey, the more reliable the findings.

The QILT surveys are professionally devised and implemented. The number of students surveyed depends on the number of students within an institution and study area. Generally, a larger student population requires a larger sample to ensure the sample is representative, but the results are still subject to sampling error. This is especially true if your search picks up data from only a small number of survey responses; an institution may have a small number of students enrolled in a course, which can result in a small number of survey responses.

Combining data over multiple years generates consistent and more accurate results, and maximises the number of publishable results.

A filter is provided on the website to help you refine your search results. When a state is selected, the website displays all institutions that have a campus in that state; the filter is not intended to match the delivery locations of each study area offered by an institution.

To help you understand the reliability of the data in your searches, the site marks a “confidence interval” on each result it returns. It appears as a black line shaped like an upper-case I, known as a “whisker”. The top of the whisker marks how tall the bar might really be, and the bottom marks how short the bar might really be, if every single student (or employer) had been asked about this particular item.

A confidence interval graph has the following key features:

Snapshot of a graph showing confidence bars, labelled ‘Upper limit of 90% confidence interval’, ‘average response’ and ‘lower limit of 90% confidence interval’.

The smaller the whisker on the bars in your search, the more confident you can be that the information is reliable.

Knowing this is important when you are comparing two bars, especially where the results are close. Because both bars are subject to error, as marked by their confidence intervals, use the whiskers to help you decide how big the gap between the two bars really is.

Figure 1: Graph showing two columns with confidence interval indicators that overlap.

In Figure 1, the 90% confidence intervals overlap, so it cannot be concluded with a 90% level of confidence that there is a difference between the two institutions.

Figure 2: Graph showing two columns with confidence interval indicators that do not overlap.

In Figure 2, the 90% confidence intervals do not overlap, so it can be concluded with a 90% level of confidence that the indicator for the first institution is lower than that for the second institution.
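The overlap rule above can be sketched in a few lines of code. The figures below are invented for illustration, and the formula is the standard normal approximation for a survey proportion; QILT's actual methodology may differ (for example, by applying survey weights or more refined variance estimates).

```python
import math

def ci_90(positives, n):
    """90% confidence interval for a survey proportion (normal approximation)."""
    p = positives / n
    z = 1.645  # z-score for a two-sided 90% interval
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

# Hypothetical figures: 80% positive ratings from 200 responses at one
# institution versus 70% positive from 150 responses at another.
a_low, a_high = ci_90(160, 200)
b_low, b_high = ci_90(105, 150)

# The whiskers overlap if neither interval sits entirely above the other.
overlap = a_low <= b_high and b_low <= a_high
print(f"Institution A: {a_low:.3f} to {a_high:.3f}")
print(f"Institution B: {b_low:.3f} to {b_high:.3f}")
print("Intervals overlap:", overlap)
```

With these invented numbers the intervals overlap, so (as in Figure 1) no difference between the two institutions could be claimed at the 90% level, even though the bar heights differ by ten percentage points.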

National averages have been applied across all indicators on the website, and are available in both chart and table format. The national average is calculated using data from all survey responses pooled at the national level, in the same way that overall institution results are calculated using data pooled at the institution level.

For most indicators, the national average is calculated as the arithmetic mean of all survey results. One indicator, salaries of graduates employed full-time, instead reports the national result as the median value of all survey responses, representing the ‘middle ground’ of graduate salaries. The median is used because graduate salaries are unevenly spread: a small number of very high salaries would pull the mean upward, whereas the median is unaffected by them.
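The effect of that uneven spread can be seen in a small worked example (the salaries are invented for illustration):

```python
# Hypothetical full-time salaries: most cluster around $60,000, one is far higher.
salaries = [55000, 58000, 60000, 62000, 64000, 66000, 250000]

mean = sum(salaries) / len(salaries)
ordered = sorted(salaries)
median = ordered[len(ordered) // 2]  # odd-length list: the middle value

print(f"Mean:   ${mean:,.0f}")    # pulled upward by the single high salary
print(f"Median: ${median:,.0f}")  # the 'middle ground' value
```

Here the mean is roughly $88,000, well above what a typical graduate in the list earns, while the median of $62,000 reflects the middle of the distribution.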

For further information on the calculation of indicators, see the following pages:

Student experience

Graduate employment

Graduate satisfaction

The QILT website displays the national average for a selected study area, as well as for overall institutional results. On graphs which include only one study area, the national average of the study area is represented by a thin grey line across the institutional results, as shown in the chart below.

Chart showing the Overall quality of education experience, with both National Average and an Institution that is higher than the national average

On tables, the national average is represented in a separate column.

Table available when comparing institutions, showing that the national average is displayed alongside the institutional figures. The exact figures in this picture are not relevant.

If the number of survey participants is less than 25, then survey results won’t be reported, and the website will display the message ‘Numbers of survey responses too low’ or ‘L/N’. This is because results from such small samples could be misleading.

Where there are at least 25 survey responses and the raw results for an indicator are zero per cent, the website will display the result as ‘0’. For example, if 30 responses were received from graduates and none of these people were engaged in further full-time study, the result for this indicator would be ‘0%’.

If data is not available, the website will display the message ‘Survey data does not exist’ or ‘N/A’. This might happen if you search for a study area at a particular institution, but that institution does not teach in that study area, or teaches the study area at either undergraduate or postgraduate coursework level but not both.
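The reporting rules described above amount to a simple decision procedure. The sketch below follows the thresholds and messages stated in the text; the function name and the exact order of checks are illustrative, as the site's real implementation is not published.

```python
def display_result(responses, percent_positive):
    """Illustrative version of the QILT reporting rules described above.

    responses: number of survey responses, or None if no survey data exists.
    percent_positive: raw indicator value (0-100), ignored when suppressed.
    """
    if responses is None:
        return "N/A"  # survey data does not exist for this search
    if responses < 25:
        return "L/N"  # too few responses to report reliably
    return f"{percent_positive:.0f}%"  # includes genuine zero results

print(display_result(None, 0))  # study area not taught at this institution
print(display_result(10, 80))   # only 10 responses: suppressed
print(display_result(30, 0))    # 30 responses, none in further study: shown as 0%
```

Note that a displayed ‘0%’ is a real result from an adequate sample, which is why it is treated differently from the suppressed and missing cases.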

In some instances, the website will display data for a particular institution and study area for either graduate employment/graduate satisfaction or student experience, but not the other. This is because the former Australian Graduate Survey calculated results based on a graduate’s self-reported major area of study, while the Student Experience Survey and Graduate Outcomes Survey generally use the program study area recorded by the institution. A graduate’s self-reported major area of study may sometimes differ from the program study area recorded by the institution.

Student experience

This data is sourced from the Student Experience Survey (SES).

The SES is completed by current undergraduate level students. The survey is designed to collect information that will help both higher education institutions and the government improve teaching and learning outcomes, and reports on multiple facets of the student experience.

The six indicators from the SES displayed on this website each show the percentage of students who rated their higher education experience positively.

The indicators relate to:

  1. Overall quality of educational experience
  2. Teaching quality
  3. Learner engagement
  4. Learning resources
  5. Student support
  6. Skills development

Care should be taken when interpreting results from the SES provided on this website. The results are estimates only, because they are based on a survey which was not completed by all students. The accuracy of the figures varies depending on the number of students who completed the survey. Confidence intervals are displayed to provide a measure of accuracy of the estimates.

SES data on this website includes responses from both domestic and international on-shore undergraduate level students.

Graduate satisfaction

This data is sourced from the Course Experience Questionnaire (CEQ) for undergraduate and postgraduate coursework level graduates.

The CEQ is completed by graduates of Australian higher education institutions, approximately four months after completion of their courses. The survey provides information about the quality of education provided at Australian institutions by asking graduates to what extent they agree with a series of statements about their study experiences.

Three indicators from the CEQ are displayed on this website. The indicators relate to:

  1. Overall satisfaction
  2. Good teaching
  3. Generic skills

CEQ data on this website includes responses from both international and Australian resident graduates.

Care should be taken when interpreting results from the CEQ provided on this website. The results are estimates only, because they are based on a survey which was not completed by all graduates. The accuracy of the figures varies depending on the number of graduates who completed the survey. Confidence intervals are displayed to provide a measure of accuracy of the estimates.

Graduate employment

This data is sourced from the Graduate Destinations Survey (GDS) for 2014–2015 and the Graduate Outcomes Survey (GOS) for 2016. These surveys were completed by graduates of Australian universities four months after completion of their courses, and provide information about the labour market outcomes and further study activities of university graduates.

Four indicators of graduate outcomes are displayed on this website. The indicators relate to:

  1. Graduates in full-time employment
  2. Graduates in overall employment
  3. Graduates in full-time study
  4. Median salary of graduates in full-time employment

Graduate outcomes data on this website only includes responses from Australian resident graduates. Data is displayed separately for graduates from undergraduate and postgraduate coursework level degrees.

Care should be taken when interpreting results from the GDS and GOS provided on this website. The results are estimates only, because they are based on a survey which was not completed by all graduates. The accuracy of the figures varies depending on the number of graduates who completed the survey. Confidence intervals are displayed to provide a measure of accuracy of the estimates.

Employer satisfaction

Information on the QILT website about employer satisfaction is sourced from the Employer Satisfaction Survey (ESS).

The ESS is undertaken by asking employed graduates who participated in the Graduate Outcomes Survey (GOS) four months after graduation to provide the contact details of their supervisor for follow-up. The survey provides information about the quality of education provided at Australian institutions by asking supervisors to provide feedback about the generic skills, technical skills and work readiness of the graduate employed in their workplace.

Over 3,000 workplace supervisors of recent graduates of all levels participated in the ESS in 2016, making it the largest ever Australian survey of employers of higher education graduates. While there are too few responses to publish institution-by-study-area results on the QILT website, the ESS is large enough to provide comparisons by broad field of education, type of institution, course characteristics, employment characteristics, occupation and demographic group.

Six indicators of employer satisfaction are displayed on this website. The indicators relate to:

  1. Overall satisfaction
  2. Foundation skills
  3. Adaptive skills
  4. Collaborative skills
  5. Technical skills
  6. Employability skills

Care should be taken when interpreting results from the 2016 ESS presented on this website. The results are estimates only, because they are based on a survey which was not completed by all supervisors.