Monitoring service delivery for universal health coverage: the Service Availability and Readiness Assessment
Kathryn O’Neill a, Marina Takane a, Ashley Sheffel a, Carla Abou-Zahr b & Ties Boerma a
a. Department of Health Statistics and Information Systems, World Health Organization, avenue Appia 20, 1211 Geneva 27, Switzerland.
b. Geneva, Switzerland.
Correspondence to Kathryn O’Neill (e-mail: firstname.lastname@example.org)
(Submitted: 21 December 2012 – Revised version received: 20 June 2013 – Accepted: 24 June 2013 – Published online: 30 September 2013.)
Bulletin of the World Health Organization 2013;91:923-931. doi: http://dx.doi.org/10.2471/BLT.12.116798
The goal of universal health coverage is to provide everyone with health-care services of good quality that meet their needs without the risk of financial hardship linked to paying for them.1 Universal access to services is a necessary precondition to achieving universal health coverage.2 However, regular monitoring of access to services and of service delivery is often a weak component of country and global efforts to track progress and performance. Yet health policy-makers, planners and managers need sound evidence on which to base decisions about resource allocation and for programme monitoring and evaluation. Annual reviews of health sector progress and performance at national and subnational levels, based on a broad set of indicators that cover all areas of performance, should include up-to-date, accurate information on service delivery. A fundamental component of the evidence base is the availability of health facilities and their readiness to deliver services. Some useful data, such as stockouts or the functionality of equipment, can be gathered through routine health facility reporting systems. However, information about the availability of health-care infrastructure, skilled health workers and resources for disease prevention, diagnosis and treatment is often incomplete or of poor quality, in both public and private facilities.3
Access is a broad term that encompasses varied dimensions, including availability, affordability and acceptability.4,5 The availability dimension relates to both the physical presence of facilities and the distribution of health-care infrastructure, health workforce and services. Several programmes have used tools to generate information about service availability and readiness; however, each of these tools focuses on only one service area.6–11 This fragmented approach risks information gaps and duplication of effort and limits the ability to monitor trends in a variety of key indicators. A comprehensive system is needed to assess the availability and readiness of essential services in a rapid, regular and harmonized way. The Service Availability and Readiness Assessment (SARA) provides a comprehensive approach for monitoring the supply of health services at the facility level by using a standard set of tracer indicators and summary measures to determine the extent to which minimum criteria for the provision of services are met.7–12 This article describes the SARA and the results of its implementation in six countries across three continents.
The starting point of the SARA is the master facility list.13 This is the source for the compilation of indicators about service availability and provides the sampling frame for the assessment of service readiness. The master list comprises all public, private non-profit, private for-profit and faith-based health facilities, including hospitals, health centres, dispensaries and specialized clinics. In addition to information relating to facility identification or signature domain – name, address and geo-location of the facility, etc.14 – the master list should include information on the beds, staffing and services available in each facility. For a country in which a master health facility list does not exist or is incomplete, a preliminary list should be created on the basis of the country’s health management information system, which contains the list of facilities reporting routine health statistics.
The master list also provides the sampling frame for the readiness survey. The overall sample size will vary from country to country, depending on available resources, precision requirements and the need for domain estimates.15 In general, a sample size that provides a margin of error of less than 10% is recommended. Two sampling methods have been used in the country application of the SARA. A nationally representative random sample of at least 150 health facilities – stratified by facility type and managing authority and weighted according to facility distribution among districts – can be used to obtain national estimates. If subnational estimates are desired, a district-level assessment with a census of all facilities in selected districts can generate results that can be used for local management.
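The 10% margin-of-error guideline can be illustrated with a simple random sampling calculation. The sketch below is illustrative only: the master-list size is hypothetical, it assumes simple random sampling of a worst-case (50%) proportion, and the SARA's stratification and weighting would alter the actual precision.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from
    a simple random sample of n facilities drawn from a master list of
    N facilities, with a finite population correction. Stratified,
    weighted SARA designs would modify this figure."""
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

# A sample of 150 facilities from a hypothetical master list of 1200
# facilities gives a margin of error of roughly 7.5 percentage points,
# within the recommended 10%.
print(round(margin_of_error(150, 1200), 3))
```

The finite population correction matters here: for small master lists, a census of selected districts (the second sampling method described above) may be more practical than a national sample.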
Data collection is performed by several survey teams led by either national ministries of health or national institutes. Data are usually collected by teams of two surveyors who use both paper forms and CSPro (US Census Bureau, Washington, United States of America), an electronic census data processing system. The in-person facility visits take 2 to 4 hours on average and involve interviews with key informants and verification of the reported availability and functioning of essential equipment and supplies, along with observation of the availability of medicines and commodities on the day of the visit. This approach minimizes the reliance on recall and enhances data quality. The data entered are checked and validated and the results are automatically produced using Excel (Microsoft, Redmond, United States of America). Results and summary reports are disseminated to all national stakeholders. To promote transparency of results, data and reports should be posted on national ministry of health web sites or in other publicly available information repositories, with appropriate archiving of data and metadata. The readiness survey should be repeated annually.
Indicators of service availability
The assessment of service availability comprises both general and specific components. General service availability is concerned with the physical presence of items required for the delivery of services and encompasses health infrastructure, core health personnel and aspects of service utilization. Indicators include number and distribution of health facilities and core medical professionals per 10 000 population, to assess levels and distribution within the country.
Service-specific availability focuses on whether a specific type of health intervention is offered. Interventions may be defined by target population (e.g. pregnant women, infants or children) and by specific programme. Indicators include the proportion of facilities offering a defined service and the density and distribution of the facilities offering the service per 10 000 population.
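Both the general and the service-specific density indicators reduce to a simple rate per 10 000 population. A minimal sketch, with hypothetical counts and population:

```python
def density_per_10000(count, population):
    """Density indicator used for facilities, core health workers and
    service-offering facilities alike: items per 10 000 population."""
    return count / population * 10_000

# Hypothetical figures: 1200 facilities serving 9.5 million people.
print(round(density_per_10000(1200, 9_500_000), 2))  # 1.26
```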
Indicators of service readiness
The assessment of service readiness also consists of both general and service-specific components. General service readiness reflects the overall capacity of health facilities to provide basic services at minimum standards. Four domains of general service readiness are included in the SARA and indicators are tracked through tracer items that were selected on the basis of consultations with service delivery experts and experiences with different facility assessments over the past decade (Table 1).9,16,17 Individual tracer indicator scores may be summarized as composite measures, namely the proportion of facilities with all tracer items available on the day of the visit and the mean item availability score, the latter being more sensitive to change over time. For example, the essential medicines indicator comprises 14 tracer items; its composite measures are the mean proportion of the 14 items available per facility and the percentage of facilities with all 14 items available on the day of the survey.
Table 1. Tracer items for general service readiness employed in the Service Availability and Readiness Assessment
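The two composite measures just described can be sketched as follows. The facility-by-item matrix is hypothetical (four facilities, five tracer items), not data from any SARA survey:

```python
def readiness_summary(matrix):
    """Two composite measures used in the SARA:
    - mean item availability: the average, across facilities, of the
      share of tracer items available in each facility;
    - the proportion of facilities with ALL tracer items available.
    `matrix` rows are facilities; columns are 1/0 tracer observations
    (1 = item available on the day of the visit)."""
    n = len(matrix)
    mean_score = sum(sum(row) / len(row) for row in matrix) / n
    all_items = sum(1 for row in matrix if all(row)) / n
    return mean_score, all_items

# Illustrative 1/0 observations (hypothetical values).
obs = [
    [1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
]
mean_score, all_items = readiness_summary(obs)
print(f"mean item availability: {mean_score:.0%}")  # 80%
print(f"all items available: {all_items:.0%}")  # 25%
```

The example shows why the mean score is more sensitive to change over time: a facility gaining one missing item moves the mean immediately, whereas the all-items measure only moves when a facility crosses from "any item missing" to "none missing".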
Service-specific readiness reflects the capacity of health facilities to provide interventions in 20 key programme areas: family planning, antenatal care, basic and comprehensive delivery care, child health, routine child immunization, adolescent health, malaria, tuberculosis, human immunodeficiency virus (HIV) infection testing and counselling, HIV care and support, antiretroviral therapy, prevention of mother-to-child transmission (PMTCT) of HIV, sexually transmitted diseases, diabetes, cardiovascular disease, chronic respiratory disease, basic and comprehensive surgery, and blood transfusion. The essential inputs needed to deliver service-specific interventions are described across four domains: (i) trained staff and relevant and up-to-date guidelines; (ii) functioning equipment; (iii) diagnostic capacities; and (iv) essential medicines and commodities. Within each domain, a mean score is calculated across the tracer items and an overall composite readiness index is calculated for each programme area based on the mean availability of tracer items across all domains. For simplicity, all tracer items are given equal weight. An example of a service-specific readiness indicator is shown in Table 2.
Table 2. Example of a service-specific readiness indicator for the Service Availability and Readiness Assessment
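The index construction just described can be sketched as follows. The domain labels and tracer values are hypothetical. Note one implementation choice left open by the description: when domains contain unequal numbers of items, taking the mean of the four domain means (as below) differs from pooling all items with equal weight, so the SARA reference materials should be consulted for the intended convention.

```python
def readiness_index(domains):
    """Service-specific readiness index: mean availability of tracer
    items is computed within each domain, and the overall index here
    is the unweighted mean of the domain scores. `domains` maps a
    domain name to a list of 1/0 tracer observations."""
    scores = {d: sum(v) / len(v) for d, v in domains.items()}
    return scores, sum(scores.values()) / len(scores)

# Hypothetical observations for one facility's antenatal care service.
obs = {
    "staff and guidelines": [1, 0],
    "equipment": [1, 1, 1],
    "diagnostic capacity": [1, 0, 0, 1],
    "medicines and commodities": [1, 1, 0],
}
scores, overall = readiness_index(obs)
print(round(overall, 3))  # 0.667
```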
In Burkina Faso (2008), Cambodia (2008), Haiti (2008), the United Republic of Tanzania (2009–2010) and Zambia (2008), facility assessments were conducted – on the basis of facility censuses in selected districts – using the SARA as part of an evaluation by the Global Fund.18 In 2010, Zambia repeated the SARA through a census of facilities in 17 districts.19 In Sierra Leone, the SARA was implemented in 2011 in a random sample of health facilities drawn from the national master list, with results weighted according to the distribution of health facilities.20 It was repeated in 2012 to enable annual progress tracking; the survey was performed before the annual health sector review so that the results could be analysed as part of the health sector performance assessment. All facility assessments from the six countries included private facilities. The analyses presented here focus on the items common across the assessments. Commonly available statistical software packages were used for analysis.21
Table 3 summarizes select aspects of service availability. Health facility density across the countries ranged from 0.8 facilities per 10 000 population (in Haiti) to 3.6 facilities per 10 000 population (in Cambodia). In the assessments in sub-Saharan Africa, health facility density ranged between 1.2 and 2.2 facilities per 10 000 population. Private for-profit health facilities were common in Cambodia (39% of all facilities) and Zambia (35% in the 2008 survey, which included the capital, Lusaka). By contrast, the private sector accounted for less than 10% of facilities in Burkina Faso.
Table 3. Service availability in selected facilities in six countries, according to the Service Availability and Readiness Assessment, 2008–2010
The density of health workers (i.e. physicians, nurses, midwives and clinical officers) ranged from 3.6 workers per 10 000 population (in Burkina Faso) to 22.4 workers per 10 000 population (in Cambodia). There were large differences between districts, with densities being highest in urban districts. Nurses were present on the day of the visit in approximately 80% of facilities in most assessments, but the proportion was much lower in the United Republic of Tanzania and Zambia (2008).
The proportion of facilities offering a specific service varied considerably across countries. Child immunization services were offered by at least two thirds of the facilities, most of which were publicly funded, in all country assessments. Family planning services were also commonly offered except in Cambodia, where less than half of the facilities offered such services. The proportion of facilities offering childbirth and delivery services varied from 42% in Zambia to 91% in Sierra Leone in 2008. These variations are to some extent driven by differences in organizational structures for the delivery of childbirth services.
General service readiness
Table 4 shows results for the four indexes of general service readiness, based on items common to all assessments. The average item availability for amenities and basic equipment ranged from 64% to 81%, with scores of > 80% on individual equipment items. The average scores for standard precautions for infection control were > 70% in all countries except Haiti. The highest average score – 87% – was noted in Zambia (2010). Laboratory diagnostic capacity was very low (< 30%) in Burkina Faso, Cambodia and Sierra Leone. The availability of 13 essential medicines – diazepam was added later to the SARA instrument – was low in all countries, ranging from 27% in Burkina Faso, Haiti and Sierra Leone to 53% in Zambia (2010).
Table 4. Mean scores for service readiness in selected facilities in six countries, according to the Service Availability and Readiness Assessment, 2008–2010
Two examples illustrate further programme-relevant aspects. In Sierra Leone, private facilities scored higher than public facilities in all four domains of general service readiness, with overall scores of 62% and 45%, respectively. The starkest differences were observed in the domains of laboratory diagnostic capacity (30% versus 8%) and essential medicines (61% versus 31%). In the 2010 Zambia SARA, the availability of essential medicines on the day of the visit was 49% overall but ranged from 32% to 60% across districts. In general, overall availability was higher among the four assessed urban districts (range: 53–60%) and lower in the nine assessed rural districts (range: 32–46%); availability ranged from 39% to 59% among the four periurban districts evaluated. Although the availability of antibiotics to treat infectious diseases was relatively high (71% on average), the availability of medicines to treat non-communicable diseases was consistently low (37% on average).
The proportion of health facilities in Sierra Leone with tracer items for child immunization (among facilities offering immunization) is shown for 2011 and 2012 in Fig. 1. The proportion of facilities with pentavalent vaccines (diphtheria-tetanus-pertussis [DPT], Haemophilus influenzae type b [Hib] and hepatitis B [HepB]) in stock declined from 81% to 70% between 2011 and 2012 (P = 0.049, Fisher’s exact test). There were similar declines for other vaccines.
Fig. 1. Percentage of facilities in Sierra Leone equipped with tracer items for child immunization services, among facilities providing such services according to the Service Availability and Readiness Assessment 2011 and 2012 (n2011 = 190, n2012 = 90)
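The comparison of the two survey rounds can be reproduced in outline with Fisher's exact test, implemented here with the standard library only. The cell counts are reconstructed from the published percentages and sample sizes (81% of 190 facilities in 2011, 70% of 90 in 2012), so they are approximate:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].
    The p-value sums the probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def p_table(k):
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Reconstructed counts: [with vaccine, without] for 2011 and 2012.
p = fisher_exact_two_sided(154, 36, 63, 27)
print(round(p, 3))
```

Because the counts are back-calculated from rounded percentages, the p-value may differ slightly from the published figure.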
In Zambia, about 64% of facilities in the 17 districts surveyed offered childbirth and delivery services in 2010. Fig. 2 shows the mean readiness score, by facility type, based on 14 tracer items. On average, health facilities had 9 of the 14 tracer items, for an overall readiness score of 61%; for hospitals, the score was 85%. Eighteen per cent of hospitals had all 14 tracer items, compared with 1% of primary care facilities. Only 38% of primary care facilities offering delivery services had a neonatal bag and mask compared with 77% of hospitals, and only 32% had injectable magnesium sulfate for the treatment of eclampsia, compared with 91% of hospitals. Across all facility types, the availability of staff who had been trained in the Integrated Management of Pregnancy and Childbirth in the preceding two years was generally low.
Fig. 2. Percentage of facilities in Zambia equipped with tracer items for basic obstetric care services, by district, among facilities providing such services (n = 362), according to the Service Availability and Readiness Assessment, 2010
In Burkina Faso, Cambodia and the United Republic of Tanzania, the SARA revealed that the proportion of health facilities offering malaria services was > 90% in the two African assessments and 62% in Cambodia. Among facilities offering malaria services, the majority had country-recommended anti-malarial drugs in stock and trained staff and treatment guidelines. However, diagnostic tests (rapid test or blood smear) were less commonly available, ranging from a low of 6% in Burkina Faso to 57% in Cambodia. Artemisinin combination therapy was available in 76% of facilities offering malaria services in the United Republic of Tanzania.
Tuberculosis treatment services were offered by less than half of the facilities in Burkina Faso and the United Republic of Tanzania, but by 52% of the facilities in Cambodia. Four drugs (isoniazid, rifampicin, ethambutol and pyrazinamide) were commonly available in Cambodia (84%) and the United Republic of Tanzania (74%) but not in Burkina Faso, where availability was much lower (39%). About one third of facilities offering tuberculosis services did not have trained staff or guidelines.
PMTCT services are relatively new and were offered by a small proportion of facilities in Burkina Faso, Cambodia and the United Republic of Tanzania. In the facilities offering these services during antenatal care in these three countries, training and guidelines were generally present but medicines (nevirapine or zidovudine) and diagnostic tests (rapid or other test) were not, bringing overall readiness scores below 25%. In Zambia, the proportion of facilities offering PMTCT services increased from 50% in 2008 to 66% in 2010. Readiness to provide PMTCT services also increased. The percentage of facilities with all tracer items for PMTCT services increased from 33% in 2008 to 56% in 2010, while mean readiness scores increased from 71% to 83% (Fig. 3). A marked increase in the availability of antiretroviral drugs was observed between the two surveys, indicating a significant scale-up in these services.
Fig. 3. Percentage of facilities – in eight Zambian districts combined – equipped with tracer items for prevention of mother-to-child transmission (PMTCT) services, among facilities providing such services (n2008 = 162, n2010 = 207), according to the Service Availability and Readiness Assessment, 2008 and 2010
As countries seek to scale up services and monitor progress towards the goal of universal health coverage, demand is likely to increase for regular and reliable data on health-care infrastructure, on the availability of skilled health workers and on the capacity of health facilities and staff to provide the full range of essential services required to deliver quality health care to all those who need it.
Use of the SARA has several potential advantages. It encourages the maintenance of a harmonized national service monitoring system with a standardized set of indicators that includes all key health services. It is likely to cost less than fragmented data collection and promotes country ownership and transparency. The SARA is most effective when planned and conducted annually, just before a country's planning cycle, so that it can inform health sector reviews. Results are disseminated to all key national stakeholders and analysed together with data from other sources, such as population surveys, quality-of-care surveys and routine facility reports, to provide a comprehensive analysis of health system progress and performance. Deficiencies and gaps need to be addressed as part of annual operational health plans and investment plans. As the results of the eight surveys show, the SARA generates objective and comprehensive information on the status of a country's health services that can be used to support operational programme planning and management and to monitor country progress towards improving access to health services as a necessary precondition to achieving universal health coverage.
Several issues concerning methodology – with potential variations across countries and over time – should be borne in mind. Where the master facility list is sufficiently complete and up to date, as in Kenya,22 this has generally been underpinned by strong multi-stakeholder coordinating groups or regulatory bodies for the licensing of health facilities, supported by national institutes such as national statistical offices, mapping agencies and in-country partners. In other countries, however, maintaining the master facility list continues to be difficult. Its completeness is likely to improve if systematic assessment is conducted – through, for instance, a facility accreditation system – and if districts regularly report new, continuing and discontinued or closed facilities, coupled with a complete facility census once every 5 or 10 years.
The SARA does not address other dimensions of access that require more complex measurement strategies, such as geographic barriers, travel time and facility use patterns. A potentially valuable indicator would be the proportion of the population living within a specified distance (e.g. 5 km) or travel time (e.g. 1 hour) of a health facility. Such a figure can be computed through spatial analysis if facility geocodes, the population distribution, road networks and transport options are known. This method has not found large-scale application because of its data demands and analytical complexity. Some countries rely on subjective reporting by facilities and districts of the proportion of their populations living within a specific travel time or distance of a health facility, but such data are often of questionable quality.
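The population-within-distance indicator described above can be sketched with a straight-line (great-circle) calculation. All coordinates and populations below are hypothetical, and straight-line distance is only a crude proxy for the road-dependent travel times discussed in the text:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def population_share_within(villages, facilities, max_km=5.0):
    """Share of the population living within max_km of any facility.
    `villages` is a list of (lat, lon, population) tuples;
    `facilities` is a list of (lat, lon) tuples."""
    total = sum(pop for _, _, pop in villages)
    covered = sum(
        pop for lat, lon, pop in villages
        if any(haversine_km(lat, lon, flat, flon) <= max_km
               for flat, flon in facilities)
    )
    return covered / total

# Hypothetical settlements (lat, lon, population) and one facility.
villages = [(8.48, -13.23, 12000), (8.60, -13.10, 3000), (8.90, -12.80, 5000)]
facilities = [(8.47, -13.22)]
print(f"{population_share_within(villages, facilities):.0%}")  # 60%
```

A production analysis would replace the straight-line threshold with travel-time isochrones over the road network, which is precisely the data demand that has limited large-scale use of this method.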
The SARA does not generate data on service affordability or quality. Data on service costs have been collected during previous facility assessments but did not appear to be a reliable reflection of the cost to users. Both service availability and readiness are preconditions for quality care but they are not indicators of quality in themselves. The SARA is designed to assess only the underlying prerequisites of service quality. Other instruments have been developed to measure client satisfaction and knowledge and health worker practices through provider interviews, client–provider observations and client exit interviews.17 A quality-of-care study or a disease-specific survey could be implemented alongside the SARA as an additional module, which would reduce field costs and promote harmonization in data collection and analysis.
In light of the increasing demand for harmonization and alignment of partner support behind strong national health strategies through the International Health Partnership (IHP+), there is renewed impetus to reduce fragmentation of data collection and parallel disease reporting systems and to invest in a more harmonized approach to data collection and analysis through a common monitoring and evaluation platform.3,16 The call for better accountability for results within the context of the recommendations of the Commission on Information and Accountability is also adding weight to this approach.23 The SARA is an example of such a harmonized approach to data collection. A growing number of programmes and donors, including the Global Fund and the GAVI Alliance, are moving towards investing in and using the SARA as the standard method for monitoring service delivery in a comprehensive way, with reduced fragmentation and duplication in tools and expenditures.
The Service Availability and Readiness Assessment (SARA) method was developed through a joint collaboration of the World Health Organization (WHO) and the United States Agency for International Development (USAID). The method builds upon previous and current approaches designed to assess service delivery, including the service availability mapping (SAM) tool developed by WHO and the service provision assessment (SPA) tool developed by ICF International under the USAID-funded MEASURE DHS project. It draws on best practices and lessons learnt from the many countries that have implemented health facility assessments, as well as on guidelines and standards developed by WHO technical programmes. The authors are grateful for the inputs of WHO staff with expertise in specific intervention areas and would like to thank the Ministries of Health of Burkina Faso, Cambodia, Haiti, Sierra Leone, the United Republic of Tanzania and Zambia for their collaboration and support in data collection and in country-specific analyses.
- The world health report: health systems financing: the path to universal coverage. Geneva: World Health Organization; 2010. Available from: http://whqlibdoc.who.int/whr/2010/9789241564021_eng.pdf [accessed 25 November 2012].
- Evans DB, Hsu J, Boerma T. Universal health coverage and universal access. Bull World Health Organ 2013; 91: 546-546A http://dx.doi.org/10.2471/BLT.13.125450 pmid: 23940398.
- Monitoring the building blocks of health systems: a handbook of indicators and their measurement strategies. Geneva: World Health Organization; 2010. Available from: http://www.who.int/healthinfo/systems/monitoring/en/index.html [accessed 27 August 2013].
- Tanahashi T. Health service coverage and its evaluation. Bull World Health Organ 1978; 56: 295-303 pmid: 96953.
- Penchansky R, Thomas JW. The concept of access: definition and relationship to consumer satisfaction. Med Care 1981; 19: 127-40 http://dx.doi.org/10.1097/00005650-198102000-00001 pmid: 7206846.
- Edward A, Matsubiyashi T, Fapohunda B, Becker S. A comparative analysis of select health facility survey methods applied in low income countries. Chapel Hill: University of North Carolina, Carolina Population Center; 2009 (MEASURE Evaluation Working Paper Series WP-09-11).
- Bryce J, Victora CG, Habicht JP, Vaughan JP, Black RE. The multi-country evaluation of the integrated management of childhood illness strategy: lessons for the evaluation of public health interventions. Am J Public Health 2004; 94: 406-15 http://dx.doi.org/10.2105/AJPH.94.3.406 pmid: 14998804.
- Needs assessment of emergency obstetric and newborn care. New York: Columbia University; 2010. Available from: http://amddprogram.org/d/content/needs-assessments [accessed 27 August 2013].
- Cameron A, Ewen M, Ross-Degnan D, Ball D, Laing R. Medicine prices, availability, and affordability in 36 developing and middle-income countries: a secondary analysis. Lancet 2009; 373: 240-9 http://dx.doi.org/10.1016/S0140-6736(08)61762-6 pmid: 19042012.
- Gupta N, Dal Poz MR. Assessment of human resources for health using cross-national comparison of facility surveys in six countries. Hum Resour Health 2009; 7: 22 http://dx.doi.org/10.1186/1478-4491-7-22 pmid: 19284604.
- MEASURE DHS Demographic and Health Surveys [Internet]. The Service Provision Assessment (SPA). Calverton: Measure DHS; 2011. Available from: http://www.measuredhs.com/What-We-Do/Survey-Types/SPA.cfm [accessed 27 August 2013].
- Fronczak N, Fapohunda B, Buckner B, Schenk-Yglesias C. Using health facility profiles as a monitoring tool: an example based on data from three Africa countries. Chapel Hill: University of North Carolina, Carolina Population Center; 2007 (MEASURE Evaluation Working Paper Series WP-07-101).
- Creating a master health facility list. Geneva: World Health Organization; 2012. Available from: http://www.who.int/healthinfo/systems/WHO_CreatingMFL_draft.pdf [accessed 27 August 2013].
- Health Facility Assessment Technical Working Group. The signature domain and geographic coordinates: a standardized approach for uniquely identifying a health facility. Chapel Hill: University of North Carolina, Carolina Population Center; 2007 (MEASURE Evaluation Working Paper Series WP-07-91).
- Turner AG, Angeles G, Tsui AO, Wilkinson M, Magnani R. Sampling manual for facility surveys: for population, maternal health, child health and STD programs in developing countries. Chapel Hill: University of North Carolina, Carolina Population Center; 2001 (MEASURE Evaluation Manual Series, No.3).
- Monitoring and evaluation of national health strategies: a country-led platform for information and accountability. Geneva: World Health Organization; 2011. Available from: http://www.who.int/healthinfo/country_monitoring_evaluation/documentation/en/index.html [accessed 27 August 2013].
- Health Facility Assessment Technical Working Group. Guidance for selecting and using core indicators for cross-country comparisons of health facility readiness to provide services. Chapel Hill: University of North Carolina, Carolina Population Center; 2007 (MEASURE Evaluation Working Paper Series WP-07-97-en).
- The Global Fund to Fight AIDS, Tuberculosis and Malaria [Internet]. The five-year evaluation. Geneva: Global Fund; 2010. Available from: http://www.theglobalfund.org/en/terg/evaluations/5year/ [accessed 27 August 2013].
- Zambia service availability and readiness assessment 2010 summary report. Lusaka: Republic of Zambia, Ministry of Health; 2010. Available from: http://www.who.int/healthinfo/systems/sara_reports/en/index.html [accessed 27 August 2013].
- Sierra Leone service availability and readiness assessment 2011 report. Freetown: Government of Sierra Leone, Ministry of Health and Sanitation; 2012. Available from: http://www.who.int/healthinfo/systems/sara_reports/en/index.html [accessed 27 August 2013].
- United States Census Bureau [Internet]. International programs – Census and Survey Processing System overview – people and households. Washington: US Census Bureau; 2012. Available from: http://www.census.gov/population/international/software/cspro/ [accessed 27 August 2013].
- Noor AM, Alegana VA, Gething PW, Snow RW. A spatial national health facility database for public health sector planning in Kenya in 2008. Int J Health Geogr 2009; 8: 13 http://dx.doi.org/10.1186/1476-072X-8-13 pmid: 19267903.
- Keeping promises, measuring results: report of the Commission on Information and Accountability for Women’s and Children’s Health. Geneva: World Health Organization; 2011. Available from: http://www.who.int/woman_child_accountability/resources/coia_resources/en/index.html [accessed 27 August 2013].