Customer satisfaction is arguably one of the most important key performance indicators (KPIs) in the business world. Facilities management (FM) researchers and practitioners have begun to examine this KPI and adapt it for use in facility services. Most of these adaptations have focused on internal benchmarks and quality management. Though research demonstrates the effectiveness of competitive benchmarking and, more specifically, the effectiveness of customer satisfaction as a competitive benchmark, there is little research on how to use customer satisfaction as a competitive benchmark in FM. Researchers have focused on determining the appropriate content for customer service surveys but have not documented how and to what extent the surveys are currently being used. To help fill this gap in the research, an industry-wide study on operations and maintenance (O&M) was conducted for an FM organization in 2017. Approximately 700 respondents in the United States completed the survey, and the responses were analyzed to examine the use of customer/occupant satisfaction surveys in the nation's FM industry. The results indicate that two-thirds of the industry uses these surveys at varying frequencies; the frequency differs somewhat according to the size of the facility in Rentable Square Feet (RSF), as there is a positive correlation between RSF and survey frequency. Future research on the use of these surveys as a competitive benchmarking tool needs to focus on standardizing these surveys to enable more equitable comparisons and on conducting in-depth interviews to evaluate the process by which FM departments manage occupant satisfaction for continual improvement.

While the customer most businesses monitor is the consumer, the primary customers of FM services are the employees who occupy the company's buildings. This means that the occupants are the primary interest in assessing customer satisfaction. Customer satisfaction has been thoroughly discussed and addressed by both practitioners and researchers but has only begun to be addressed by researchers focusing on facility management in the past twenty years. Customer satisfaction is consistently considered to be one of the most important KPIs in FM and in industry overall (British Institute of Facility Management [BIFM], 2004; Fibuch & Van Way, 2013; Haverila, Martinsuo, & Naumann, 2013; Lavy, Garcia, & Dixit, 2010; Meng & Minogue, 2011; Walters, 1999).

Previous researchers' findings indicate that FM professionals may not understand how to use benchmarking to assist in performance management and continual improvement (Simões, Gomes, & Yasin, 2011; Massheder & Finch, 1998a, 1998b; Yasin, 2002). Most of the research on the use of customer satisfaction in FM has viewed this KPI as an internal benchmark. A limited amount of research is available on this KPI's use as a competitive benchmark in the FM industry, though the need for industry standards has been noted for some time (Tucker & Pitt, 2009a). The external benchmarking process is critical to achieving customer satisfaction, as the process draws upon perceptions external to the company, helping to overcome potential internal bias in company objectives. The purpose of this research study is to assess how practitioners in the industry use these surveys, how frequently they do so, whether their use varies by facility size, and whether their use affects the financial indicator of maintenance costs per Rentable Square Foot (RSF).

Measuring customer/client satisfaction has been considered critical to the benchmarking process since the process was developed at Xerox (Fibuch & Van Way, 2013). Stauffer (2003) suggested that the entire benchmarking process should be approached from the customer's point of view. Customer satisfaction has also been considered the top KPI for FM for a considerable time (Walters, 1999). This KPI has been adopted at an increasing number of FM organizations (Simões et al., 2011). Various researchers have suggested that customer satisfaction and/or service delivery is critical for aligning facility performance with overall business objectives (Pitt & Tucker, 2008; Tucker & Smith, 2008; Walters, 1999). Further, customer satisfaction is considered a holistically oriented core indicator for FM functions (Lavy, Garcia, & Dixit, 2014a, 2014b). Customer satisfaction is the most widely recognized and used KPI in FM and in industry overall. Based on the prevalence of usage reported in surveys, it may be the most important KPI as well (BIFM, 2004; Fibuch & Van Way, 2013; Haverila et al., 2013; Lavy et al., 2010; Meng & Minogue, 2011; Walters, 1999).

High levels of customer satisfaction and loyalty tend to lead to increased revenue, profitability, and stock prices (Haverila et al., 2013). The BIFM indicated that promoting customer satisfaction will be one of the most important facility issues through 2019 (BIFM, 2004). Lavy et al. (2010) reported that facility managers regard customer satisfaction as a top KPI regardless of whether a business- or facility-based perspective is used for ranking.

With customer satisfaction such a fundamental KPI for FM, it is essential to understand how this KPI is used to manage facilities, as well as to relate it to hard performance indicators to quantify its impact on performance. Some researchers have noted that benchmarking customer satisfaction can be challenging because it is difficult to detect relationships between soft (quality-based) metrics and hard cost data (which are essential to facility cost savings) through simple statistical analyses (Wong, Leung, & Gilleard, 2013). However, a number of studies have found that customer satisfaction is directly related to FM functions. According to Kärnä and Junnonen (2016), customer satisfaction is one of the major contributing factors to project success. Au-Yong, Ali, and Ahmad (2015) surveyed FM professionals and found that customer satisfaction negatively correlates with variance in office maintenance downtime. Rani, Baharum, Akbar, and Nawawi (2015) found that end-user satisfaction positively correlates with proactive maintenance and negatively correlates with corrective/breakdown maintenance. Despite the importance of customer satisfaction in terms of maintenance performance management, there is limited research on facility-oriented benchmarking. In a review of 251 articles on maintenance performance management, Simões, Gomes, and Yasin (2011) noted that only 11% of the articles even mention benchmarking. Other researchers have noted that benchmarking may not be fully understood or used correctly in the context of FM services (Massheder & Finch, 1998a, 1998b; Yasin, 2002).

A growing body of research reinforces the idea that soft metrics, such as customer satisfaction, are just as essential to managing facility performance as harder metrics, such as costs. Maintenance decision makers tend to achieve optimal solutions by using heuristics that are supported with qualitative and quantitative assessment data (Kumar, Galar, Parida, Stenström, & Berges, 2013). Tucker and Pitt (2010) suggested that FM performance managers should develop a mixed model that incorporates qualitative data pertaining to customer perceptions of FM service as well as quantitative data, such as Likert-scale ratings. These researchers contend that simple quantifications of satisfaction are incomplete and must be complemented by data on the perceptual processes behind the ratings.

Understanding how to better provide facility services requires a more complete understanding of customers' needs and perceptions, along with the factors that shape perceptions of service quality. Coenen, Waldburger, and von Felten (2013) found that the three main drivers of customer satisfaction are “the match between order and outcome of an FM service, the transparency of the process, and solution orientation of FM service employees regarding customer needs” (p. 274).

Large-scale interorganizational evidence shows that competitive benchmarking is a means of improving performance and reducing costs. A growing number of cities and states are requiring that energy rankings be benchmarked for large buildings in their jurisdictions. A study on the effects of this public benchmarking process indicates that in just the first few years after these requirements were implemented, energy consumption in those jurisdictions decreased by 2–3% (Palmer & Walls, 2015). There are some successful industry-driven surveys of customer satisfaction, used for competitive benchmarking, such as the J. D. Power Electric Utility Business Customer Satisfaction Survey. This survey, which has been used since 1999, measures customer satisfaction regarding 87 utility companies. The survey examines factors related to satisfaction, and the responses are scored to arrive at a ranking based on a 1,000-point scale. The survey has served as a longstanding benchmarking study, and customer satisfaction has improved over the years at all the participating utility companies. These companies generally attribute the increased satisfaction ratings to improved communication with customers (Andrejasich, 2017; Lustig, 2014).

Satisfaction surveys are used in a variety of contexts in FM because the discipline typically covers a wide array of noncore business services, from ongoing O&M to project management and holistic occupant satisfaction surveys (Tucker & Pitt, 2009b). Researchers on customer satisfaction in FM have explored detailed aspects of facility services, with the goal of identifying generic customer performance benchmarks that can be used for competitive benchmarking comparisons (Tucker & Pitt, 2009a, 2009b). According to Tucker and Pitt (2009b), facility services that are critical to customer satisfaction include building fabric, mechanical and electrical (M&E) engineering, waste management, grounds and gardens, cleaning, catering, mail services, security, health and safety, reception, and help desk. The researchers surveyed 230 members of the BIFM; the respondents rated the services regarding their efficiency, their criticality, and whether they were provided in-house or contracted out. Services that were rated as low in efficiency tended to be outsourced, whereas services rated high in efficiency tended to be handled in-house. The respondents also tended to rate frontline (soft) services, including health and safety, mail, and reception, as the most critical (Tucker & Pitt, 2009b). This research suggests that customer satisfaction regarding FM services is complex and may need to be measured based on multiple factors related to perceptions of service delivery. Tucker and Pitt (2010) contended that customer performance managers need to combine quantitative satisfaction survey data with qualitative data in order to accurately understand customer expectations and perceptions.

In a later study on the same topic, Tucker and Pitt (2010) identified areas of concern through qualitative case studies of customer performance management systems. The researchers provided the following recommendations, among others, regarding customer satisfaction surveys in FM:

  • Benchmark customer satisfaction across individual services on a permanent basis.

  • Review existing surveys to ensure that the level of detail is sufficient to measure satisfaction regarding particular services.

  • Review the current survey process to ensure the survey frequency is appropriate. A monthly survey may contribute to survey fatigue; once per quarter may be more appropriate.

In order to be effective, benchmarking needs to be a continual process (Camp, 1989; Spendolini, 1992), including when considering customer satisfaction (Tucker & Pitt, 2009a, 2009b). Despite the volume of research on the use of customer satisfaction for managing the performance and quality of FM services, little research is available on how the FM industry is specifically using occupant satisfaction surveys and how frequently surveys are being conducted. Before competitive benchmarking standards for occupant satisfaction can be developed for the United States, it is essential to accurately assess the current use of this KPI.

This study analyzes the effect of FM customer/occupant satisfaction survey use and frequency on maintenance costs per rentable square foot, as well as the relationship between survey use and facility size. The research questions aim to assess how facility management practitioners are utilizing the surveys, how frequently they administer them, how use and frequency differ by facility size, and how survey use relates to the financial performance metric of maintenance costs per square foot. Determining when and whether the surveys are being used is a fundamental step in establishing industry standards that will allow for competitive benchmarking for continual improvement.

  • RQ1: What is the rate of use of customer/occupant satisfaction surveys in the FM industry in the United States and how often are they conducted?

  • RQ2: Is there a relationship between the use of customer/occupant satisfaction surveys and the financial metric of maintenance costs per rentable square foot (RSF)?

  • RQ3: Is there a relationship between the size of the organization/facility and how customer/occupant satisfaction surveys are conducted?

The methodology for this study integrates (a) a model for guiding the benchmarking process and (b) a research methodology applied to survey content, for the purposes of quantifying industry trends and testing research hypotheses developed from a review of the literature. Spendolini's (1992) benchmarking model was adapted because of its succinct methodology and because it fit the unique demands of the researchers and the FM organization surveyed in the study. The study was completed over a period of approximately 18 months (August 2016 to February 2018).

Identification of industry need: Benchmarks, teams, and partners

The steps in the first phase—that of identifying industry need—included forming a benchmarking team, determining what to benchmark, and soliciting benchmarking partners. This phase of the research methodology overlapped with the benchmarking methodology in terms of purpose and timing. The industry need for current data was identified by an FM organization, which funded the study. The benchmarking team consisted of personnel at this organization, twelve subject matter experts (SMEs) selected by the FM organization, and researchers at UNC–Charlotte.

The team used a previous survey on facility O&M as the basis for the new survey. The previous survey covered facility characteristics, settings, and uses; utility costs and consumption rates; maintenance costs and staffing; janitorial costs and staffing; sustainability and green initiatives; facility practices and procedures; planning horizons; and other topics. For the updated survey, the researchers suggested content based on a literature review of benchmarking practices, such as multiple-choice range options for monetary costs and the inclusion of questions regarding the use of customer satisfaction surveys.

The benchmarking partners in this process were respondents who completed the survey, members of the FM organization, and the researchers. Participation was also open to the general public. Participation in the survey was incentivized by offering a copy of the benchmarking report to all participants.

Survey development and refinement

The survey had previously been updated and disseminated in 2012 but had experienced a low response rate and survey abandonment, leading to an incomplete data update. A goal in the current study was to achieve higher response and completion rates. In addition to collecting updated data on facility O&M benchmarking, the FM organization wanted to add new survey material based on the organization's identification of industry trends. Therefore, sections on (a) security costs and practices and (b) technology were added to the survey. Also, during the survey development process, the previous survey content was repeatedly reviewed and modified according to input from the SMEs. Through emails, the SMEs provided feedback regarding every question in the previous survey in terms of content, wording, order, appearance, and survey flow. The SMEs' industry expertise helped ensure that the survey could be easily completed and that it contained language familiar to those in the FM industry.

Also, with the help of the SMEs, a pilot survey was created over a 3-month period and was then tested at a major FM conference in October 2016. The purpose of this survey was to collect information on facility operating costs, with the results used to refine cost range parameters so the final survey could list multiple-choice ranges rather than exact costs. The survey results contained estimates regarding cost ranges for janitorial, maintenance, and utility functions of FM. These values were analyzed and compared to previous data to construct cost range intervals that would result in an approximately normal frequency distribution of responses.

Because previous survey data suggested that respondents tended to round their financial information or were hesitant to provide the information, the researchers proposed that the survey allow respondents to select from a range of values instead of entering specific numbers. For example, if a respondent opted to use the value-range format, he or she would be presented with the following six options as answers to the question “What is the annual cost of external building maintenance?”:

  • $0–$50,000

  • $50,001–$100,000

  • $100,001–$250,000

  • $250,001–$750,000

  • $750,001–$1,500,000

  • More than $1,500,000

The next step in the process was to calculate the midpoint of each range and treat the resulting number as the respondent's actual cost of external building maintenance. For instance, the midpoint of the range $50,001–$100,000 is $75,000.50 [($50,001 + $100,000) / 2 = $75,000.50]. The underlying assumption was that the ranges offered were based on the normal expected values for each cost category, per historical O&M cost data, and that respondents' true costs fell close to the middle of each range. Midpoint calculations of range estimates have been shown to be of equivalent validity to exact values in data analysis (He & Hu, 2009).
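As a concrete illustration, the midpoint substitution can be sketched as follows. The range boundaries come from the survey options listed above; how the open-ended "More than $1,500,000" bracket was handled is not stated in the study, so falling back to its lower bound here is an assumption, and the function name is illustrative.

```python
# Hypothetical sketch of the midpoint substitution described above.
# Range boundaries follow the survey's six multiple-choice options.

COST_RANGES = [
    (0, 50_000),
    (50_001, 100_000),
    (100_001, 250_000),
    (250_001, 750_000),
    (750_001, 1_500_000),
]

def range_midpoint(choice_index: int) -> float:
    """Return the midpoint used in place of an exact reported cost."""
    if choice_index < len(COST_RANGES):
        lo, hi = COST_RANGES[choice_index]
        return (lo + hi) / 2
    # Open-ended "More than $1,500,000" bracket: no midpoint exists;
    # using the lower bound is an assumption, not from the study.
    return 1_500_001.0

# Example: the range $50,001-$100,000 maps to $75,000.50
```

A respondent choosing the second option would thus be recorded as having an annual external building maintenance cost of $75,000.50 for analysis purposes.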

The revised industry-wide survey was piloted with SMEs and facility managers prior to the release of the survey to the public on February 13, 2017. The pilot participants, who completed the survey online via Qualtrics, provided feedback regarding the wording, presentation, logic, and accuracy of the survey items, helping ensure that the survey would effectively collect data needed throughout the industry.

The final version of the survey consisted of 134 questions. However, not every participant was asked every question, because the survey's presentation logic used a participant's responses to previous questions to eliminate upcoming questions that were not needed. Using this survey logic reduced repetition and completion time, helping to combat survey fatigue. The survey was organized into the following sections (blocks) for presentation purposes in Qualtrics: general information, contact name, facility description, janitorial, maintenance, maintenance plans, sustainability, utilities, energy management practices, security, technology, organizational, and costs. The survey required approximately 90 minutes for the average participant to complete.

The respondents had multiple options for how to provide cost data. The respondents could provide the data directly in the survey or could enter the data into a provided Excel file and then upload the file to Qualtrics. Respondents also had the option to provide data on more than one facility by filling out a multifacility costs file.

Data collection

The finalized survey was administered through Qualtrics. An email with the link to the survey was sent to the FM organization's members. The survey was open from February 13, 2017, to April 19, 2017, and emails were sent out weekly to invite and/or remind people to participate. The originally scheduled end date was March 31, but the FM organization extended the data collection period to obtain a greater number of responses. As an incentive to participate in the survey, individuals were told that participants would receive access to the survey results. The raw data were exported from Qualtrics to Excel files in preparation for data cleansing, quality management, initial analysis, and production of the industry-wide O&M benchmarking research report.

Data cleansing and quality management

The data were cleansed and managed for quality from mid-May to August 2017. The raw data were initially examined and sorted to exclude data from participants who did not provide all requested information. The first phase of data cleansing and quality management involved merging the various sources of data into one Excel file. The data existed in different formats because respondents could choose to report their costs directly in the survey or in an Excel file uploaded onto the Qualtrics platform. Additionally, respondents could provide data on multiple facilities by filling out a separate Excel file. The data from these files were merged into a Master Excel File, and the data were analyzed in SPSS.

In addition to the survey data, an organization provided an Excel file with data regarding more than 800 buildings managed by the federal government. The data in this file were also added to the Master Excel File, providing information primarily related to facility demographics, characteristics, and operating costs.

The second phase of cleansing and quality management consisted of checking the Master Excel File for errors, typos, and logical inconsistencies to ensure the self-report data were of high quality. Survey responses that contained unusual data were marked, and the participants who gave these responses were contacted for the purpose of verifying the information. These respondents were contacted first via email. Those who did not send a response confirming the accuracy of their data were contacted via telephone. Data that were not confirmed by the respondents were excluded from the data analysis. A total of 1,479 respondents met the criteria of completing the survey and, if applicable, verifying their responses. The respondents were primarily facility managers and were located throughout the United States.

RQ1: What is the rate of use of customer/occupant satisfaction surveys in the FM industry in the United States and how often are they conducted?

The survey results indicate that 66% of respondents used customer/occupant satisfaction surveys as part of their FM programs (N = 606). On the question regarding survey frequency, 43 respondents indicated that they utilized satisfaction surveys on a continual basis, through work orders, daily incidents, or projects. Though the survey choices included no option for this type of response, it was regularly provided in the write-in "other frequency" category. This response category was removed from the analysis in this paper because a continual frequency of evaluation is not consistent with known uses of occupant satisfaction surveys; the assumption is that respondents who indicated this frequency were referring to ongoing customer satisfaction evaluations rather than occupant satisfaction surveys specifically. Respondents who indicated collecting these data in a manner that could not be quantified, such as through informal conversations with staff, were also excluded from the analysis. After excluding those responses, the mean frequency of survey use was 1.89 times per year (M = 1.89, SD = 1.97).

RQ2: Is there a relationship between the use of customer/occupant satisfaction surveys and the financial metric of maintenance costs per rentable square foot (RSF)?

A two-tailed Spearman's rho rank correlation was used to determine whether there was a monotonic relationship between survey frequency and maintenance costs per RSF. The results indicate no significant relationship between the two metrics (ρ = .032, p = .478, N = 606). Levene's test for equality of variances was conducted to assess the variance in the sample; the results indicated no statistically significant difference in variance, F = .085, p = .771, α = .05. Likewise, a t-test did not indicate a significant difference in mean values for the two groups, t = .918, p = .359, α = .05 (see Appendix A for full details of the analysis).
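The study's analyses were run in SPSS. As a dependency-free sketch of the core statistic, Spearman's rho can be computed by rank-transforming both variables (averaging ranks over ties) and taking the Pearson correlation of the ranks; the function names and any data passed to them are illustrative, not the study data.

```python
# Minimal, dependency-free sketch of the Spearman rank correlation
# reported for RQ2 and RQ3. The study's analysis was performed in SPSS.
from statistics import mean

def rank(values):
    """Assign 1-based ranks, averaging ranks within tie groups."""
    sorted_idx = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j to the end of the current tie group.
        while j + 1 < len(values) and values[sorted_idx[j + 1]] == values[sorted_idx[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for positions i..j
        for k in range(i, j + 1):
            ranks[sorted_idx[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5
```

Because the statistic operates on ranks rather than raw values, it captures monotonic association without assuming linearity, which is why it suits ordinal data such as survey frequency categories.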

RQ3: Is there a relationship between the size of the organization/facility and how customer/occupant satisfaction surveys are conducted?

Spearman's rho rank correlation was used to determine the strength of the monotonic relationship between survey frequency and RSF. The results indicate a significant but weak positive relationship (ρ = .164, p < .001, N = 606). There is a small tendency for occupant survey frequency to increase as the size of the facility or organization increases.

This research question was further tested to determine whether facility size (RSF) differed between respondents who use occupant satisfaction surveys and those who do not. Levene's test for equality of variances was conducted to assess the variance in the sample. The results indicate a statistically significant difference in variance between the two groups, F = 10.455, p = .001, α = .05. An unequal-variance two-tailed t-test was then performed, and a significant difference in mean values for the two groups was found, t = 3.334, p = .001, α = .05. Respondents who indicated that they use occupant satisfaction surveys reported managing significantly larger facilities than those who did not use the surveys.
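The unequal-variance comparison above (performed in SPSS) corresponds to Welch's t statistic, which drops the pooled-variance assumption by standard-error-weighting each group separately. The sketch below is a minimal illustration with synthetic inputs, not the study data.

```python
# Minimal sketch of the unequal-variance (Welch's) t statistic used to
# compare facility size (RSF) between survey users and non-users.
# Synthetic data only; the study's analysis was performed in SPSS.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic without assuming equal group variances."""
    va, vb = variance(a), variance(b)      # sample variances (n - 1 denominator)
    se = sqrt(va / len(a) + vb / len(b))   # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# A positive t indicates the first group's mean is larger, mirroring the
# finding that survey users reported managing larger facilities.
```

When Levene's test rejects equality of variances, as it did here, Welch's form is preferred over the pooled-variance t-test because its standard error does not assume the two groups share a common variance.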

Research Question 1

The responses regarding the use of occupant satisfaction surveys convey an inconclusive picture of how facility managers utilize them. Only two-thirds of respondents indicated that they use customer/occupant satisfaction surveys; one-third of respondents indicated that they perform the surveys only once a year; 7% of respondents referred to ongoing customer satisfaction surveys rather than occupant surveys and were excluded from the analysis; and the remaining survey frequencies varied from every 5 years to monthly. This suggests some confusion in the responses, which may stem from one of two sources: industry professionals' inexperience with specific occupant satisfaction surveys, or the phrasing of the question, which refers to "customer/occupant" satisfaction surveys. The results also suggest considerable variance in the methodology by which facility managers gather data for evaluating satisfaction, though this variation occurs in only one-third of the respondent data, with the other two-thirds characterized by lack of use or use on an annual basis.

Research Question 2

The results of this analysis are consistent with previous research, which suggests that quantifying the relationship between customer satisfaction and hard, or financial, performance metrics, such as maintenance costs per rentable square foot (RSF), is rather difficult. The analysis of the relationship between the two variables suggests there is no linear relationship between survey frequency and maintenance costs per RSF, nor is there a significant difference in this financial KPI between those who use the surveys and those who do not. Though this analysis provides no support for an obvious linear relationship between the two variables, past literature indicates that actual customer satisfaction scores are related to how maintenance activities are performed within a facility (Au-Yong, Ali, & Ahmad, 2015; Rani, Baharum, Akbar, & Nawawi, 2015). This study did not evaluate actual scores on the Likert-scale surveys, but rather the use and frequency of those surveys.

Research Question 3

The results for this research question suggest that facility size may play a small but meaningful role in the use and frequency of customer/occupant satisfaction surveys. Survey frequency is positively correlated with facility size, and facility managers who use the surveys reported managing substantially larger facilities than counterparts who do not, which points to a curious trend in survey use. Several respondents indicated that they gathered occupant satisfaction data informally through personal conversations and relationships with staff. This scenario is more likely to occur in smaller facilities and organizations, where ongoing personal relationships can be maintained to provide continual data. Thus, qualitative data may be more accessible in smaller facilities and may supplement or replace more formal occupant satisfaction surveys, so smaller facilities may tend to use the formal surveys less frequently. Likewise, larger organizations may be more likely to rely upon formal tools, such as surveys, to gather the required information.

Previous research provides substantial evidence that competitive benchmarking can improve performance and customer satisfaction. Though some small-scale research has been conducted on using customer satisfaction as a competitive benchmark, there is a clear need to assess how occupant satisfaction surveys are currently being used and how more regular survey data, such as those gathered on work orders and projects, may supplement the occupant surveys to give a complete picture of satisfaction with all services provided by facilities management. Given the wide disparity in how often, if at all, FM organizations implement customer satisfaction surveys, developing a standardized process for benchmarking customer satisfaction in the FM industry will be complex; it will need to account for all the individualized services for which FM is responsible and the various approaches to gathering the required data for continual improvement. Standardizing the benchmarking process for similarly purposed service providers may inspire competition that significantly increases customer satisfaction. Since customer satisfaction has also been clearly linked to financial performance, standardizing how it is measured may offer an additional way to increase the perceived value of FM services. This study represents a small step toward evaluating the use of occupant satisfaction surveys in the facilities management industry, identifying the need for a standardized process for evaluating occupant satisfaction, and determining how to develop such a process so that occupant satisfaction can be competitively benchmarked for continual improvement.

Recommendations for Facilities Management Practitioners

Based upon the literature, survey, and analysis, the following recommendations can be made for facilities management practitioners regarding customer/occupant satisfaction surveys:

  • Occupant satisfaction surveys administered annually or less frequently likely provide insufficient data to manage a continual improvement process. While there is no single correct administration frequency, the literature suggests that quarterly surveys are both sufficient for data collection and infrequent enough to prevent survey fatigue or burnout.

  • Occupant satisfaction surveys should assess satisfaction with all services provided by facilities management. They should be a holistic assessment of occupant satisfaction rather than an assessment of specific projects or work orders.

  • Results of surveys should be acted upon, and there should be clear organizational goals for the data. If the data are not being used to effect continual improvement, they are not being utilized effectively and may represent a waste of resources.

Recommendations for Competitive Benchmarking Surveys of Customer/Occupant Satisfaction in FM

  • Though the terms customer and occupant are used interchangeably in the facilities management literature, questions on competitive benchmarking surveys should be specific rather than all-inclusive. Using the term “occupant satisfaction surveys” should help eliminate confusion between these surveys and surveys tied to specific projects or work orders.

  • Though frequency of use may be an important aspect of how these surveys are employed, future research and questions on competitive benchmarking surveys should also address the following:

    • Specific facility services rated on occupant satisfaction surveys

    • Results of those surveys

    • How data are tracked and what organizational goals (standards) exist for the data

    • How action is taken to correct problems

    • Qualitative data gathered through meetings or personal interactions

Future Research

This study did not assess the content of occupant surveys or evaluate the process by which survey results are implemented to effect continual improvement. Future studies of how occupant satisfaction surveys are utilized in the facilities management industry should address both the content of the surveys and the qualitative processes and data used in comprehensive customer/occupant satisfaction evaluation programs. Such data may best be gathered through follow-up interviews or surveys focused specifically on this topic. Interviews may offer particular insight into the complexity of occupant satisfaction programs and how they are used to manage the quality of facility services for continual improvement.

References

American Society of Testing & Materials (ASTM). (2016). Standard practice for building floor area measurements for facility management (ASTM E1836/E1836M-09(2016)). West Conshohocken, PA: ASTM International.

Andrejasich, K. (2017, Jan. 12). Business customer satisfaction with electric utilities hits high, JD Power says. SNL Energy Power Daily.

Au-Yong, C. P., Ali, A. S., & Ahmad, F. (2015). Participative mechanisms to improve office maintenance performance and customer satisfaction. Journal of Performance of Constructed Facilities, 29(4), 04014103.

British Institute of Facility Management (BIFM). (2004). Rethinking facilities management: Accelerating change through best practice.

Camp, R. C. (1989). Benchmarking: The search for industry best practices that lead to superior performance. Milton Park, England: Taylor & Francis.

Coenen, C., Waldburger, D., & von Felten, D. (2013). FM Servicebarometer: Monitoring customer perception of service performance. Journal of Facilities Management, 11(3), 266–278.

Fibuch, E., & Van Way, C. W. (2013). Benchmarking's role in driving performance. Physician Executive, 39(1), 28–32.

Haverila, M. J., Martinsuo, M., & Naumann, E. (2013). Drivers of customer satisfaction and relationship quality in system delivery projects. Journal of Strategic Marketing, 21(7), 613–636.

He, L. T., & Hu, C. (2009). Midpoint method and accuracy of variability forecasting. Empirical Economics, 38(3), 705–715.

Kärnä, S., & Junnonen, J. (2016). Benchmarking construction industry, company, and project performance by participants' evaluation. Benchmarking: An International Journal.

Kumar, U., Galar, D., Parida, A., Stenström, C., & Berges, L. (2013). Maintenance performance metrics: A state-of-the-art review. Journal of Quality in Maintenance Engineering, 19(3), 233–277.

Lavy, S., Garcia, J., & Dixit, M. (2010). Establishment of KPIs for facility performance measurement: Review of literature. Facilities, 28(9/10), 440–464.

Lavy, S., Garcia, J. A., & Dixit, M. K. (2014a). KPIs for facility's performance assessment, part I: Identification and categorization of core indicators. Facilities, 32(5/6), 256–274.

Lavy, S., Garcia, J. A., & Dixit, M. K. (2014b). KPIs for facility's performance assessment, part II: Identification of variables and deriving expressions for core indicators. Facilities, 32(5/6), 275–294.

Lustig, M. (2014, Feb. 17). JD Power: Improved communications helped raise utility satisfaction ratings. SNL Canada Energy Week.

Massheder, K., & Finch, E. (1998a). Benchmarking methodologies applied to UK facilities management. Facilities, 16(3/4), 99–106.

Massheder, K., & Finch, E. (1998b). Benchmarking metrics used in UK facilities management. Facilities, 15(5/6), 123–127.

Meng, X., & Minogue, M. (2011). Performance measurement models in facility management: A comparative study. Facilities, 29(11/12), 472–484.

Palmer, K., & Walls, P. (2015). Does information provision shrink the energy efficiency gap? A cross-city comparison of commercial building benchmarking and disclosure laws. Resources for the Future.

Rani, N. A. A., Baharum, M. R., Akbar, A. R. N., & Nawawi, A. H. (2015). Perception of maintenance management strategy on healthcare facilities. Procedia—Social and Behavioral Sciences, 170, 272–281.

Simões, J. M., Gomes, C. F., & Yasin, M. M. (2011). A literature review of maintenance performance measurement: A conceptual framework and directions for future research. Journal of Quality in Maintenance Engineering, 17(2), 116–137.

Spendolini, J. M. (1992). The benchmarking book. New York, NY: American Management Association.

Stauffer, D. (2003). Is your benchmarking doing the right work? Harvard Management Update, 8(9), 3.

Tucker, M., & Smith, A. (2008). User perceptions in workplace productivity and strategic FM delivery. Facilities, 26(5/6), 196–212.

Tucker, M., & Pitt, M. (2009a). Customer performance measurement in facilities management: A strategic approach. International Journal of Productivity and Performance Management, 58(5), 407–422.

Tucker, M., & Pitt, M. (2009b). National standards of customer satisfaction in facilities management. Facilities, 27(13/14), 497–514.

Tucker, M., & Pitt, M. (2010). Improving service provision through better management and measurement of customer satisfaction in facilities management. Journal of Corporate Real Estate, 12(4), 220–233.

Walters, M. (1999). Performance measurement systems—a study of customer satisfaction. Facilities, 17(3/4), 97–104.

Wong, P. Y. L., Leung, S. C. H., & Gilleard, J. D. (2013). Portfolio performance benchmarking with data envelopment analysis. Asia-Pacific Journal of Operational Research, 30(5), 1.

Yasin, M. M. (2002). The theory and practice of benchmarking: Then and now. Benchmarking, 9(3), 217–243.