When reliable and practical evidence related to design features and principles is used, facility design decisions can contribute to more effective health services delivery. Recent trends suggest a heightened interest in evidence-based healthcare facility design. However, it is not clear how healthcare organizations engaged in renovation or new construction projects actually use different types of information during the design process, or what kinds of evidence sources they draw upon.

In order to examine how evidence was being used in design decision-making, interviews and observation sessions were conducted at three Pebble Partner healthcare facilities at different phases of the evidence-based design (EBD) implementation process. The site visit for the predesign phase was conducted at a Midwestern community hospital (Waukesha Memorial Hospital, ProHealth Care); the design phase at a Northeastern academic medical center (Princeton University Medical Center); and the construction phase at a Pacific Coast long-term care safety net facility (Laguna Honda Hospital). Questions addressed during these site visits included:

  • What sources of information are used by EBD practitioners?
  • How are these sources of information used to inform the EBD decision-making process?
  • How is the “value” of evidence weighed by EBD practitioners?
  • What are additional factors, barriers, or constraints that may facilitate or impede the use of evidence as part of the design decision-making process?

What sources of information are used by EBD practitioners?

Participants at all three sites indicated they used a range of information sources during the design process. These included written materials, such as peer-reviewed journal articles, trade journal articles, reports, white papers, and opinion pieces, as well as input from consultants and from design team presentations and talks. They also mentioned accessing the Web sites of relevant organizations in the field to get cutting-edge information about different topics.

While these published and unpublished information sources were considered useful, it was evident from the discussions at all sites that “experiential knowledge” trumped all other sources of information in terms of usability and relevance to the design process. This included information obtained during site visits to other hospitals, discussions with peers regarding specific issues, day-to-day experience of working in their current hospitals, participation in peer groups, and formal and informal discussions with community leaders and stakeholders.

There was unanimous agreement among participants across all three sites that, irrespective of where they were in the design process, site visits were critical sources of information. Time and again, participants referred to the importance of being able to “touch and feel” design strategies under consideration. Observing something in operation, seeing what worked and what didn’t, and critically evaluating whether a certain feature would likely work within the context of their own facility and organization were all described as very helpful.

While similar sources of information were referred to as being important during all three phases, there appeared to be a greater emphasis during the design phase on using knowledge from day-to-day experiences. At Princeton, as participants described how they made fairly detailed design decisions regarding placement of different types of spaces, it became clear that they relied upon experiences working in their current environment. This was less evident during the predesign phase, when participants were told to think of ideal scenarios. They referred to this as their “dream phase.”

How are these sources of information used to inform the EBD decision-making process?

During both the predesign and design phases, participants described how design recommendations based on different information sources were vetted against guiding principles that had been laid out for the project. For example, while financial issues were important during the predesign phase, participants felt they should not be restricted by budgets. During the design phase, however, the guiding principles prevented decisions from being swayed by individual preferences, and information regarding financial feasibility was often weighted more heavily.

At ProHealth Care, decision making during predesign was often guided by comparing the information obtained from different sources (site visits, literature, and peer discussions) for consistency. If different types of information reinforced the same claims, there was more confidence about implementing those ideas. Recommendations were then translated into design ideas through annotated diagrams. Princeton also emphasized the importance of triangulating multiple information sources and comparing design recommendations against its day-to-day working experience to determine applicability. Suggested recommendations needed to make sense to be useful. At both facilities, information from peers and site visits was deemed more useful than all other sources of information, especially for new or untested ideas.

Both Princeton and Laguna Honda described the need to conduct pilot tests to understand how designs would work. Princeton planned to build a series of mock-up patient rooms with increasing levels of detail and functionality to give its staff an opportunity to test functionality. Project team members planned to use the knowledge from the mock-up testing to make small changes in the room design as they entered the construction phase. Laguna Honda also described how it was pilot testing a range of operational strategies in its current facility, for example, testing the effectiveness of placing “do not disturb-administering patient care” signs on med carts as well as on nurses’ uniforms to minimize interruptions during medication dispensing.

How is the “value” of evidence weighed by EBD practitioners?

Participants at all three of the facilities understood the importance of ascertaining the credibility of a single piece of evidence. At ProHealth Care, a structured process called the “Iowa Model of Evidence-Based Practice to Promote Quality Care” was used as a standard for evaluating individual studies as well as a body of evidence. Across the three organizations, the credibility of a single piece of evidence was usually determined by examining the source of the information, including who conducted the research and where it was published.

Some participants described how the credibility of a study could be determined by examining the rigor of the methodology, the appropriateness of the study design, and the sample size. A key concern was whether variables had been sufficiently isolated in the studies so that any causal claims could be effectively judged. Participants noted that in healthcare design research, many variables change at once, and they questioned how an outcome could then be attributed to any particular design intervention.

Similar to its study of a single piece of evidence, ProHealth Care routinely used the “Iowa Model” to evaluate bodies of evidence for the design project and for operational and clinical decisions. Design recommendations provided by architects and consultants were often assumed to be supported by a strong body of evidence. The architects were rarely questioned about the credibility of the body of evidence being presented. Additionally, if participants considered the source to be credible, the recommendation was usually assumed to be credible. Most importantly, if consistent patterns or themes were evident from multiple sources of information about a design strategy, participants considered the overall body of evidence more credible. Participants at the design phase also mentioned the importance of corroboration from peers in determining the credibility of a recommendation put forth by a body of evidence.

Participants at all sites mentioned that a design recommendation based on a body of evidence was more likely to be accepted and implemented if it seemed like common sense. They also mentioned the recommendation would have a greater chance of acceptance if it matched their own experiences. Further, the recommendation would likely pass the common sense threshold if it was something they had been exposed to through discussions with peers, conference presentations, Webinars, or other means.

Both Princeton and Laguna Honda described the importance of a design recommendation being able to meet the goals set forth in the guiding principles. Participants from Princeton also mentioned that they referred to the guiding principles to determine feasibility. Having the guiding principles in place during design team meetings helped gain acceptance for design recommendations that served the organization’s goals rather than individual interests.

A key factor that enabled acceptance of a design recommendation was financial feasibility. At Princeton, a standardized room concept was approved once it was shown to be financially feasible; thereafter, it was not challenged in the hospital design team meetings. Additionally, if a design strategy had been implemented elsewhere and its successes and failures were well documented, design teams usually had a better basis for making decisions. Further, hospital teams were more likely to accept a recommendation if they had a chance to visit a site and observe that it functioned consistently with what had been presented. This was especially true for ideas that were new to their organizations.

A key criterion participants used for determining whether a design strategy based on a body of evidence was actionable was whether it would work within the context of their facility. Participants stated that site visits were particularly beneficial in this regard: the visits gave them an opportunity to see a design strategy (e.g., same-handed, canted rooms) in action, judge whether it worked as claimed, and assess whether they had the framework in place to support implementation. Organizations also mentioned assessing a study or studies for relevance by examining the study site, the size and type of the hospital, the number of patient beds, patient volume, and so on. According to the CEO at Princeton, for a design strategy to be actionable, it had to be financially feasible and have a positive impact on patient safety outcomes.

Again, pilot testing was useful before deciding to adopt a design or operational strategy. Participants from both Laguna Honda and Princeton described situations where discussions with staff were critical to implementing design strategies tied to operational efficiencies.

What are additional factors, barriers, or constraints that may facilitate or impede the use of evidence as part of the design decision-making process?

Participants at all three facilities were forthcoming about perceived barriers and facilitators to the use of evidence in the design decision-making process. Identified barriers included:

  • Lack of replication of findings in EBD;
  • Time lag in research becoming available;
  • Lack of a government or other leading agency that develops research, builds broad-based consensus, and regulates and disseminates the research;
  • Lack of access to published studies;
  • Lack of adequate evidence;
  • Costs of implementing EBD strategies in design; and
  • Lack of validation tools for assessing study findings.

Overall, the session participants stressed the need for more research to be conducted on key topics, with findings vetted and made available to decision makers in a timely fashion. They also stressed the need for a central agency or agencies to fund research, monitor its quality, and regulate and disseminate findings. Finally, participants noted that the lack of standard tools for assessing the quality of studies was a barrier to the use of evidence.

Participants felt that the use of evidence would be facilitated by:

  • Having information regarding design strategies and concepts available prior to site visits;
  • Having access to documented success stories and proven track records that can be emulated;
  • Promoting research by making it a priority from a regulatory standpoint, increasing funding for research, and making research protocols/tools available;
  • Creating an EBD checklist to ensure implementation of key EBD strategies;
  • Offering timely and easy access to research findings;
  • Participating in educational conferences;
  • Having evidence distilled down and available in an easy-to-use format; and
  • Creating packages for decision makers, including solutions that offer the best return on investment.

Through the multiple observations, interviews, and site visits, it is evident that design decision making is a complex process, and many organizations struggle to incorporate research evidence when deciding whether to use specific design interventions. Some key questions that can be used to evaluate the evidence presented during design team discussions include the following:

  • How do you feel about the quality of the studies that are used to support a finding of effectiveness?
  • To what extent do you feel there are enough well-executed studies to back up these findings?
  • To what extent do these studies agree with one another about whether the design feature is effective?
  • How much of a difference is likely to result from implementing this feature?

The content above was gathered for a research initiative prepared for the Agency for Healthcare Research and Quality and developed in conjunction with Battelle Centers for Public Health Research and Evaluation and Manila Consulting Group. Special thanks to Edward Liebow, PhD (Battelle), Stephen Tregear, PhD (Manila Consulting), Jessica Williams, PhD, MPH, RN (Manila Consulting), Xiaobo Quan, PhD (The Center for Health Design), and Callie Fahsholz, EDAC (The Center for Health Design) for their support in the development of the conceptual framework for evidence evaluation and for participating in the site visits.

Amy Beth Keller, MArch, EDAC, is a Research Associate/Pebble Design Strategist for The Center for Health Design (CHD). Anjali Joseph, PhD, EDAC, is CHD’s Consulting Director of Research, and Ellen Taylor, AIA, MBA, EDAC, is a Research Consultant for CHD.

Healthcare Design 2010 November;10(11):18-23

The Pebble Project creates a ripple effect in the healthcare community by providing researched and documented examples of healthcare facilities where design has made a difference in the quality of care and financial performance of the institution. Launched in 2000, the Pebble Project is a joint research effort between The Center for Health Design and selected healthcare providers that has grown from one provider to more than 45. For a complete prospectus and application, contact Mark Goodman at mgoodman@healthdesign.org.