Research in practice is a commitment to better healthcare design: it informs design decisions and measures their impact after occupancy. The goal of this research isn't academic; it's rooted in real-world priorities: create a better environment, enhance the human experience, enable a better quality of life, and, most importantly, provide value to all stakeholders by demonstrating a tangible return on investment.

To do so, many design firms are instilling a culture of research and a commitment to metrics that matter. The challenge is to integrate research with design, making it part of the process. The following six-step process highlights how research thinking can be incorporated into existing processes to enable designers to target and achieve better outcomes.

1. Create designs based on key performance goals of the organization.
Setting a target based on the needs of the organization and its defined performance indicators is key. Goals must be clearly identified and aligned with the organization so that success can be measured against meaningful end-state metrics.

Aim for a target shared between the design team and the facility. For example, goals may include enhanced experience, safety, efficiency, lifecycle cost, and return on investment. For healthcare organizations, these could be linked to length of stay, HCAHPS scores, infection rates, fall rates, medical errors, nurse retention, etc.

Understanding which of these goals is a priority, and what metrics to use to assess these goals, is vital. An understanding of organizational metrics helps target success metrics and poses questions for discovery.

2. Gather knowledge, understand users, simulate scenarios, and test prototypes, using tools that balance technology with empathy.
The exploration phase is the most exciting part of project start-up, and research in this phase can also be fun.

In the information age, there's a danger of data overload, so it's vital to have filters in place to ensure that what's consulted is relevant and credible. Excellent online resources offer synthesized information, such as the Knowledge Repository at The Center for Health Design; Research Design Connections; the Environmental Design Research Association; and the Building Research Information Knowledgebase, the research database of the National Institute of Building Sciences.

The key is to synthesize and own this information, whether as sketches, mind maps, diagrams, or other working artifacts. The knowledge must be made actionable.

Next, use tools such as mock-ups, rapid prototyping, observations, storyboards, experience mapping, and simulations to investigate concepts under development. The approach must include empathy (understanding users and processes) as well as technology (testing materials and simulating scenarios).

Most of this is part of the due diligence of a design team and can be converted to systematic research by using the right tools, documenting the process, and committing to appropriate analysis.

3. Link design solution to performance hypothesis.
Once key concepts have been explored, converge on potential design solutions and map each design solution against the outcome intended.

Every design decision is a performance hypothesis. For example, strategically located handwashing sinks will increase handwashing compliance and reduce infection rates; evidence-based positive distractions, such as art, in a patient’s sight line will reduce stress and promote recovery; a supported path to the bathroom will reduce fall rates, etc.

In fact, each decision made in design is an implicit assumption of a desired human and environmental performance. The next step is to craft these assumptions into a clear, testable hypothesis.

4. Identify key metrics for design and performance and collect baseline data. Not all measures are quantitative.
To test a hypothesis, one must know what to measure. Matching the design to its intended outcomes allows design metrics to be paired with outcome/performance metrics.

For example, track walking distances, ergonomics, and lines of sight for a nurses’ unit (design metrics) and then test for any impact on staff satisfaction scores or staff turnover (performance metrics).
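As an illustration only (the metric names below are hypothetical, not drawn from any real project), teams that track these pairings digitally can capture the design-to-performance mapping in a simple data structure, which also yields the list of baseline data to collect:

```python
# Hypothetical sketch: pairing each design metric with the performance
# metrics it is hypothesized to affect. All names are illustrative.

hypotheses = [
    {"design_metric": "avg_walking_distance_m",
     "performance_metrics": ["staff_satisfaction", "staff_turnover"]},
    {"design_metric": "pct_beds_in_sightline",
     "performance_metrics": ["fall_rate"]},
]

def metrics_to_collect(hypotheses):
    """Return every metric (design and performance) that needs baseline data."""
    needed = set()
    for h in hypotheses:
        needed.add(h["design_metric"])
        needed.update(h["performance_metrics"])
    return sorted(needed)

print(metrics_to_collect(hypotheses))
```

Keeping the mapping explicit in one place makes it harder for a metric to be designed for but never measured.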

5. Ensure design is implemented as planned, aiming for targeted performance goals.
Once the design is complete, it’s important to track the project through documentation and construction phases to make sure the hypotheses are protected.

Many good ideas from the initial design phase never see the light of day because the rationale for how a particular solution could improve a space and deliver long-term benefits isn't woven into the actual documents. To resolve this, the design features linked to performance outcomes should be called out, even on a single plan, with the specific metrics placed next to them, making it clear that value engineering will have an impact beyond first costs.

Often, multiple design features (daylight, proximities, visibility lines, ergonomics) contribute to a single metric (satisfaction). Or a single design feature (daylight) may contribute to multiple performance metrics (energy, satisfaction, mood). The more explicit these links are made, the easier it is for a team to make tough decisions based on a long-term, tangible return on investment rather than short-term first costs.

At the end of the day, the entire building and its design work to create a better experience—a better measurable performance.

6. Test the hypothesis.
When the building is complete and occupied, test the hypothesis to see if the design performed. To test success, a baseline is needed to measure against. This could be current conditions for an existing facility or an industry benchmark for new greenfield projects.
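As a minimal sketch of what this baseline comparison might look like (the metric names and numbers below are invented for illustration), post-occupancy results can be checked against the baseline, accounting for whether each metric should move up or down to count as an improvement:

```python
# Hypothetical example: comparing post-occupancy performance metrics
# against a baseline (existing conditions or an industry benchmark).
# Metric names and values are illustrative, not real project data.

baseline = {"falls_per_1000_days": 4.2, "hand_hygiene_pct": 61.0}
post_occupancy = {"falls_per_1000_days": 3.1, "hand_hygiene_pct": 74.0}
# Direction each metric must move to count as an improvement.
improves_when = {"falls_per_1000_days": "lower", "hand_hygiene_pct": "higher"}

def evaluate(baseline, measured, improves_when):
    """Return percent change from baseline and whether each metric improved."""
    results = {}
    for metric, base in baseline.items():
        value = measured[metric]
        change_pct = (value - base) / base * 100
        if improves_when[metric] == "lower":
            improved = value < base
        else:
            improved = value > base
        results[metric] = {"change_pct": round(change_pct, 1),
                           "improved": improved}
    return results

for metric, r in evaluate(baseline, post_occupancy, improves_when).items():
    print(metric, r)
```

Even a simple tally like this forces the team to state, per metric, what the baseline was and which direction counts as success, before interpreting the results.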

Ask the question: "To what extent did the project succeed or fail (meet the target set for the project and achieve the outcomes hypothesized), and why?" Conduct a facility performance evaluation to look at the facility as a whole, or invest in key research studies on hypotheses harvested during the process.

It’s important to remember that design doesn’t “cause” improved outcomes; it creates compelling conditions for improved outcomes. Sometimes a target will be achieved; other times it will be missed.

In healthcare design, many factors must converge (including operational processes and organizational culture) to bring about improved outcomes, making it difficult to tease apart the role of design. This is where research is perhaps most impactful: using systematic investigation to test the impact of design while also accounting for other factors that could have an effect.

If a systematic approach has been taken during design and documentation, and if metrics have been tracked throughout the process, then this “test” becomes quite easy.

As daunting as it may seem, research in design practice isn’t rocket science. All it takes are six simple steps, a commitment to measuring, and an ongoing dialogue between research and design.

Upali Nanda, PhD, is the director of research at HKS and executive director of CADRE. She can be reached at unanda@hksinc.com. Tom Harvey, FAIA, is a healthcare principal at HKS and president of CADRE. He can be reached at tharvey@hksinc.com.