Project Wavelength: prototyping research visibility

Most research insights die in storage. Decks sit in Google Drive, reports get filed, and six months later someone runs the same study because no one remembered the last one. Research silos form across a company - UX, product, customer support. Customers, meanwhile, give input but rarely see how their voices influenced anything. At the tooling level, many research repository tools are bloated, confusing, or saddled with access controls that do little to democratize research activities or insights at scale. Lather, rinse, repeat: research atrophy.

At a prior job, I got to thinking about how to fix this. Across several companies I had set up CABs, ongoing customer feedback loops, and panels, and wrangled research repositories and insights portals, so I had seen this problem repeat. I found and executed partial solves along the way - probably the most successful being an internally built insights knowledge base for a global candy manufacturer. But what I wanted now was a way to scale the idea, make it interesting, and bring customers along for the journey. So I came up with Project Wavelength, which I ran as a small experiment.

To start, I set up weekly “transmissions” - lightweight polls, quick data visualizations, and short blog posts that turned findings into simple stories. Internally, we learned more about our customers, and we helped stakeholders see throughlines across studies. Externally, it closed the loop with customers by publishing back how their input shaped product decisions.

But the real opportunity is bigger: use a framework like this as the communication layer of your customer research program, and it stretches far beyond just product design.

From one-off polls to research infrastructure

Connect the dots across methods

Polls were just one format; I chose them because they were fast and easy to build into a proof of concept. A scaled Wavelength program would weave together:

  • Customer Advisory Boards (CABs): distill what exec-level customers are saying into digestible themes for internal teams.

  • Formal research studies: usability, concept testing, or longitudinal research, summarized into simple narratives with key takeaways.

  • Ad hoc feedback loops: interviews, beta feedback, support tickets turned into “signals” that can be packaged and shared.

All of these can live as “transmissions.” Some are quick signals, others are deeper dives. The key is consistency in format and cadence, and a taxonomy model that ties them all together.
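To make the "taxonomy model" idea concrete, here's a minimal sketch of what a shared transmission schema could look like. Every transmission - a quick poll, a CAB theme summary, a formal study - carries the same fields, so one theme query can surface signals across all methods. The field names and categories here are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Transmission:
    title: str
    source: str   # e.g. "poll", "cab", "study", "support-signal"
    depth: str    # "signal" for quick reads, "deep-dive" for bigger studies
    themes: list = field(default_factory=list)  # shared tags tying methods together

def by_theme(transmissions, theme):
    """Find every transmission touching a theme, regardless of method."""
    return [t for t in transmissions if theme in t.themes]

archive = [
    Transmission("Weekly poll: onboarding friction", "poll", "signal", ["onboarding"]),
    Transmission("CAB recap: Q2 priorities", "cab", "signal", ["onboarding", "pricing"]),
    Transmission("Usability study: first-run flow", "study", "deep-dive", ["onboarding"]),
]

# One theme query surfaces signals across all three methods.
print([t.source for t in by_theme(archive, "onboarding")])  # ['poll', 'cab', 'study']
```

The point of the shared tags is exactly the throughline problem from earlier: without a common taxonomy, the poll, the CAB recap, and the study live in three places and nobody connects them.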

Show both breadth and depth

Think of it like layers:

  • Snackable signals: weekly polls, quick reads, simple charts.

  • Contextual stories: monthly blog posts tying multiple studies into a narrative.

  • Deep dives: quarterly reports that connect dots across CAB insights, large-scale research, and customer outcomes.

  • Annual retrospectives: presentations of impacts, outcomes, and future roadmaps.

The rhythm matters as much as the content. It builds a culture where people expect to hear from research regularly.

Close the loop with customers

This is where most programs fall short. CABs and studies often extract feedback but never circle back. By publishing transmissions that are both internally and customer-accessible (in a blog, email digest, or dashboard), you show customers what happened with their input. Even simple phrasing like “You told us X, so we did Y” builds trust and increases participation.

Make it infrastructure, not a side project

To scale, this type of work needs to be systematized:

  • Templates for poll design, blog posts, and visuals

  • Defined publishing cadence (weekly, monthly, quarterly)

  • Clear ownership (who translates findings, who publishes them)

  • Distribution channels (internal comms, customer newsletters, dashboards)

This turns research communication into an organizational capability, not something that disappears when one person leaves.
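One hypothetical way to pin those operational pieces down is a single program config that encodes cadence, ownership, and distribution per transmission type, with a check that nothing is left unowned. All names here are made up for illustration; the shape matters more than the specifics.

```python
# Illustrative config: each transmission type gets a cadence, an owner,
# and distribution channels, so the program survives personnel changes.
PROGRAM_CONFIG = {
    "snackable-signal": {
        "cadence": "weekly",
        "owner": "research-ops",
        "channels": ["internal-comms", "customer-digest"],
    },
    "contextual-story": {
        "cadence": "monthly",
        "owner": "lead-researcher",
        "channels": ["blog", "internal-comms"],
    },
    "deep-dive": {
        "cadence": "quarterly",
        "owner": "research-lead",
        "channels": ["dashboard", "exec-readout"],
    },
}

def validate(config):
    """Flag any transmission type missing an owner or a distribution
    channel - the gaps that quietly turn a program back into a side project."""
    problems = []
    for kind, spec in config.items():
        if not spec.get("owner"):
            problems.append(f"{kind}: no owner")
        if not spec.get("channels"):
            problems.append(f"{kind}: no distribution channel")
    return problems

print(validate(PROGRAM_CONFIG))  # [] - every type has an owner and a channel
```

A check like this is the difference between "someone usually publishes the digest" and a capability the organization can audit.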

Anchor it to business outcomes

The point isn’t engagement metrics; it’s influence. Success looks like:

  • Stakeholders referencing transmissions in roadmap discussions

  • Product priorities aligning with CAB or survey insights

  • Customers seeing tangible outcomes from their feedback, which increases retention and advocacy

Why it matters

I conceived Project Wavelength as a small prototype, based on what I already knew about the success of well-executed insights portals, but it demonstrated the bigger model: research gains impact when it's treated as communication infrastructure.

  • It amplifies customer voices internally.

  • It builds credibility for research and design externally.

  • And it strengthens long-term customer relationships by proving that input matters.

Using a framework like this is a significant way to scale and elevate research impact beyond the design and research team and bring other teams' insights into the fold. Not as a one-off report, but as a constant signal, pulsing and storytelling across the organization.


Since my initial Project Wavelength experiment, I’ve also experimented with Obsidian as a research insights repository using the atomic research model. I’m also working on a post on the details of tooling and infrastructure you’d need to set something like this up in practice. Stay tuned for those posts in the future.

Christine
User experience designer by day. Runner, blogger, artist, wanderluster by evening and weekend.
http://www.christineesoldo.com