Last year, my team explored different business models that might create new sources of revenue for our organization. I knew the project was going to be risky given we were a new business within an existing organization. Here’s what worked best with this project:
- Intentional communication
- Public testing dashboards
- Team agreements
- Test plans
- Regression tests
We started by creating a list of the different business models that could work. We prioritized the list and documented our assumptions about the exchange of value between each interested party (us, clients, and consumers). Our relationship with our consumers is everything, so we had to be very thoughtful about what tests we ran in our existing experience.
Intentional communication
We spent a lot of time upfront communicating the assumptions we held and how we were going to validate these assumptions. We sent an email out every two weeks about what we’d learned and what we were planning to do next. In hindsight, I’m not sure how many people actually read the email. We should have measured that!
We learned to be careful about how we communicated the results of our tests because stakeholders drew hasty conclusions which haunted us throughout the project. In our business model, we had an assumption about how many consumers would answer a new question on our sign up form. In an early test, we saw that a much higher percentage of consumers shared their information than we’d assumed in the business model. Outside the team, that percentage became the new bar against which all future tests were measured, but it was a flawed test, and it was hard to rein back what had been shared. We learned a lot about experiment design from our mistakes in those first tests.
Public testing dashboards
One of our best ideas was to keep track of tests using a testing dashboard where anyone could see what tests we’d run and the status of those tests. It was just a simple wiki page. We went back to it time and again as we moved forward. It was so helpful to have one source for all our results.
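Under the hood, a dashboard like this is just a table of tests and their status. As a rough sketch (the test names, statuses, and fields below are illustrative, not our actual dashboard), it could be generated from a simple list of records:

```python
# Hypothetical sketch of a testing dashboard's underlying data.
# Field names and example tests are illustrative only.
tests = [
    {"name": "Sign-up question response rate", "status": "complete", "result": "validated"},
    {"name": "Willingness to pay",             "status": "running",  "result": "pending"},
]

def render_dashboard(tests):
    """Render the test list as a simple text table suitable for a wiki page."""
    lines = ["| Test | Status | Result |", "|------|--------|--------|"]
    for t in tests:
        lines.append(f"| {t['name']} | {t['status']} | {t['result']} |")
    return "\n".join(lines)
```

Keeping the record in one structured place is what matters; the wiki page is just a view of it.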
To keep track of our work as we built tests, we started with a Kanban board with a validation column so that no work could be completed unless it had been validated. We found this cumbersome and switched back to our Sprint board. For each piece of the value chain we wanted to test, we created a story to represent the work needed to build the test. Then we created a “test” story where we defined the goals of the test. These test stories had to get done before we could move on to our next story. This worked better for our team.
Team agreements
Team agreements helped us work collaboratively. One agreement was that anyone could stop the factory line: we would stop any test at any point if we thought it was hurting core business metrics. We would pause what we were doing and take the time to investigate. It was frustrating at times because it felt like it slowed us down; I just wanted to keep validating assumptions. However, in the long run, it helped us build trust within our organization and ultimately helped us run faster.
We also had a team agreement that we would continue to test one assumption in the business model until everyone on the team felt comfortable moving on to the next assumption. I liked how this made us feel like a collaborative team. But it’s human nature to want to keep testing when a test fails. We want our tests to pass. We would rework the test, fix it, try it another way and on and on. Since we had the team agreement that we wouldn’t continue until we were all satisfied, we iterated too long on some of our tests.
Test plans
Test plans – we needed better test plans before we started testing. A test plan is a written statement of what you are testing and its objective. It defines your path forward if the test passes, fails, or is inconclusive. Knowing how you will proceed before you test is essential and helps the team move faster. I wish we had done this sooner.
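The value of a test plan is that the pass, fail, and inconclusive paths are written down before the result comes in. As a minimal sketch, that idea can be captured as data — the thresholds and next steps below are illustrative, not the ones we used:

```python
from dataclasses import dataclass

@dataclass
class TestPlan:
    """What we are testing, its objective, and the path for each outcome."""
    assumption: str          # the business-model assumption under test
    success_metric: str      # what we measure
    pass_threshold: float    # at or above this, the test passes
    fail_threshold: float    # at or below this, it fails
    on_pass: str             # next step if the test passes
    on_fail: str             # next step if it fails
    on_inconclusive: str     # next step if the result lands in between

    def next_step(self, observed: float) -> str:
        """The path is already written down before the test runs."""
        if observed >= self.pass_threshold:
            return self.on_pass
        if observed <= self.fail_threshold:
            return self.on_fail
        return self.on_inconclusive

# Hypothetical example; the numbers are made up for illustration.
plan = TestPlan(
    assumption="Consumers will answer a new sign-up question",
    success_metric="share of consumers answering",
    pass_threshold=0.30,
    fail_threshold=0.10,
    on_pass="Move to the next assumption",
    on_fail="Revisit the business model",
    on_inconclusive="Rerun with a larger sample",
)
```

Writing the plan this way forces the "what happens next" conversation to happen before anyone sees the result.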
Regression tests
Another good practice we established was to set up regression tests for any previously tested assumptions that we’d validated. We would kick off those regression tests on a regular basis to make sure that our assumptions still held true. Because we were running tests within our existing consumer experience, we had to make sure we were not harming our existing business metrics (e.g., new user sign ups), and the regression tests helped ensure that.
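The core of such a regression check is comparing each metric against the baseline recorded when the assumption was validated. A minimal sketch, assuming a simple relative-drift tolerance (the metric names and numbers are illustrative):

```python
def regression_check(validated_baselines, current_metrics, tolerance=0.1):
    """Flag any previously validated assumption whose metric has drifted
    more than `tolerance` (relative) below its validated baseline."""
    alerts = []
    for name, baseline in validated_baselines.items():
        current = current_metrics.get(name)
        if current is None or current < baseline * (1 - tolerance):
            alerts.append(name)
    return alerts

# Hypothetical example values, not our real metrics.
validated_baselines = {"new_user_sign_ups": 1000, "sign_up_question_response": 0.25}
current_metrics = {"new_user_sign_ups": 980, "sign_up_question_response": 0.15}
```

Running this on a schedule gives an early warning that a validated assumption has stopped holding, rather than discovering it after the core metrics have already suffered.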
While we ultimately didn’t move forward with the project, we learned how to search for truth without building a full solution that wouldn’t work. In the process, we built our capability in exploring and testing new business models.