Easy to Find: The Data-Driven Approach to Development
A data-driven approach to development allows you to pick a specific pain point, attempt to address it, and reasonably quantify the outcome of your attempt.
When we think about our daily interactions with technology, “searching” has become synonymous with “surfing.” Search is now woven into nearly everything we do online: almost any “connected” action we perform begins with some kind of search. This means two things: first, that as consumers of technology, we’ve come to expect seamless search experiences; and second, that the companies providing those search experiences have a whole lot of data about how we use them.
At Guru, we look at this data constantly in order to continue improving our search performance—and often, what we find surprises us. And although we ultimately believe that the best search is no search at all, we know that optimizing search will continue to help our customers find the knowledge they need.
Searching for an answer
In our recent efforts to improve our search performance, we thought of several ways we could categorize a successful or unsuccessful search. Was it session duration, Cards viewed, total clicks, number of queries? There were many ways we could have labeled searches as “good” or “bad,” but ultimately we decided to evaluate the actions that took place after a user typed into that familiar top bar and hit enter.
Enter our data team, who turned our curiosity into something we could actually see. We worked with them to determine the best way to evaluate our user data, and they built a sunburst chart of all the actions users were taking after their first query. After spending a good five minutes admiring their work and making sense of the visualization in front of us, we were ready to dive in and start evaluating which paths we liked, which we didn’t, and which we’d need to investigate further before forming a firm opinion.
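To give a sense of how a view like this can be put together, here is a minimal sketch, not our actual pipeline, that assembles a sunburst of the first few actions users take after their initial query. The file name, column names (session_id, action, timestamp), and action labels are all hypothetical placeholders.

```python
# Hypothetical sketch: build a sunburst of the first few actions users take
# after their initial search query. Column names, action labels, and the
# CSV file are placeholders, not a real schema.
import pandas as pd
import plotly.express as px

events = pd.read_csv("search_events.csv")            # one row per user action
events = events.sort_values(["session_id", "timestamp"])

# For each session, take the first three actions recorded after the first query.
def first_steps(actions: pd.Series, n: int = 3) -> list:
    acts = actions.tolist()
    start = acts.index("search_query") + 1 if "search_query" in acts else 0
    steps = acts[start:start + n]
    return steps + ["(end)"] * (n - len(steps))       # pad short sessions

steps = events.groupby("session_id")["action"].apply(first_steps)
paths = pd.DataFrame(steps.tolist(), index=steps.index,
                     columns=["step_1", "step_2", "step_3"])

# Count sessions per path and render them as nested rings.
counts = paths.groupby(["step_1", "step_2", "step_3"]).size().reset_index(name="sessions")
fig = px.sunburst(counts, path=["step_1", "step_2", "step_3"], values="sessions")
fig.show()
```

Each ring outward represents the next action a user took, so common paths are easy to compare at a glance and the less desirable ones stand out visually.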
Why take a data-driven approach to problem-solving?
Taking a data-driven approach to large problems provides the unique opportunity to pick a very specific pain point, attempt to address it, and reasonably quantify the outcome of your attempt. For example, if our team simply set out to “make search better,” there would be a long list of possible activities for us to pursue. We could try to increase the speed at which results populate, investigate tweaking our algorithm, or look into suggesting results to customers in new ways. All of these would be worthwhile endeavors and would likely improve search in some way. But a data-driven approach geared toward moving the needle on one specific outcome wins every time. Why? Let’s consider both methods.
Say we went with the blanket, let’s-try-everything-we’ve-ever-thought-of-at-once approach to improving search. We’d likely have lots of engineers, data scientists, product managers, and other colleagues focused on individual tasks, each working toward a specific improvement for which they were fully or partially responsible. They would finish these projects at dramatically different rates based on complexity, and then move on to the next thing. Simple enough. But when it came time for our team to reflect on the original task at hand (improving search), it would be very difficult to evaluate our success. Even if every metric we used to benchmark success moved in the right direction, how could we ever know which project(s) caused the improvement? Or, if our metrics moved in the wrong direction, how would we know which projects to pull back on?
Why choose a narrow focus for development?
By taking a more focused, fix-one-problem-at-a-time approach, we are better able to safeguard against these kinds of challenges. When it comes to search, that means that instead of setting out to “make search better,” we’d set out to improve one specific path on our sunburst chart that we’ve determined to be undesirable. For example, we could look at users who search again immediately after their first search, without ever viewing a Card. From there, we can consider all of the reasons why that might happen: is the desired Card not appearing in the search results? Is it too far down on the page? Did the user realize they were searching the wrong key terms and decide to try again? With those hypotheses in hand, we can weigh the different paths to resolving the pattern and design our next tasks accordingly. This kind of problem-based planning keeps our entire team focused on solving smaller challenges together, and lets us evaluate quickly and efficiently whether we’ve made the desired impact.
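As a purely illustrative example, here is how that one path, searching again before ever opening a Card, might be measured from the same hypothetical event log sketched above. The action labels and column names are assumptions, not production code.

```python
# Hypothetical sketch: estimate how often users search again before viewing
# any Card. Event names and columns are illustrative, not a real schema.
import pandas as pd

events = pd.read_csv("search_events.csv")
events = events.sort_values(["session_id", "timestamp"])

def researched_without_viewing(actions: pd.Series) -> bool:
    acts = actions.tolist()
    if "search_query" not in acts:
        return False
    for action in acts[acts.index("search_query") + 1:]:
        if action == "card_view":
            return False        # a Card was opened before any repeat query
        if action == "search_query":
            return True         # the user searched again without viewing a Card
    return False

flagged = events.groupby("session_id")["action"].apply(researched_without_viewing)
print(f"Sessions that re-searched before viewing a Card: {flagged.mean():.1%}")
```

Tracking a single, narrow number like this before and after a change is what makes it possible to say whether a given fix actually moved the needle.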
Since search is a core component of any knowledge management tool like Guru, we know that it’ll always be a primary focus for us. Taking a data-driven approach allows us to ensure that we are thoughtful and intentional in how we approach solving each piece of the puzzle.