
Feature Evaluation: Scoring your features objectively


Startups are almost always constrained on resources and driven by innovation. Time is an important currency for every startup, and deciding what not to do is as important as deciding what to do. These prioritisation decisions are more than just product decisions. Every single feature is an opinion, and often the result of hard work and research, which makes it personal. As a result, the person proposing the feature often becomes a point of consideration as well. Imagine a situation where the feature request comes from your CEO or even an investor. Taking the right decision over personal biases calls for a framework that is data driven and objective, with the goal of evaluating features on a simple scale of 1 to N.

Composing such a framework starts with identifying the bigger objective: a fundamental alignment on the WHY (the bigger vision) of the company, since everything else (the WHAT and the HOW) revolves around it. If alignment on the bigger picture is missing, the stakeholders will end up arguing over whether to take a bus or a boat without knowing where they are travelling to.


Let us start by listing bad practices before discussing the right ones, in keeping with our motto that deciding what not to do is as important as deciding what to do. So what must not be the central theme of feature prioritisation:

  1. Simple intuition: Features based on intuition alone, with no data-driven hypothesis, should be completely avoided. We should encourage a practice where everyone performs basic due diligence and backs their hypothesis for the feature with numbers and market insights. Not only does this fend off people who are in the habit of making non-serious suggestions, it also encourages people to learn more about analytics and market insights.
  2. Copy the competition: Copying features from the competition just for the sake of it isn't a healthy practice, especially in the startup ecosystem, where you are constrained on resources and have to be very mindful of where to invest your time and money. The genesis of an idea can come from the competition, but that must not be the sole reason for building it.
  3. Cool tech: Product decisions must keep the company's interests ahead of individual aspirations and inclinations. Just because some cool tech can be built into the product doesn't justify doing it. The exception is building for future readiness, where being an early adopter can give you a certain edge. Where possible, such features should be designed as experiments, and their induction into the product should be subject to results and potential impact.

Now that we’ve spoken enough about what not to do, let’s discuss some good practices to include as part of feature evaluation.

  1. Coherence with the product: Coherent features appear as a very organic extension of the product. Suppose we are building a chat application with basic abilities like sending text and pictures. Introducing a new data format like GIFs, videos, or audio seems like a very organic extension, as it enhances my ability to chat in a variety of ways. Now consider adding a feature to set a video profile picture. The latter clearly isn’t as organic as the former.
  2. Audience for the feature (reach): What is the size of the audience that will actually use this feature? Is it buried five clicks deep in the product, reached by only 1% of your audience, or is it right there on the first screen, accessible to 100% of your audience?
  3. Effort/impact ratio: What is the effort involved in productising this feature, and what is the estimated impact on your KPIs? A feature that is heavy on effort and low on impact scores low; a feature that is low on effort and heavy on impact scores well.
  4. Measurability: Can the impact of this feature be quantified as an improvement in some metric? If we can’t measure the success of a feature in terms of a metric, it scores low on this criterion, as it is not a metric mover.
(Figure: Effort vs. Impact matrix)

If we filter ideas and product suggestions through these four filters, we’ll clearly be able to see beyond bias: what should take precedence, what should be put on hold, and what should be avoided altogether.
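To make the framework concrete, the four filters above can be turned into a simple weighted score per feature. This is a minimal sketch, not part of the original framework: the 1–5 rating scale, the equal default weights, and the example ratings for the chat-app features are all assumptions to be tuned by your own team.

```python
# Minimal sketch of scoring features on the four filters above.
# Assumptions: each criterion is rated 1-5 and weighted equally by
# default; adjust both to match your team's priorities.

CRITERIA = ("coherence", "reach", "impact_effort", "measurability")

def score_feature(ratings, weights=None):
    """Return the weighted sum of a feature's per-criterion ratings."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(ratings[c] * weights[c] for c in CRITERIA)

def rank_features(features):
    """Order feature names from highest to lowest total score."""
    return sorted(features, key=lambda name: score_feature(features[name]),
                  reverse=True)

# Hypothetical ratings for the chat-app examples discussed earlier.
features = {
    "gif_support":       {"coherence": 5, "reach": 4,
                          "impact_effort": 4, "measurability": 4},
    "video_profile_pic": {"coherence": 2, "reach": 3,
                          "impact_effort": 2, "measurability": 3},
}

ranking = rank_features(features)  # gif_support outranks video_profile_pic
```

The point of writing the ratings down is that the argument shifts from "whose idea is it" to "which rating is wrong", which is a far more productive discussion to have with a CEO or an investor.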


This post first appeared on The Impact Of Audience Response System On Your Events.
