Exploratory Micro-Entrepreneurship Thoughts Chapter 1

Written by eyaltoledano | Published 2017/01/09


Don’t miss the context behind why we’re building 12 MVPs in 12 months.

I don’t yet know the extent of the processes we’ll need to build to get really good at coming up with ideas, building MVPs, and testing them.

Specifically, a few core functions will be do-or-die, and we’ll need to build and iterate on a playbook for each of those capacities:

  1. Coming up with really good ideas two or three times a year (slowly enough that you can keep up, but not so slowly that you get an itchy trigger-finger). This also means sorting and prioritizing a backlog of ideas and doing some initial research on sizing and on what the launch and LTVs might look like.
  2. Building the MVPs rests in the very capable hands of Mubs and Seth — I’m not sure if there are plans to create a pan-project framework at some point, since some projects might not involve any code. Regardless, I’m confident Mubs and Seth will build MVPs that look amazing.
  3. Validating will comprise some of the most intense and difficult activities at Digital Founder … for the first year or two. The way to decrease this difficulty is to create a personal trampoline we can jump from whenever a new project is created: a DistroKit — a series of playbooks that get you a sample of quality traffic for each customer persona, so you can reasonably test how the target customer would react to a new product.

The goal is for those playbooks to provide safe passage for next year’s cohort, who will also attempt the traverse together (and possibly with our help).

Can you actually validate something in 15 days?

At a higher level, something needs to be said about the business model assumptions we’re looking to prove.

Specifically, big assumptions like “validating in 15 days” carry the most risk. In theory it sounds nice, but is it actually doable?

By doable, I don’t mean tactically. I mean strategically: can we achieve some level of statistical significance on our MVP efforts with so little time? How will I know that the projects that take off now won’t fizzle out later?

I’ve seen the data with my own eyes: products that take off and continue to take off tend to do so early, dramatically, and in spite of seemingly obvious issues.

There’s a tipping point where the problem you solve is so acute that the solution will be accepted no matter how ugly or stinky it might be. But that doesn’t mean that something that takes off will continue to take off, or that we should make it a standard to ship ugly and stinky stuff.

That’s one major key.

Define “validation” so that projects can scope themselves

The primary goal of the initial year or two is to create immediate passive revenue for the makers. This cancels out our burn and lets us continue to explore freely. That’s when we can take on bigger challenges and try to get really good at validating meaningfully, very quickly.

Doing this doesn’t require millions — our initial cohort of Digital Founders would be pretty happy splitting $50,000/mo in recurring profit. This is a pretty specific, measurable, attainable and realistic goal we could achieve by pooling our talents and diversifying our sources — cue the 12 MVPs in 12 months.

We know the Power Law is going to drive the majority of our revenue. So we need to ask ourselves, what does a majority of revenue even mean?

Between Power Law and 80/20 — somewhere in the middle

We can probably state that some extent of the 80/20 rule will apply here — though the Power Law is far more unforgiving and looks more like 99/1. Considering talent and favorable market conditions, we can say we’ll probably end up sitting somewhere in the middle.

So if we’ve achieved our break-even point at $50,000 — and that performance is enabled primarily by one asset that in turn delivers 80% of the portfolio performance — that will mean that one of our 12 MVPs, the “unicorn of the cohort”, will deliver a full $40,000/mo in revenue. Conversely, the entirety of the other 11 products will produce, altogether, $10,000/mo for the portfolio.

We can then say that the 80/20 rule applies once again for the 11 “trailing products”, where 1 of the 11 will deliver 80% of the group’s revenue — roughly $8,000/mo. Suddenly, we’re looking at two products out of 12 which deliver $48K out of $50K in revenue.
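Here is that back-of-the-envelope arithmetic spelled out as a minimal sketch, assuming the $50K/mo target above and a strict 80/20 split applied twice (the numbers are illustrative assumptions, not forecasts):

```python
# Back-of-the-envelope sketch: the 80/20 rule applied twice to the
# $50K/mo break-even target. Figures are illustrative assumptions.
portfolio_goal = 50_000                    # monthly recurring profit target

unicorn = 0.8 * portfolio_goal             # top product of the 12 -> $40,000/mo
trailing_group = portfolio_goal - unicorn  # the other 11 combined -> $10,000/mo
runner_up = 0.8 * trailing_group           # best of the trailing 11 -> $8,000/mo

top_two = unicorn + runner_up              # $48,000/mo from just 2 of the 12 products
print(unicorn, runner_up, top_two)         # 40000.0 8000.0 48000.0
```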

For us, in terms of options, that means the other 10 products are either canceled early or left to grow slowly and passively due to some natural ceiling. The point is — we know 2 products are going to deliver the majority of the value, and we need to be really quick at figuring out whether the product we’re working on in any given month is that one product — and I’m giving us four weeks to figure that out before it needs to be paused, put on autopilot or killed.

Thus, we won’t actually run into management problems running the MVPs. Either the product will be a winner worth doubling down on immediately, or it won’t be and will be paused, placed on autopilot or killed. What is much harder is actually having one winner out of 12 attempts. That’s the ambitious part.

Why not keep the underperformers? Some are worth keeping to build portfolio revenue, but others are discarded because there’s just no point — we’ll know very early whether a project is going to be a Tier 1, Tier 2 or Tier 3 project in the portfolio. If it doesn’t look like it will be any of those, working on that product makes very little sense given the roadmap we have ahead of us and given the Power Law.

The time it takes to validate stuff should actually be small, by design

In CRO (conversion rate optimization), there’s an important concept related to the composition of split tests. When you choose a KPI to improve, you generally have an idea of how much you need it to improve by in order to hit your goals.

Generally, this approach works out well at the early startup stage where the gains need to be astronomical — but it doesn’t work at all if you’re trying to achieve some incremental gain.

I know that sounds counter-intuitive, but hash through the reasoning and ask yourself: “why in the world would I try to test things that have a tiny chance of working when I know that changing the button’s colour will get us a bit more juice?”

The answer will be: statistical significance.

When you run a test with the goal of achieving a very small improvement, the test needs to run for a very long time for you to actually observe that small change and confirm it with reasonable (95%) statistical confidence.

Instead, it makes more sense to literally shoot for the moon, because when you create a test where the expected change is gigantic, it will be obvious very quickly whether you’re getting your desired change or not.

It will take dramatically less time to test huge ideas and be accurate in your assessment of whether they’ll be big — and you won’t waste your time (a) settling for tiny improvements or (b) twiddling your thumbs in the process.
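To make that intuition concrete, here is a minimal sketch using a standard two-proportion sample-size approximation. The 3% baseline conversion rate and the two lifts are made-up numbers, purely for illustration:

```python
# Rough sketch of why tiny lifts take forever to validate: the sample size
# needed for a two-proportion test explodes as the expected lift shrinks.
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a relative lift
    over a baseline conversion rate at the given significance and power."""
    p_var = p_base * (1 + lift)
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for a 95% two-sided test
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p_base + p_var) / 2
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p_base * (1 - p_base) + p_var * (1 - p_var)) ** 0.5) ** 2
    return num / (p_var - p_base) ** 2

# A 5% relative lift on a 3% baseline vs. a 100% relative lift:
print(round(sample_size_per_variant(0.03, 0.05)))  # on the order of ~200,000 visitors per variant
print(round(sample_size_per_variant(0.03, 1.00)))  # roughly ~750 visitors per variant
```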

So for us, it’s important for the ideas to be bold and dramatic in a way that answers a clear problem that we either feel ourselves or can observe and communicate about. It’s fine to work on small apps if we need to fill the #2 or #3 spot with something quick that can reasonably and reliably deliver monthly value. But generally, we should aim for that $40K goal with each product attempt, and only then consider keeping a product that falls short of it.

Operations

The next question has to do with operations and actually running these 12 MVPs as mentioned above. How are we going to fit running an increasingly large portfolio alongside the R&D activities?

In the losing theory, we could spend our entire lives trying to build micro-products and never get even a single winner. For a team of seasoned makers with great cross-talent and personality synergy, that seems really unlikely.

In the winning theory, we’ll need to spend between 1 and 12 months of “soul-searching” before one of the projects clearly indicates it is capable of delivering our yearly MRR goal (that it is the year’s unicorn).

Sure, it’s possible we’re terrible at what we do and all 12 products will tank. But we’re actually pretty good at what we do. That is, after all, why we’re undertaking this perilous challenge in the first place — we believe we can create a winner within 12 attempts.

So we need to solve the potential issues that will arise if, say, we’re at month 10 and we still haven’t hit a product with unicorn potential — where, in Year 1, unicorn status for us is $40K/mo — and there’s a mountain of customer support tickets starting to come out of the other businesses in the portfolio, which are not yet delivering that value.

We’re all still freelancers, and until the month we observe very quick indicators that a $40K/mo product is about to become a reality, many of us depend on converting time to money to keep doing this. The ol’ chicken-and-egg problem.

So how on Earth should we deal with that without cannibalizing R&D time?

Pause, Autopilot or Kill

This is why each idea has to be really big and needs to be tested really fast before you either pause it, place it on autopilot, or kill it.

It also goes to show how focused the test needs to be in order to actually deliver traffic in that time frame, and the number of conversions needed to provide the significance required to pause, autopilot or kill — each monthly attempt just tries to validate; it isn’t designed to hit $40K in one month.

Pausing a product means we just make it invisible. If we’re pausing a product, it means it was never able to score any customers, or the technical difficulty suddenly became a deal-breaker, or some other blocker means we can’t do this — yet.

Placing a product on autopilot means we’ll do everything that needs to be done to maximize the passive nature of managing the product: installing self-serve Q&A, chat systems, marketing automation and more, to make sure the product can keep slowly growing on its own while we focus elsewhere.

Killing a product really is the last but quickest choice to make for a product that’s just not worth spending time on. Products that fail to create revenue, generate real value or actually solve a problem — we kill. Otherwise, it’s worth open-sourcing, making freely available, or simply pausing.
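For illustration only, the monthly triage might boil down to something like this sketch (the decision inputs and the order of the checks are hypothetical, not settled policy):

```python
# Hypothetical sketch of the monthly double-down/pause/autopilot/kill triage.
# The inputs and thresholds are illustrative, not settled policy.
def triage(unicorn_signal: bool, monthly_revenue: float,
           solves_real_problem: bool, blocked: bool) -> str:
    """Decide what to do with an MVP after its ~4-week validation window."""
    if unicorn_signal:
        return "double down"   # the winner: all focus goes here
    if not solves_real_problem and monthly_revenue == 0:
        return "kill"          # no value, no revenue: not worth more time
    if blocked or monthly_revenue == 0:
        return "pause"         # make it invisible until the blocker clears
    return "autopilot"         # automate support/marketing, let it grow slowly
```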

The Bullseye Framework

I like to think of Gabriel Weinberg’s approach to traction, where he suggests that startups tend to die because they run out of time trying to discover the one channel that will be responsible for the majority of their growth. I agree with the approach and tend to use it myself when trying to achieve product/market fit, whether with clients or on my own.

Does it sound familiar?

It’s a pretty close representation of the model we’ll use at Digital Founder, on a product scale rather than a channel scale. I suspect each product will have its own Bullseye framework to fill in — but the entirety of the portfolio can seek to fill something pretty similar to the Bullseye Tracking Sheet.

The basic story behind the Bullseye framework is that at any given moment, there are maybe 20 channels your startup can use to test different markets and different ways of reaching them. Of course, as a marketing leader, your goal is to maximize the number of users/customers you reach at the lowest possible cost per acquisition.

We’ll take a similar approach — once again lightly modifying a known recipe:

Instead of listing long-shot, promising and inner-circle channels, we’ll list long-shot startup ideas, promising ideas, and ideas we should probably focus on right now. We have an idea of what each product we’ll build over the next 12 months is going to be, but it isn’t clear which we should build first, and we should rank and optimize our choice by the same standards the Bullseye Framework aims for.

That is, for each startup idea we have, we’ll need to rank it by:

  • status
  • potential revenue
  • whether we think it’s a tier 1, tier 2 or tier 3 project
  • number of customers we believe we can realistically acquire this year
  • if paid media is involved, the approximate cost of each user
  • the expected effort required to test the idea

The status lets us indicate whether this is just an idea (1), whether we’re testing it (2), whether it’s been tested and paused (3), whether we’re actively focusing on building it more (4) or if it has been killed (5).

The potential revenue relates to our yearly goal. How much revenue (not profit) do we think this product realistically stands to make with moderate success?

The answer tells us whether it’s a tier 1 ($40K), tier 2 ($10K) or tier 3 ($2.5K) project. The idea tier should follow the potential revenue and the expected effort for testing it.
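As a sketch of what one row in that ranking sheet could look like (the field names, the example idea and its numbers are all hypothetical):

```python
# Hypothetical sketch of one row in the idea-ranking sheet described above.
# Field names and example values are illustrative, not real project data.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdeaRow:
    name: str
    status: int                 # 1 idea, 2 testing, 3 tested & paused, 4 building, 5 killed
    potential_revenue: int      # realistic $/mo with moderate success
    tier: int                   # 1 (~$40K/mo), 2 (~$10K/mo), 3 (~$2.5K/mo)
    customers_this_year: int    # how many customers we believe we can realistically acquire
    est_cpa: Optional[float]    # approximate cost per acquired user, if paid media is involved
    test_effort_days: int       # expected effort to test the idea

example = IdeaRow(name="hypothetical-newsletter-tool", status=1,
                  potential_revenue=10_000, tier=2, customers_this_year=400,
                  est_cpa=12.0, test_effort_days=10)
```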

For some ideas, like ecommerce, there are ways to literally turn cents into dollars with good content. If we have previous experience in some area and have an idea there, it’d be smart to keep track of the approximate cost per acquisition for each user.

Finally, we’ll need to stay in tune with reality when it comes to the complexity of the MVP. Tier 1 ideas might be worth building, but they must respect the two-ish week deadline for v1 delivery or the whole roadmap is put in peril.

Thus, large ideas can still be tested, but they are tested in different chunks and possibly released in sequence. That actually has some marketing benefit to it — story for another day though.

We should probably plan to do this exercise once per month so we can make sure to highlight any changes and any newly opened possibilities.

In the near future, it will be important to start thinking about the areas of focus where we should operate given our collective experience. But before that, I should probably introduce my co-conspirators. And I’ll do that next time.

Did you enjoy this text? Make sure to ❤️ this post so others can benefit from it.

