<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[User Oriented Development Process]]></title><description><![CDATA[Principles, examples, HOW-TOs: it is all here]]></description><link>https://uodp.club/</link><image><url>http://uodp.club/favicon.png</url><title>User Oriented Development Process</title><link>https://uodp.club/</link></image><generator>Ghost 3.9</generator><lastBuildDate>Tue, 14 Apr 2026 23:37:12 GMT</lastBuildDate><atom:link href="https://uodp.club/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Balancing Customer Problems and KPIs in Maturing Products]]></title><description><![CDATA[<p>I often hear that young leaders are confused about what comes first: problems/customers or KPIs. And how should they report progress? This is especially problematic for products that are not yet mature but already have a steady flow of customers. Usually, such teams are wholly swamped with customer requests/</p>]]></description><link>https://uodp.club/balancing-customer-problems-and-kpis-in-maturing-products/</link><guid isPermaLink="false">65344be1f4a27600011e320b</guid><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Sat, 21 Oct 2023 22:11:01 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2023/10/Screenshot-2023-10-21-at-3.10.29-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2023/10/Screenshot-2023-10-21-at-3.10.29-PM.png" alt="Balancing Customer Problems and KPIs in Maturing Products"><p>I often hear that young leaders are confused about what comes first: problems/customers or KPIs. And how should they report progress?
This is especially problematic for products that are not yet mature but already have a steady flow of customers. Usually, such teams are wholly swamped with customer requests/problems, and quite often, due to the fires/pressure they are under, they struggle to define and maintain a clear direction to follow.</p><p>If this is a problem you have also encountered, this is the article for you (also, it is short :) ).</p><p>Let's start with the UODP way. It is always in the following order:</p><ul><li>find customers</li><li>find their problems</li><li>find KPIs that you can use to identify how successful you are in solving these problems</li></ul><p>In the case that we are speaking about, if you already have a product that customers are using, you have already more or less successfully solved the customer's problem.</p><p>So you know the customers; they are willing to sit with you in the room and give you a clear list of sub-problems they need to solve with your product to use it more. In this case, there are almost always the following KPIs that I suggest:</p><ul><li>Usage:</li><li>If any proxy for usage can be used (revenue/MAU/DAU/rps/etc.) - use this as the primary KPI.</li><li>If not, the second best is the number of customers onboarded (anecdote), but ideally, I suggest investing in telemetry.</li><li>Stability - how stable is your product? Growing your business is not enough. It is equally essential to keep existing users happy (<a href="http://uodp.club/uodp-and-reliability/">UODP and reliability</a>).</li><li>Support/Oncall - how many resources are you spending on support? Scaling a product will always increase the resources required for support. So you have to keep an eye on the support load to make sure that if it starts getting out of hand, you will react to it.</li></ul><p>Many articles have been written about how to set each of the KPIs specified.
Here, I want to give a very generic canvas.</p><p>So, to recap, you have to know:</p><ul><li>The list of customers you are onboarding</li><li>How you will measure the success of the onboarding (usage/revenue/qps/etc.) - at least testimonials (but this is the worst possible case)</li><li>How you will measure the quality of your service</li><li>How you will make sure oncall/support resources stay within acceptable limits</li></ul>]]></content:encoded></item><item><title><![CDATA[User-Oriented Development Process
Problem Statement]]></title><description><![CDATA[<p>Many companies in this world never connect their success with the success of their customers. Some of them are doing KPIs/OKRs, doing everything right, and yet constantly delivering mediocre things from the perspective of the customers' adoption. Usually, this is because companies build generalized products (and as you know:</p>]]></description><link>https://uodp.club/user-oriented-development-processproblem-statement/</link><guid isPermaLink="false">60b67eccfe2c8b0001f11e9d</guid><category><![CDATA[uodp]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Mon, 28 Aug 2023 17:47:11 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2023/08/maxresdefault-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2023/08/maxresdefault-1.jpg" alt="User-Oriented Development Process
Problem Statement"><p>Many companies in this world never connect their success with the success of their customers. Some of them are doing KPIs/OKRs, doing everything right, and yet constantly delivering mediocre things from the perspective of the customers' adoption. Usually, this is because companies build generalized products (and as you know: "<a href="http://uodp.club/building-generalized-solutions-is-killing-your-product/">Building Generalized Solutions is Killing Your Product</a>.")</p><h1 id="solution">Solution</h1><p>User-Oriented Development Process (UODP) aims to solve precisely this problem; it is all about connecting customer success with the company's success. On a high level, this is done via a straightforward definition of done.</p><p>In this article, done means that the team has the following:</p><ul><li>A well-documented problem that we were aiming to solve at the beginning</li><li>A customer testimony that includes confirmation that the customer is using the product/solution/project and it solves the problem described above.</li><li>KPIs that confirm the customers' testimony</li></ul><p>The process of getting to done is called "onboarding a customer." As you can see, you can onboard the same customer multiple times by solving different problems of theirs.</p><h1 id="implementation">Implementation</h1><p>To start the implementation of UODP, the prerequisite is to have a list of customers that have to be onboarded; they should be sorted in priority order.</p><p>From this, we will identify the highest-priority customers on this list that we can sit and partner with. UODP does not work if a customer does not want to be with us in the same room. A customer has to be a partner; we should be launching with customers, not at them.</p><p>When we have a pool of customers, we can start building a pipeline of ICs in charge of onboarding. While this IC can be anyone (EM/Eng IC/PMT/TPM), it usually works best if the onboarding POC is an engineering IC.
This is important since the "last mile" of onboarding often requires quite deep engineering knowledge of the stack to proactively identify which parts can be shortcutted and which need last-minute fixes or features.</p><p>The person in charge of onboarding a customer should ideally:</p><ul><li>Be an IC or eng leader that can deliver features/fixes or influence the org to do so</li><li>Act 10% of the time as an outbound PM.</li><li>Act 10% of the time as a TPM.</li></ul><h2 id="onboarding-process">Onboarding Process</h2><p>Create an onboarding tracker. It can just be a Google Sheets document with the following columns:</p><ul><li>Customer name</li><li>Customer team (quite often, you can be onboarding different teams within the same customer)</li><li>Customer-side contacts (should be concrete names)</li><li>Your-side contacts (specific DRI)</li><li>Stage of onboarding</li><li>Link to a ticket/doc/etc.</li></ul><p>With a pool of customers and DRIs, we can start the onboarding process, which includes the following 3 stages:</p><ul><li>STAGE 0: Pre-engagement, collect all the data:</li><li>	Add the customer name and yourself as the DRI to the tracker.</li><li>	Create a ticket and assign it to yourself (add it to the tracker).</li><li>	Find all the stakeholders in the company who are already working with the customer.</li><li>	Create a centralized document that describes all the problems we know about from the customer (and existing workstreams).</li><li>	Create and execute group/company-wide reviews to present everything you know about the customer, where everyone in the org should be able to learn about the customer and ask questions.</li><li>STAGE 1: initial engagement:</li><li>	Create a v-team for onboarding the customer that should include a direct way of communication with the customer (Slack/Discord/etc.)</li><li>	Create an initial sync with the customer and present the document to:</li><li>		Confirm that we have everything (no other joint initiatives we have been
missing).</li><li>	Learn about problems that customers have.</li><li>	Build and own the roadmap (should be sub-tickets to the main ticket).</li><li>STAGE 2: active engagement</li></ul><p>Many customers will have the same problems as others, so, at any point, each layer (customer =&gt; problem =&gt; eng initiative) has many-to-many relationships. That is why building a hierarchy of bugs/tasks/features is essential to represent it. Suppose one person is onboarding customer X and another is onboarding customer Y. In that case, they might be adding comments to the same problem that both customers (X and Y) need the company to solve.</p><p>It is suggested to start the UODP process before the official planning season so your customer DRIs can reach STAGE 2 right in time for the planning.</p><h2 id="incentives-alignment">Incentives Alignment</h2><p>An essential part of the UODP process is adjusting incentives during your company's performance review cycle. UODP will only work if you connect the customers' success to the DRI who onboarded them. A person that does onboarding, by nature, will be asked to prioritize the customer's needs over everything else, and should be recognized for that. For example, if a customer's ask can be solved by updating docs, the DRI should be updating docs vs. building a new feature or product. This often conflicts with what is expected to be evaluated during the performance review process of engineers (this is the case in almost any company). If not addressed, this will incentivize people to go back to the usual way of execution: prioritizing work that will get a person to the next level vs. work that will lead to customers' success.</p><h1 id="main-biases-against-uodp-we-heard-them-all-">Main Biases Against UODP (We Heard Them All)</h1><h3 id="we-are-over-optimizing-for-one-customer-">We are over-optimizing for one customer.</h3><p>The most frequent one: that by using UODP, we will over-optimize around one specific customer.
A long answer to this is "<a href="http://uodp.club/building-generalized-solutions-is-killing-your-product/">Building Generalized Solutions is Killing Your Product</a>."</p><p>A shorter answer, based on very subjective experience: it is much more likely that you can build a product around one customer/use case (so, at minimum, have one happy customer) and then find a way to scale it, vs. trying to build a generalized product and onboard at least one satisfied customer on day one.</p><h3 id="will-customers-always-know-what-to-build">Will customers always know what to build?</h3><p>"If I had asked people what they wanted, they would have said faster horses." (attributed to Ford)</p><p>While this is true, customers often do not know what they need/want, so we should not confuse product vs. problem. UODP is about solving problems that customers are having. While customers might not know which products they will need ("faster horses"), customers always know the main issues they are facing in their lives (time to get from A to B takes too long). So, while UODP makes sure that we are focusing on the real problem that real customers are facing, UODP does not mean that we will build a product precisely as customers describe it. So, let's distinguish listening to customers about their problems from listening to them about how they want them solved.</p><h3 id="is-this-a-job-for-the-pm">Is this a job for the PM?</h3><p>To some extent, this is a job for a leader that has deep technical knowledge about the product; it can be an EM/PM/TPM/IC, but it does not matter, since none of these roles can canonically represent everything that will be required from the DRI that is onboarding customers. That is why it is essential to find the right leaders when assigning customers to them.</p><p>Usually, the best PM can describe products that will cover 90% of requests from any of the main customers.
However, the remaining 10% that are unique to each customer will ultimately determine if we are successful in the onboarding process.</p><p>But one of the most important reasons is that if this job is done by the PM only, it cannot scale, since a PM, at best, can focus on only so many customers at a time.</p><p>However, there are other parts of UODP where the PM can and should play critical roles, specifically:</p><ul><li>Facilitate customer relationships and maintain a list of customers who are ready to be onboarded</li><li>Make sure that we are onboarding the right customers, who are representative of the bigger group of customers</li><li>Represent customers who have NOT agreed to be with us in one room</li></ul><h1 id="appendix-learn-more">Appendix: Learn More</h1><p>There are many places where you can learn more (external):</p><ul><li>Talk "<a href="https://www.youtube.com/watch?v=ex4Fx91JTrA&amp;t=2164s">User-oriented development process</a>"</li><li>Article <a href="http://uodp.club/how-to-identify-a-successful-product/">How to Identify a Successful Product</a></li><li>Article <a href="http://uodp.club/do-not-work-on-the-project-that-solves-more-than-one-problem/">Do Not Work on the Project that Solves More Than One Problem</a></li><li>Article <a href="http://uodp.club/definition-of-done-for-eng-leader/">Definition Of Done (For Eng Leader)</a></li><li>Article <a href="http://uodp.club/how-to-identify-the-right-customer/">How to Identify The Right Customer</a></li><li>Article <a href="http://uodp.club/uodp-and-reliability/">UODP and Reliability</a></li><li>Article <a href="http://uodp.club/first-steps-towards-uodp/">First Steps Towards UODP</a></li><li>Article <a href="http://uodp.club/building-generalized-solutions-is-killing-your-product/">Building Generalized Solutions is Killing Your Product</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Escape the Firefighting Trap: Strategies for Teams to Focus on Important Work]]></title><description><![CDATA[<p>Do
you find yourself constantly fighting fires and struggling to keep up with urgent customer requests? Have you and your team been so busy with these urgent tasks that you haven't been able to focus on developing anything new in months? It's a frustrating situation that can leave you wondering</p>]]></description><link>https://uodp.club/escape-the-firefighting-trap-strategies-for-teams-to-focus-on-important-work/</link><guid isPermaLink="false">63fbfc483bd05600010fdd64</guid><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Mon, 27 Feb 2023 00:47:27 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2023/02/29xp-meme-videoSixteenByNineJumbo1600-v6-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2023/02/29xp-meme-videoSixteenByNineJumbo1600-v6-1.jpg" alt="Escape the Firefighting Trap: Strategies for Teams to Focus on Important Work"><p>Do you find yourself constantly fighting fires and struggling to keep up with urgent customer requests? Have you and your team been so busy with these urgent tasks that you haven't been able to focus on developing anything new in months? It's a frustrating situation that can leave you wondering what went wrong and if there's a way to fix it.</p><p>The good news is that there is a solution, and this article will cover it.
However, before we dive into the solution, there are a few prerequisites to consider:</p><ul><li>if you're serious about fixing this issue,</li><li>if you're a team leader or manager, or at least willing to propose solutions to your team leader or manager, and</li><li>if your team is currently running or willing to adopt some sort of sprint process, then keep reading.</li></ul><p>But be warned: there's no silver bullet that will fix everything overnight, and you'll have to put in the work to execute the steps outlined in this article.</p><hr><p>I present to you the ultimate solution: "the UODP-based three-stage technique of working with toxic requests to the team." Not satisfied? Good. Let me dive deeper with you so you can be appropriately converted into a believer.</p><p>Let us start with one simple and quick definition: a toxic request in this article means any request to the team that comes from outside the team and satisfies one of the following criteria:</p><ul><li>urgent (from the perspective of the requesting side)</li><li>directed to one of the team members and not to a team manager</li><li>broadcast via chat or mail to the team</li><li>urgent support request that does not involve any feature/bug-fixing work</li><li>urgent bug/feature that takes priority over everything else the team is doing right now</li></ul><p>Now we are ready to outline the solution. On the high level, it has three steps:</p><ul><li>Identify. Identify which exact types of toxic requests the team is dealing with.</li><li>Isolate. Isolate part of the team to shield the rest from the toxic requests, so at least some part of the team can do meaningful development.</li><li>Reduce. Reduce the number of toxic requests (make it your business KPI).</li></ul><h1 id="identify">Identify</h1><p>You can not do shit unless you know what you are dealing with. And there are several things that you have to do to even start identifying it (as a manager).
You have to have a rotation on your team with a primary, and the primary should own being the first line of defense for everything customers ask of the team.</p><p>The expectation for the primary should be to act as a shield. And by "acting as a shield," I mean something like this (you, as a leader, should create a set of expectations tailored to your team and your situation):</p><ul><li>be answering questions in chat</li><li>be proactively triaging bugs/requests</li><li>be answering questions in mail</li><li>be reacting to any team-level alerts</li></ul><p>Again, this is just an example. It is your job, as a leader, to create expectations tailored to the team.</p><p>In addition to this generic list of expectations for the primary, there are some reporting expectations on top:</p><ul><li>Have a ticket for the oncall, populated daily with rough items the oncall spent time on.</li><li>If any request takes more than one hour, such a task should be filed as a ticket and added to the current sprint.</li><li>It is ok to ask other team members to help you. However, in such cases, the primary should:</li><li>Explicitly create a ticket</li><li>Assign it to the team member who is asked to do the work</li><li>Clearly mark such a ticket as additional work on the sprint (so it can be reviewed at the sprint end).</li></ul><p>Since it is hard to say how much work will be required from the primary, it is better to start with a one-week rotation and the expectation that 100% of the primary's time will be allocated to this work. Let me repeat this: at the beginning, no work, other than being the primary oncall, should be assigned to the primary during the oncall week.</p><p>But setting expectations is easy. Execution and holding the team accountable is the hard part. This is where many leaders fall short. Any reasonable team will likely agree to all these things.
However, after the first several sprints, you will see that the team still is not utilizing this newly created shield. Customers will keep coming directly to the team members instead of going to the primary, and team members will not be redirecting such customers to the primary and instead will be trying to help directly. Everything will remain as is, at least in the first several sprints.</p><p>Fear not. This is just the beginning and absolutely expected. After the new expectations are set, on each sprint review you should start doing per-person reviews, and for any task that has not been finished due to new urgent work being pushed onto the plate, ask the following questions:</p><ul><li>Why was it impossible to redirect it to the primary?</li><li>Why was this request not filed as a task?</li></ul><p>In 2-3 sprints, you will see that folks will start filing everything as tasks, and you will start getting meaningful information from the primary about the workload and its nature, and finally be getting closer to an assessment of the baseline of the toxic requests. This is the starting point. Knowing the type of toxic requests you are dealing with, you can move to the second stage.</p><h1 id="isolate">Isolate</h1><p>Knowing the amount of toxic tasks, you should understand how many full-time team members you need weekly to work on them. Now you can make an educated judgment call and have as big a rotation as you like (I will tell you what to do if you think you need all the folks on the rotation).</p><p>Now, with that in place, here is a unified algorithm (or checklist, if you are more of a pilot person) on how to use this newly created shield for anyone on the team.
For any incoming toxic request, ask the customer:</p><ul><li>Is it urgent?</li><li>	yes =&gt; generate a ticket and redirect to the primary</li><li>	no:</li><li>		Can it wait till the next sprint?</li><li>			yes =&gt; create a ticket, and add it to the list of things to triage during the next sprint planning</li><li>			no =&gt; create a ticket and redirect to the primary</li></ul><p>And for the primary, for any request/ticket:</p><ul><li>Is it urgent?</li><li>	yes =&gt; is it important in your opinion?</li><li>		yes =&gt; do you have cycles to work on it?</li><li>				yes =&gt; work on it</li><li>				no =&gt; find whom to delegate it to (secondary/tertiary/etc.)</li><li>		no =&gt; escalate to the manager to re-confirm and to communicate back to the customer (after all, it is not an engineer's job to be upsetting customers).</li><li>	no =&gt; add it to the backlog to plan for the next sprint</li></ul><p>At this point, your team should have a tiny number of cases where the shield is not used, and for each such case where the shield did not work, there should be a ticket on the sprint board that you, with your team, should discuss at the retrospective to see what is going on and why the shield did not work (and how to fix it).</p><p>Next, let's discuss the three main symptoms of the shield not working and what to do:</p><ul><li>people are not utilizing the primary</li><li>there are, indeed, a lot of requests, so the primary has been overrun by the customers</li><li>the primary can not help with a specific topic and has to engage a subject matter expert</li></ul><h2 id="primary-is-underutilized">Primary is Underutilized</h2><p>This is indeed quite common at the beginning, usually when folks are not used to working in teams that have primaries.
This is indeed mentally hard: a customer asks you because you are the subject matter expert, and you still have to redirect the request to the primary, for whom it will take twice as long to do the same work.</p><p>We are all human. After all, jumping on the more understandable task is in our nature. If a customer is asking me specifically, this probably means that:</p><ul><li>I am probably a subject matter expert in this topic (and for me, this means that the task is clear and understandable)</li><li>The customer's task is very likely urgent</li></ul><p>Chances are, my current task is less understandable (and maybe even slightly less urgent). Given all this, it is easy to understand why we do what we do and jump on any opportunity to help the customer (regardless of whether the primary can do it on our behalf). I have seen teams where, for each new high-priority bug, more than HALF of the team would jump right away (they even had a chat notification, so anyone could be distracted right when the new high-priority bug came in). Needless to say, such teams rarely move any long-term initiatives forward, and as a result, customer experience degrades more and more each quarter.</p><p>Hopefully, by now, you have the answer to this problem (if you have been reading carefully enough). You should set the expectation that no additional work should be done without a ticket on the board. And so, such cases (when new tasks were added to the sprint and were NOT escalated to the primary) will become visible to you, and you can keep providing feedback to the person who keeps doing this. And if the pattern persists, you can always set a formal expectation to redirect such work to the primary in the future. Now it will be up to that person to deliver (or not deliver) on the job expectations.</p><h2 id="a-lot-of-requests">A lot of Requests</h2><p>The primary can only deal with so much. That is fine.
As I have mentioned, you can create as big a rotation as needed, and I have had teams with up to <strong><u>three</u></strong> people on the rotation full-time. After all, your goal at this stage is to be honest and identify how many resources you need to cover all toxic requests. We will talk about reducing them later.</p><p>There are still several things I want to mention. If you have more than one person on the rotation, the essential rule is: the primary owns the rotation. Think about the primary as a manager for everyone on the rotation for one week; the primary is solely responsible for the outcome of the rotation.</p><p>Now back to the question from above: what if you have SO many fires/tasks to do that you need your entire team to be oncall? If you need ALL of the team, chances are that you are NOT helping with every request anyway and dropping the ball somewhere. There is good news here: you very, very likely can allocate some resources (maybe an IC-week per sprint) to do some meaningful work, and no one will notice, since, again, chances are that you are not solving all the incoming requests anyway.</p><h2 id="primary-can-not-help">Primary Can Not Help</h2><p>I've been on teams where one person did NOT have enough knowledge to cover everything. In this case, it was not a problem of capacity but a problem of expertise.</p><p>I have seen two primary strategies to deal with it:</p><ul><li>via a fragmented rotation</li><li>via team education</li></ul><p>The first one is simple: if you are lucky enough, the knowledge in your team can be split into two groups, and having two folks (one from each group) on the rotation would guarantee that you will always have full coverage.
One should be primary and the other secondary.</p><p>While I have seen this working well in several teams, this requires a lot of luck:</p><ul><li>you do need to have knowledge clustered into two groups</li><li>the groups should be of more or less equal size</li><li>the number of people should be large enough for the rotation to be meaningful</li></ul><p>The second way is to introduce the following policy for any request that comes to the primary which the primary can not help with directly due to a knowledge gap:</p><ul><li>The primary still has to do the job by consulting with the knowledge expert. The knowledge expert is still going to have added work on the sprint board, but the work should be in the form of educating the primary on the required topic.</li><li>The primary, on the other hand, will own tasks to: learn, help the customer, and, most importantly, update the runbook/wiki/docs and potentially run team-level training to make sure that the next primary will be able to help with a similar request.</li></ul><p>It will take up to two quarters (usually), but ultimately this will elevate the entire team to the level where the team can help with almost any request related to the team's products.</p><p>At last, we come to the final stage of the system (remember the first two: identify/isolate):</p><h1 id="reduce">Reduce</h1><p>Now that we know the nature of the toxic requests, have part of the team isolated to just help with toxic requests, and have part of the team that can reliably do meaningful development, we are ready for the third and final stage: reduce the number of toxic requests.
Here is the business KPI you should use to hold the team accountable: reduce the team capacity required to run the rotation.</p><p>Such a reduction can be made by first asking different questions about the nature of the toxic requests:</p><ul><li>Are they bugs, and will they be reduced naturally after the team fixes most high-priority bugs?</li><li>Do we need to improve testing to reduce the bugs we introduce per release?</li><li>Do we need to improve our documentation?</li><li>Do we need a road show to explain how our products work?</li><li>Do we need to invest in UX?</li><li>Are we properly evaluating importance/urgency (maybe it is ok NOT to fulfill some of the requests at all and stay more focused)?</li></ul><p>Again, these are just examples. I am sure you will come up with your own set of questions by the time you reach this point.</p><p>Small side note. Quite often, people might ask you a question about the relevance of this KPI to the business. Usually, it goes something like this: "if I am going to work X months on improving UX, how can I use this on my promo/scoring case? This has nothing to do with the revenue/usage increase (or whatever other important metrics the company's VP/Director/CEO pays attention to)."</p><p>The answer is always simple: if one can deliver on the KPI to reduce toxic requests, one can reduce the size of the rotation from, for example, 2.5 full-time engineers to 1.5 engineers. This means that such a person is giving back 1 full-time IC to the team. So one can directly claim which business features were delivered per year by this team thanks to this work.</p><p>With that: may the UODP be with you!
And have fun prioritizing.</p><p>A VERY IMPORTANT LAST COMMENT: high-impact feature requests from the customer are not fires, not toxic requests, and if your team is confusing "work backward from the customer's needs" with "constant fire fighting," this confusion is an entirely different problem that is not in the scope of this article.</p><p>There are so many stones left unturned. I kept the article short and covered only the essence, but as a result, I cut a lot of the content, so please consider asking questions in the comments so I can make an educated call about the topics to write about next.</p>]]></content:encoded></item><item><title><![CDATA[Definition Of Done (For Eng Leader)]]></title><description><![CDATA[<p>Your effort can be considered done when a customer whose problem you were aiming to solve provides a testimony that they have solved their problem with the solution you have provided.</p><p>If you are onboarding a mass of customers, you have to have at the same time:</p><ul><li>Testimonies from the</li></ul>]]></description><link>https://uodp.club/definition-of-done-for-eng-leader/</link><guid isPermaLink="false">63925a5f17152b0001df5513</guid><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Thu, 08 Dec 2022 21:44:10 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2022/12/done-white-stamp-text-on-green-1594663264Nt6.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2022/12/done-white-stamp-text-on-green-1594663264Nt6.jpg" alt="Definition Of Done (For Eng Leader)"><p>Your effort can be considered done when a customer whose problem you were aiming to solve provides a testimony that they have solved their problem with the solution you have provided.</p><p>If you are onboarding a mass of customers, you have to have, at the same time:</p><ul><li>Testimonies from customers that would be considered an excellent proxy for the general
population of the customers that you are onboarding</li><li>KPI measurements (that were set upfront) that prove customers have been solving their problems with the solution you provided</li></ul><p>If you do not have KPIs set, <a href="http://uodp.club/okr-best-practices/">first do that</a>.</p><p>Keep in mind that this definition of done is problem specific. You can onboard the same customer again and again. In fact, the more successful your solutions are, the more likely such re-onboarding is to continue indefinitely.</p><p>Quite often, I use "onboarded customer" instead of "done" to emphasize that I am using the definition from this article. "Onboarded customer" conveys the intent more clearly than the word "done," which is highly overloaded.</p>]]></content:encoded></item><item><title><![CDATA[Embrace KPIs That You Do Not Control]]></title><description><![CDATA[<p>How often have you seen business KPIs your org/team does not directly influence pushed your way? Did you try to push back? Did you try to reduce the scope of the KPIs to what is only doable by your team alone? If you did, read this article to find</p>]]></description><link>https://uodp.club/embrace-kpis-that-you-do-not-control/</link><guid isPermaLink="false">638bfd470e5bbd0001608509</guid><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Sun, 04 Dec 2022 01:53:13 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2022/12/478552927.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2022/12/478552927.jpg" alt="Embrace KPIs That You Do Not Control"><p>How often have you seen business KPIs your org/team does not directly influence pushed your way? Did you try to push back? Did you try to reduce the scope of the KPIs to what is only doable by your team alone?
If you did, read this article to find out why you should not have.</p><p>To be more specific, let me describe a generalized scenario I witnessed many years ago.</p><h1 id="the-project">The Project</h1><p>Once upon a time, there was a product that included many moving parts and teams: UI, backend (with several teams), DB, DevOps, and even QA. The product was successful for years, but lately, several releases had been, to put it lightly, less stable than desirable. As a result, customers lost some trust in it. Our story begins when the org decided to turn that around and set a new yearly KPI to improve the trust score. The trust score is measured by surveying customers, and the KPI is straightforward: move the score from X to Y. While this is the right business KPI, many teams pushed back on having it in precisely this form on their charter. The UI team lead argued that since the UI team is only responsible for the UI, they would rather have something like: "reduce the number of negative comments related specifically to the UI."</p><p>In the end, all teams had their reduced-in-scope KPIs created. And everything seemed fine at the beginning. Almost a quarter later, something odd happened. While nearly all teams reported fantastic progress, the answers to the binary question about the stability of the product (do you think it is stable?) were mostly the same. So while each team delivered everything they had committed to, their leads faced the difficult task of telling their teams/orgs that, even though they had achieved the goals, the results still fell short (by a lot).</p><p>So, what exactly happened? The business KPI was formulated at the highest level and passed down to different orgs. Each team created corresponding KPIs limited in scope to only what they could achieve. As a result, no one in the company (among engineers who knew how everything worked) was assigned to the high-level KPI.
In such cases, if the company is lucky, everything gets resolved once everyone finishes their part; more often than not, though, the end results are sub-optimal.</p><p>Such a situation also means that you do not have an org structure that follows UODP principles, but this is a topic for a different article. After all, few have the luxury of starting yet another huge re-org, so one must find a more straightforward solution. Also, in many cases, big reorgs do not work anyway (<a href="https://www.youtube.com/watch?v=yDcaRklX7q4">a lovely song that tells the story of why</a>).</p><h1 id="so-what-to-do"><strong>So, What To Do?</strong></h1><p>A straightforward answer is that each team helping with the business KPI should have the high-level KPI on their charter. On top of that, they should have sub-KPIs that reflect more specifically what the team can deliver.</p><p>There is always a question that comes immediately: wait, why? Why should something that our team cannot impact alone be assigned to me (the potential owner of the KPI)? This is unfair!</p><p>Yes, this <strong>is</strong> unfair. However, it reflects the harsh truth of how the team will be perceived. Many of us find comfort in hiding from how our work will be perceived behind the curtain of measuring only what we can directly impact.</p><p>So back to our example. We started with the UI team and one of the backend teams. Both teams allocated an owner on their side for the high-level business KPI: "improve trust in our product from X to Y."</p><p>Magical things started to happen immediately. Now, with the business KPI directly on the team's charter, the owner began to ask the right questions. UI folks started asking: can our team do more and help the backend team, which is accountable for many more complaints than the UI team? And overall, everyone was asking: who is driving the initiative overall? Do we have a sync between all stakeholders around this? Where/how do we report progress?
The moment a person realizes that they are (partly) on the hook to deliver business goals, all these questions, uncomfortable for most managers, start to pop up.</p><p><em>The fascinating part is, if you practice this, from time to time you will find that there are no answers to these questions. No one was appointed as the owner of the high-level business KPI at all, and no one bothered to put all the stakeholders in one room. In such a case, your team has an opportunity to spearhead the effort org/company-wide.</em></p><p>But back to the example. The change immediately led to the creation of a v-team that cut horizontally across many different departments. After just the first two meetings, the v-team realized that the UI team, for example, had way more to offer than anyone thought at the beginning of the year. They found that the ideal solution might be a UI service that allows customers to self-diagnose and self-debug backend problems. This would enable customers to self-resolve 70% of the cases themselves. Before this, the backend team, working in silos (same as the UI team), never even learned what was possible to do in a quarter on the UI side. While the UI team, who would usually be the first line to triage customers' issues, did have some ideas on how they could potentially build a service to help the backend team, it was never a priority for them.
Ask yourself: can you come up with a situation where all of your KPIs are green, but the company business goal is still red? If the answer is yes, you might be trying to hide what is essential behind what is easily (directly) changeable by you. And remember, there is nothing that cannot be changed by you: there are things that you can easily change (things that you directly own) and things that are harder to change (owned by someone else), but there are never things that you cannot change.</p>]]></content:encoded></item><item><title><![CDATA[OKR Best Practices]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="https://uodp-club-ghost-content.storage.googleapis.com/2022/11/51784463541_ec32c057ed_b.jpg" class="kg-image"></figure><h2 id="part-i-objectives">Part I: Objectives</h2><p>This is a concise guide on how to start writing your OKRs.</p><p>Start with creating a tracker of:</p><ul><li>customers that you are aiming to onboard</li><li>their problems (problems that you are seeking to solve for the customers)</li></ul><p>The most crucial part is that the list of problems has</p>]]></description><link>https://uodp.club/okr-best-practices/</link><guid isPermaLink="false">637d6354645730000167be62</guid><category><![CDATA[uodp]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Wed, 23 Nov 2022 00:04:49 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2022/11/51784463541_ec32c057ed_b-1.jpg" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://uodp-club-ghost-content.storage.googleapis.com/2022/11/51784463541_ec32c057ed_b.jpg" class="kg-image" alt="OKR Best Practices"></figure><h2 id="part-i-objectives">Part I: Objectives</h2><img src="https://uodp-club-ghost-content.storage.googleapis.com/2022/11/51784463541_ec32c057ed_b-1.jpg" alt="OKR Best Practices"><p>This is a concise guide on how to start writing your OKRs.</p><p>Start with
creating a tracker of:</p><ul><li>customers that you are aiming to onboard</li><li>their problems (problems that you are seeking to solve for the customers)</li></ul><p>The most crucial part is that the list of problems has to be sorted. You should always be aiming at solving the most critical problem that your customer is having.</p><p>Remember that you are NOT "building a product" but "onboarding customers." Building the product is just a part of the onboarding process.</p><p>Solving customers' problems should be your objective, and therefore each objective should specify the problem you are aiming to solve and the list of customers with this problem. Be ready to answer why you think this is your customers' most crucial problem.</p><p>Remember, success seldom depends on picking the right engineering tool/project to build. It depends on choosing the right problem you are aiming to solve. So spend extra time verifying that you got the right problem.</p><p><em>Q: wait, but what if I do not have customers? What if my product is used by internal teams only?</em></p><p><em>A: there is always a customer; the fact that the customer is internal does not change anything. If unsure who your customer is, consider which team/org/person's feedback will be the most impactful when evaluating your effort's results. Very likely, that is your customer.</em></p><h2 id="part-ii-key-results">Part II: Key Results</h2><p>Next, to keep yourself honest, you should come up with KRs that are good proxy metrics indicating how good you are at solving the problem for your customers. KRs should be measurable metrics and not binary facts of delivering the products or features themselves.</p><p><em>Q: but what if I know the metric, but it is almost impossible to measure?</em></p><p><em>A: find the next closest proxy. What about surveying customers?
What about just picking one company and asking their opinion at the end?</em></p><p>With basic things out of the way, let us cover several examples. First, some awful objective examples:</p><ul><li>"Build a unified engine for scheduling long-running jobs" - what is the problem? Who is asking? Is this the most pressing problem my customers are having?</li><li>"Improve the user experience of our website" - while we can identify the customers, they have so many problems that this might mean almost anything.</li></ul><p>Better examples:</p><ul><li>"It takes our customer 2 ICs per year to run the manual process of scheduling long-running tasks by starting the VM manually and running scripts (customers: x/y/z)."</li><li>"We are spending X$$ on the CI system more than allocated in our budget."</li></ul><p>In many cases, the objective already includes good measurable values. A good way to verify that your objective is well written is this: you should be able to show the objective to a random person in your org, and they should be able to tell you whether the problem from this objective has been resolved or is still there. The opinions should be the same if your objective is well-written according to the UODP principles.</p><p>Now we can identify good proxy metrics that give us more visibility into whether we are solving the problem. Let's start with bad examples:</p><ul><li>Deliver (release) a new service that allows the customer to schedule long-running jobs</li><li>Refactor CI to use CPU-only instances more often</li></ul><p>These KRs are hard to measure. The rule of thumb: if you can easily find a way to resolve your KR without solving the problem set in the objective, it is probably a bad KR. For example, one can find a way to use CPU-only instances more often, but this could move the needle only 0.0001%. Is this good enough? Should we declare the KR green? KRs like this allow one to declare almost anything as green or red.
The same goes for the example with the new service: if the new service is used by only 5% of your customers, the initial problem is still there; 95% of your customers are still scheduling jobs manually.</p><p>Better KRs here could be:</p><ul><li>X jobs are scheduled on the new job scheduler (and not manually)</li><li>Customer Y no longer has to keep one IC constantly allocated to the manual job of scheduling jobs</li><li>Z pipelines have migrated to the new job scheduler</li><li>Cost per month to support our CI reduced below Y$$</li></ul><p>Having good OKRs gives several benefits:</p><ul><li>They give clarity on how success will be measured.</li><li>At the same time, they do not micromanage how the team should deliver (the team is free to pivot at any point to a different effort if they find a simpler way to provide impact).</li><li>They protect the team from unnecessary pivots.</li></ul>]]></content:encoded></item><item><title><![CDATA[Building Generalized Solutions is Killing Your Product]]></title><description><![CDATA[<p>You may have often heard this question: “If we listen to only this customer, how can we be sure that we are not overfitting?” This is the question that leads to the death of the project.
This is such an important topic that I decided to give it a dedicated</p>]]></description><link>https://uodp.club/building-generalized-solutions-is-killing-your-product/</link><guid isPermaLink="false">6310e5a1ab9f200001ac28c5</guid><category><![CDATA[uodp]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Thu, 01 Sep 2022 17:03:01 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2022/09/Screen-Shot-2022-08-31-at-12.47.29-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2022/09/Screen-Shot-2022-08-31-at-12.47.29-PM.png" alt="Building Generalized Solutions is Killing Your Product"><p>You may have often heard this question: “If we listen to only this customer, how can we be sure that we are not overfitting?” This is the question that leads to the death of the project. This is such an important topic that I decided to give it a dedicated article, so today, we are going to talk about the following:</p><ul><li>Why is directly building generalized products dangerous?</li><li>Why is overfitting for just one customer a much better way to end up with a generalized product?</li><li>Why do many senior leaders in the industry think that building generalized products (and not overfitting) is the right way?</li></ul><p>To illustrate the article’s main point, I will tell a story. The story is about two tech leads (names and circumstances are fictional):</p><ul><li>Let’s name the first tech lead Romeo. He is practicing “user oriented development process” <a href="http://uodp.club/main-pitfalls-of-uodp/" rel="noopener ugc nofollow">UODP</a> and knows the critical rule: Onboard one customer, build an overfitted solution, and find a way to generalize later. Romeo is prioritizing learning. 
The new information learned by releasing an early overfitted solution is always worth the resources spent later to rewrite that solution.</li><li>The second character — let’s name him James — is a classic product lead who thinks he needs to design the system correctly and build the generic solution from the beginning. James is prioritizing reducing the waste of engineering resources.</li></ul><p>Let’s see how the industry at large has been forcing people like Romeo to become more like James.</p><p>Let’s assume that James is tasked with building an internal service for his company. This service should connect those who would love to be mentored with those who would love to be mentors. James starts by collecting requirements for the future service, meeting customers, and planning how the website would look. He talks to as many customers as possible and asks questions like the following:</p><ul><li>What topics might you be interested in learning?</li><li>Are you interested in teaching, just learning, or both?</li><li>How many hours per week can you commit to this task?</li><li>How much time will you need to reach your goal?</li></ul><p>After some investigation, James comes up with some high-level user journeys and requirements:</p><ul><li>mentor/mentee account-hosting site</li><li>accounts that include these elements:</li><li>- fields the person wants to learn about</li><li>- subjects the person is interested in teaching</li><li>- time zone</li><li>- time preferences</li><li>matching tool</li><li>progress-tracking tool</li><li>search ability</li></ul><p>The first milestone (V0) is as simple as possible: Just have a portal for mentors and searching functionality.
This still requires the following things:</p><ul><li>authentication</li><li>personal profiles</li><li>searching</li></ul><p>James estimates three months for the full V0 rollout and six to nine months for the entire project.</p><p>At the same time, Romeo’s organization has a pressing problem of ever-growing requests to match mentors with mentees. To address the situation in the fastest way, Romeo creates an intake form to collect folks who want to be mentors and a Google spreadsheet for storing mentors and mentees. He shares it with everyone in his organization. A month later, this spreadsheet has become several organizations’ primary source for finding mentors, but it still lacks some things. The most pressing are these:</p><ul><li>pinging mentors periodically to see if they still want to mentor (and removing them from the list if the answer is no)</li><li>displaying mentor status (We already have enough mentors/openings for now.)</li><li>sending a notification when a mentor with the right skills is registered</li></ul><p>None of these requirements were identified by James, mainly because customers do not know what is critical until they start using things. To address them, Romeo introduces the first service and the first DB, created to accommodate these needs. At this point,</p><ul><li>Romeo creates a “horrible service” from an engineering perspective. For example, it does crazy things like joining a Google spreadsheet with a DB and requires complex on-call actions from the team. Romeo will definitely need to rewrite everything eventually. But he has customers and can start scaling the solution. Romeo has a service that delivers real value.</li><li>James does not have customers yet and is unaware that several critical requirements are not reflected in his service. He is on track to release V0 in several weeks.
He slowly starts realizing a painful truth: all his potential customers from Romeo’s organization already have a solution that looks better from a customer perspective (while definitely worse from an engineering perspective).</li></ul><p>A quarter into this project, James has zero chance of winning any customers. Not only are prospective customers already using the solution from Romeo, but V0 also lacks critical features (while providing many additional but not critical features). Leadership reasonably decides to kill James’s project and redirect headcount (HC) to Romeo’s team. Two to three quarters later, Romeo’s team implements the features that were in James’s original proposal. They implement a proper service, deprecate the Google spreadsheet, and build an adequate web UI — essentially everything that James’s team initially proposed. What is especially frustrating for James’s team members is that they were well positioned to implement similar things much more quickly.</p><p>Romeo’s way provides more value to customers faster. So why do people like James keep emerging in highly influential engineering leadership positions? There are several reasons, and the number one is “hindsight bias.” To illustrate how it plays a role here, let’s assume there was no James in our story, only Romeo.</p><p>After implementing his solution, for some time, Romeo has to support a complex service to correctly map records in the Google spreadsheet to the DB records. Since the tool (the spreadsheet) is highly inappropriate for the task, this service is slow and has many bugs. Later on, Romeo has to rewrite everything to remove the spreadsheet altogether. This required a costly and lengthy customer migration from one source of truth (the spreadsheet) to another (the DB with a web UI). The migration was not the smoothest, with constant customer complaints.
Still, since each version of the service solved the main customer problem at every point, the service always had customers, and no one turned their back and left.</p><p>In hindsight, since there was no James, it might look like directly building the “right service” would have been faster than what Romeo did. In the debrief, everyone would be pointing out that:</p><ul><li>At least two times, the team had to throw the solution out and rewrite it.</li><li>The migration was complex, and many users were unhappy during it.</li><li>The intermediary solution was difficult to use and hard to maintain (on-call was quite intensive).</li></ul><p>With such “lessons learned,” it might seem like Romeo did everything wrong and should not be allowed to do projects like this anymore. So, when the next Romeo (let’s call him Romeo the Second) emerges in the company and suggests solving the problem in the simplest possible way, the original Romeo might start advising him in this way: “You should not think short term; let’s properly design scalable solutions so we can adequately accommodate all the required customers and not just the one.
Let’s reduce waste.” This is how Romeo becomes just like James.</p><p>In the mind of the original Romeo, we went via a path that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/1400/1*7nUxSaxzjB8Bp-HjppPvLw.png" class="kg-image" alt="Building Generalized Solutions is Killing Your Product"></figure><p>So it makes total sense next time to go like this:</p><figure class="kg-card kg-image-card"><img src="https://miro.medium.com/max/1400/1*ReWhRfkx3Xr5iioSn-uAlA.png" class="kg-image" alt="Building Generalized Solutions is Killing Your Product"></figure><p>Now, in reality, if Romeo were honest and looked back at where we thought we should be aiming, he might find that the initial idea of the destination:</p><ul><li>lacked features (like notifications), making it unusable</li><li>had too many features that were not critical or required</li></ul><p>However, how often during debriefings do you hear questions like:</p><ul><li>How many things have we learned?</li><li>How many problems are we now aware of that were hidden at the beginning?</li></ul><p>Usually, these questions are not asked. It is almost assumed that the information we have now was always available.</p><p>This story illustrates the following UODP rule: “It is easier to implement products/features around one concrete customer and end up building highly used generalized products/features than to build generalized solutions from day one and end up with a solution that can be used by at least one customer.” In short, “Building a solution for one customer quickly is better than building a generalized solution slowly.”</p><p>But you know what? Many folks agree with the statements here but will still do otherwise. Why? Because when you optimize for learning, you assume that you do not know something; UODP is all about getting the best outcome in uncertain conditions.
However, many teams think their information is sufficient to draw conclusions about customers/markets/products. It takes a lot of courage for a product manager to acknowledge how uncertain our understanding of the customer and the market is. This brings us to the second problem of convincing people UODP is the right way for them. A key point of UODP is: you do not know something; you do not know what you do not know, but you must constantly search for it. This argument is not concrete (since we do not know what we do not know). Meanwhile, people will have factual arguments for why we should aim at the outlined target. They will have charts, poll results, and records of the interviews with customers. It will look like we know everything and know exactly where we need to be, so it makes sense to build the shortest possible path to that place. This is how reducing uncertainty gets traded for reducing resource waste under assumed full certainty.</p><p>After all, one cannot quantify how much information you have<strong> not learned</strong>, so the horrible downside of building generic solutions is not visible. On the other hand, the small amount of waste that a generic solution avoids is more or less visible. So we optimize not for what is important, but for what is simpler to measure.</p><p>The best product managers I have ever worked with shared my ideas about uncertainty. We agreed that the small details we do not know can have disproportionately significant outcomes on the product’s shape (hello, Taleb). Often, these details can only be found when an actual customer is doing the last mile of the integration.</p><p>Instead of dictating how the product should look, the best PMs focus on finding the right customer to onboard next. It should be a big customer, so we will still profit even if we onboard only her. At the same time, the customer should be willing to be a partner and work with us while we build the solution.
You cannot onboard a customer who only talks to you once per quarter.</p>]]></content:encoded></item><item><title><![CDATA[Do Not Work on the Project that Solves More Than one Problem]]></title><description><![CDATA[<p>One close friend of mine recently made a comment that stuck in my head for quite some time: "there are too many problems in the proposal". I was not comfortable about this myself (having too many problems) but:</p><ul><li>I was not sure exactly why. Why is it bad to take</li></ul>]]></description><link>https://uodp.club/do-not-work-on-the-project-that-solves-more-than-one-problem/</link><guid isPermaLink="false">60d4ccc39c5f5f0001c1febc</guid><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Thu, 24 Jun 2021 21:28:16 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2021/06/6852704246_b3b6a42930_b.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2021/06/6852704246_b3b6a42930_b.jpg" alt="Do Not Work on the Project that Solves More Than one Problem"><p>One close friend of mine recently made a comment that stuck in my head for quite some time: "there are too many problems in the proposal". I was not comfortable about this myself (having too many problems), but:</p><ul><li>I was not sure exactly why. Why is it bad to take the initiative and solve 15 problems?</li><li>How many problems is not too many?</li></ul><p>After spending some time reflecting on this, it finally hit me. If you start with the problem in mind, you can only start with <strong><u>one</u></strong> very specific problem that the customer is having. You cannot say: "There are 15 problems and I'm going to find exactly <strong><u>one</u></strong> solution that solves exactly these problems (no more and no less)." Technically you can say this, but it sounds ridiculous, doesn't it?</p><p>At this point I started wondering what the main reasons are to put multiple problems in the same document.
I found several documents where I was outlining multiple problems instead of one, and I found two root causes of such behaviour:</p><ol><li>Attempting to convince readers that the project is justified (working backward from the project and arguing that it needs to be done)</li><li>The solution was started backwards from the problem; however, after the solution was outlined, the author added all the <strong><u>other</u></strong> problems that the proposed solution solves (outlining nice side benefits of the solution)</li></ol><p>#1 is dangerous and a symptom that the author is NOT working backwards from the problem but rather trying to build a solution and finding a reason why the solution has to be built. The biggest danger is that, while the solution might indeed solve a very critical problem (or problems), it is hard to find out whether this is the simplest solution or not (since evaluation is done backward from the solution and not the problem).</p><p>#2, on the other hand, is an absolutely valid use-case. If you work backwards from the problem and find a solution, it makes sense to outline other problems that this solution solves (as a nice side benefit).</p><p>So how to distinguish #1 from #2? In many documents, #1 might not be distinguishable from #2. To make it possible, from now on I decided to explicitly outline <strong><u>the problem</u></strong> that we are solving with each project and the side problems that the solution solves (as a nice side benefit). While the main problem should be solved no matter what the solution is, the list of the rest of the problems can be dynamic (within the scope of the document).<br></p>]]></content:encoded></item><item><title><![CDATA[How to Identify The Right Customer]]></title><description><![CDATA[<p>One of the common questions that I get from people reading this blog: "fine, you have all these articles, but can you give me a clear answer to what UODP is?".
Well, today is the day when I will start answering this question.</p><p>In short, the user-oriented development process (UODP) is</p>]]></description><link>https://uodp.club/how-to-identify-the-right-customer/</link><guid isPermaLink="false">60b6b7effe2c8b0001f11ea3</guid><category><![CDATA[uodp]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Tue, 01 Jun 2021 22:48:02 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2021/06/customer-focus.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2021/06/customer-focus.jpg" alt="How to Identify The Right Customer"><p>One of the common questions that I get from people reading this blog: "fine, you have all these articles, but can you give me a clear answer to what UODP is?". Well, today is the day when I will start answering this question.</p><p>In short, the user-oriented development process (UODP) is an execution technique that includes several aspects:</p><ul><li>how to identify the right customer</li><li>how to identify the right problem</li><li>how to set the right goals</li></ul><p>Today we are going to cover part number one:</p><h1 id="how-to-identify-the-right-customer">How To Identify The Right Customer</h1><p>Each customer has three subgroups. They can all be the same person or completely separate departments within one big enterprise. These subgroups are (sorted by priority):</p><ul><li>decision criteria enforcers</li><li>integrators</li><li>end-users</li></ul><p>You have to address them one by one, starting from the top.</p><h1 id="decision-criteria-enforcers">Decision Criteria Enforcers</h1><p>To illustrate this group, let's take an example: we want to build a service for machine learning practitioners. Now imagine we're going to onboard a customer from a medical institution.
Since our service will be used to process users' personal and medical data, our decision criteria enforcers, in this case, can be:</p><ul><li>the <strong>government</strong> of a particular country that enforces standards like <a href="https://www.hhs.gov/hipaa/index.html">HIPAA</a></li><li>the company <strong>security team</strong> that might require specific compliance certifications to approve the service</li><li>the <strong>legal team</strong> that might need specific aspects of the product's terms of service to be in place</li></ul><p>It is clear even from these examples that it does not matter how excellent and fantastic the service/solution/product is. Our hypothetical customer still can NOT use it (even from a legal point of view) unless it is compliant with all the specific requirements.</p><p>Such a situation means that one does not have to have the best service in the world. <strong>One needs to have the best service among services that satisfy the requirements of decision-makers</strong>. </p><p>Good news: it makes life much easier, since your MVP does not even have to be on the same level as services created by shiny new startups. </p><p>Bad news: your MVP will need to have all the required certifications and functionality (so the actual work might be even more extensive).</p><p>To illustrate the point, let me ask you to recall a situation where you were asking yourself: "why does my university/employer/company/etc force me to use this horrible tool? I know hundreds of tools that do these tasks MUCH better". Such a situation exists for a straightforward reason: you are not the customer. You are the end-user, and that is why you are using the best tool among tools that satisfy the customer's requirements (and the customer is not you).</p><p>Such an example also shows why bottom-up approaches to winning the market seldom work in the enterprise world. 
I will write another article just around the topic of why people think that the bottom-up approach works.</p><p>Things are slightly different in the consumer business, where the end-user is also the customer and the integrator, all at once. But for the context of this article, we will assume a company is our primary customer.</p><h1 id="integrators">Integrators</h1><p>Integrators are the departments/teams in charge of taking your product and rolling it out to all the end-users within the org. If you are using a Windows laptop at work (I hope you are using a Mac), it is not you who installed it there. Windows (of a particular version) was preinstalled for you by integrators (IT department/Ops/etc.).</p><p>They will also be on the hook for supporting it. Integrators usually have the most extensive list of requirements, of which the majority might be critical blockers. A few random examples: your service might be required to:</p><ul><li>have an integration with Active Directory, since this is the IAM platform used in the company</li><li>have extensive logging support for the sake of security audits</li><li>support the ability to encrypt end-user data in a particular way with a user-provided encryption key</li><li>be natively integrated with a specific cloud provider (or the opposite, be on top of Kubernetes)</li><li>have the solution by X date</li></ul><p>The most crucial part is the last one (the date). Integrators constantly evaluate whether they need to start building an in-house solution or trust you (or your competitor) to deliver the solution fast enough. And if they start building an in-house solution, it is game over. </p><p>The cost of migration from one solution to anything new (even something drastically better) is very high and often unclear. I'm sure this statement is something that anyone who has ever migrated any service from one language to another (or even to a new version of the same language) can agree with. 
You will have to provide a very compelling argument for why the company needs to invest X millions in migrating to your solution instead of investing X millions elsewhere. And an investment in migrating from one platform to a reasonably similar one is almost never at the top of the list for fast-growing companies.</p><p>As you can see, when you are evaluating the requirements of the integrators, your main competitor is not another shiny startup; your main competitor is an in-house custom solution that will satisfy all the requirements from day zero, while your solution is always a risk that might come up short and late.</p><h1 id="end-users">End-Users</h1><p>Finally, let's assume that your product is fully compliant with all the enforced requirements, and integrators have started rolling it out to the initial set of end-users (let's say to 5% of the company). Technically, this is what you can call an MVP in the enterprise world. This MVP may be barely usable by end-users (but still usable). Integrators will be the middle layer doing the job of collecting all the feedback from the end-users (to whom they are rolling it out) and supplying this feedback to you. As integrators progress with rolling out the solution, they will give you a detailed list of requests that you need to address before they can continue rolling out your product to a bigger and bigger user base, until one day everyone in the company is using it.</p><h1 id="conclusion">Conclusion</h1><p>The majority of startups aiming for the enterprise segment are trying to build shiny things that are unusable by any big player on the market. They accrue the invisible cost of re-designing everything later on, when they have to start thinking about being profitable. Since most of today's startups have a lot of money (via initial funding), they can trick the public by ignoring the customer base that they should be focused on and instead focusing directly on end-users. 
Such a strategy creates a lot of noise but rarely produces anything usable for enterprises.</p><p>In a way, ignoring the requirements of the actual customers until the very last minute is not new; this is what we already had in the past with the good old "waterfall" process.<br></p>]]></content:encoded></item><item><title><![CDATA[UODP and Reliability]]></title><description><![CDATA[<p>Quite often these days, I hear that UODP and reliability are two things that contradict each other. Indeed, at first, if you are focusing on onboarding customers, how/why/when should one be focusing on reliability? Is it even needed? How can one justify the objective "to improve reliability" in</p>]]></description><link>https://uodp.club/uodp-and-reliability/</link><guid isPermaLink="false">5fa5c3a3a709290001adca49</guid><category><![CDATA[uodp]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Sat, 07 Nov 2020 23:28:11 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2020/11/brown-Guernsey-cow.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2020/11/brown-Guernsey-cow.jpg" alt="UODP and Reliability"><p>Quite often these days, I hear that UODP and reliability are two things that contradict each other. Indeed, at first, if you are focusing on onboarding customers, how/why/when should one be focusing on reliability? Is it even needed? 
How can one justify the objective "to improve reliability" at the expense of reducing the number of customers that we will be onboarding in the next month/quarter/year?</p><p>All these questions are usually symptoms of misunderstanding some parts of the UODP framework, which leads to one of several outcomes:</p><ul><li>teams do not do anything related to reliability at all until it is too late</li><li>teams religiously follow the book of "best reliability practices" and apply everything in it</li></ul><p>The second case (applying all possible best practices) is often the more curious one, since such a team struggles to calculate how many resources it should put into reliability versus other initiatives. All of them? But what is even more interesting, in such a case, is that the team still often ends up with unhappy customers.</p><p>So, let's focus today on the following question: according to UODP, when and how should we prioritize reliability efforts?</p><h2 id="define-reliability">Define Reliability</h2><p>Let's start by defining reliability according to the UODP framework. The critical rule of UODP is this: we should always be working backward from the customer's needs. If one's product does not have customers, one does not care about reliability. If a product has one customer, the release/escalation/communication process can be established through direct engagement with that customer. However, when one has thousands of customers, we can no longer maintain direct connections with all of them! How will you make sure that you are still focusing on the customer when you have thousands of them?</p><p>Now let's look at the teams that claim they are using UODP and do not do reliability. Such a case almost always means that they are focusing only on either new customers or a very small number of old customers. Since they are still focusing on the customers, it gives the illusion that the work is always done backward from the customer's needs. 
But in reality, while the team is focusing on new customers, existing customers might be struggling.</p><p>But how to be customer-oriented when you have thousands of customers? Here is where reliability can help. According to UODP, reliability includes two parts:</p><ul><li>Building a model of your average user</li><li>Keeping this user happy</li></ul><p>This definition provides two key points.</p><p>First, as you can see, it defines when one needs to start paying attention to reliability: when the total number of customers is so big that it can no longer be covered by deep engagements.</p><p>Secondly, this definition helps to avoid doing reliability for the sake of doing reliability. It might be surprising, but measuring too many things is as wrong as measuring nothing. I've seen many teams invest heavily in reliability just to find out that their customers are still not happy. UODP helps you avoid the "<a href="https://successfulnonprofits.com/portfolio/patty-azzarello/">limping cow problem</a>".</p><h2 id="how-to-implement-reliability">How To Implement Reliability?</h2><p>Ok, so now we have a good understanding of when and why we should be paying attention to reliability. But how many resources should we allocate? And to what exactly?</p><p>Let's start by answering the question of what exactly it means to "build a model of your customer" and "keep it happy". 
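</p><p>To make the answer concrete before unpacking the terms, here is a minimal sketch of where this is heading: once critical journeys are measured, "keeping the modeled user happy" becomes a mechanical check. Everything in the example is illustrative; the "run training job" journey, the outcome log, and the 99.5% target are assumptions for the sketch, not part of UODP itself.</p>

```python
# Minimal illustrative sketch: checking one critical user journey (CUJ)
# against an objective. The CUJ name, the outcome log, and the 99.5%
# target are hypothetical, not prescribed by the framework.

def availability_sli(outcomes):
    """SLI = fraction of successful attempts at a critical user journey."""
    if not outcomes:
        return 1.0  # no traffic observed: nothing measured as unhealthy
    return sum(1 for ok in outcomes if ok) / len(outcomes)

def slo_met(sli, target=0.995):
    """SLO check: is the modeled user still happy for this CUJ?"""
    return sli >= target

# Hypothetical week of attempts at a "run training job" CUJ:
# True = success, False = user-visible failure.
week = [True] * 996 + [False] * 4
sli = availability_sli(week)
print(f"SLI = {sli:.4f}, SLO met: {slo_met(sli)}")  # SLI = 0.9960, SLO met: True
```

<p>When such a check fails, it tells you which journey of the "modeled user" is unhealthy and how far it is from the objective, which is exactly the signal the definitions below are meant to produce.</p><p>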
UODP always prioritizes deep engagement; for your modeled customer, deep engagement means the following things:</p><ul><li>From your previous deep engagements, you should create a list of the main <a href="https://www.reforge.com/brief/critical-user-journeys-how-google-product-teams-react-when-growth-slows#MzvUyQVl-o6MqTLHDe5DSg">critical user journeys</a> (CUJ) that your users frequently perform</li><li>Build key service level indicators (SLI) that will allow you to measure the "health status" of each CUJ</li><li>Define a service level objective (SLO) for each SLI</li></ul><p>Now that you have created a "deep engagement" with your "modeled user", you will know when the user is unhappy. More importantly, you will now know what you need to fix and by when, and therefore the question of resource investment will resolve itself.</p>]]></content:encoded></item><item><title><![CDATA[Main Pitfalls of UODP]]></title><description><![CDATA[<p>I have collected here the main pitfalls of UODP that I have heard over time.</p><p><strong>UODP is only about incremental changes and short term initiatives</strong></p><p>The key idea of UODP is to work backwards from real problems that customers are experiencing. 
It has nothing to do with the amount of time or complexity</p>]]></description><link>https://uodp.club/main-pitfalls-of-uodp/</link><guid isPermaLink="false">5e9cf93fc7cae900018c2667</guid><category><![CDATA[uodp]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Mon, 27 Apr 2020 02:05:31 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2020/04/download--8--1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2020/04/download--8--1.jpeg" alt="Main Pitfalls of UODP"><p>I have collected here the main pitfalls of UODP that I have heard over time.</p><p><strong>UODP is only about incremental changes and short term initiatives</strong></p><p>The key idea of UODP is to work backwards from real problems that customers are experiencing. It has nothing to do with the amount of time or the complexity of the solution. However, from time to time we have a tendency to turn any problem at hand into a long-term project. UODP allows one to quickly solve all the problems that can be solved with incremental changes and to focus long-running efforts only where they are strictly required. Inevitably, most of the problems will indeed be solved in an incremental way; however, this will free up additional resources for the problems that require quarters or years of work.</p><p>While this is one of the main pitfalls, the rest of the list looks like this:</p><ul><li><strong>UODP is about incremental changes</strong>. No, it is not; it is about solving problems. And yes, if you can solve a problem that will earn you a billion by doing incremental changes, you should do incremental changes.</li><li><strong>We have this amazing tool/technology/solution, let's find the problem that it can solve for our customers</strong>. 
</li><li><strong>I have validated the problem, therefore my solution is also validated</strong>.</li><li><strong>I put the word "user" in my goal, and therefore now it is a user-oriented goal</strong>.</li><li><strong>Different user audience for validating the problem and building the product</strong>. It is very simple to pitch a free product to students and ask if they will be using it for free; it is completely different to sell the same product for money to a CTO.</li><li><strong>Ignoring the last mile of integration.</strong></li><li><strong>Sticking to the original plan/scope while users' problems have clearly changed</strong></li></ul>]]></content:encoded></item><item><title><![CDATA[First Steps Towards UODP]]></title><description><![CDATA[<p>We have not even defined yet what UODP is. However, I am already getting random questions like: "how can I try this? do you have any formal steps for beginners?"</p><p>This small article is going to address precisely that question. 
Without further ado, here are the ...</p><h2 id="steps-towards-uodp">Steps Towards UODP</h2><ol><li>Define the set of problems that you were thinking of solving (not projects or features)</li><li>Define the set of real customers impacted by those problems</li><li>Assign tasks to onboard specific customers to the solution that solves the problem</li><li>Pick the team members who will have a personal task to onboard the customer</li><li>Onboard the customer</li><li>Generalize the solution for the rest of the customers</li></ol><h2 id="common-mistakes">Common Mistakes</h2><h3 id="working-backwards-from-technology-instead-of-a-problem">Working Backwards From Technology Instead of a Problem</h3><p>Starting with a technology/service/feature and trying to find where to apply it. When a team first attempts to practice UODP, it usually starts by searching for problems that can be solved with the technology/service/feature it has on the roadmap. Instead, the team should start by revisiting the customers' problems that it wants to solve (irrespective of the technology/service/features it is currently building).</p><p>Main symptoms:</p><ul><li>switching target customers if the current customer does not have a problem that can be solved with the technology/service/feature that is in development</li><li>promoting a technology/service/feature to customers instead of learning more about existing customers' infrastructure and problems</li></ul><h3 id="not-onboarind-the-customers">Not Onboarding the Customers</h3><p>Another big mistake is to ignore items 3+ on the list. I've seen this multiple times: when a team only starts using UODP, it usually focuses on validating the problem (which is a nice thing) but completely drops the steps that confirm that the solution solves the problem. 
It usually looks like this:</p><ul><li>validates the problem</li><li>builds the solution</li><li>releases the solution and claims that the problem has now been solved</li></ul><p>While such a process is already much better, since it starts from a validated problem, one cannot yet argue that the problem is solved. The only proof that the problem is solved is when a real customer that had this problem is onboarded to the new solution.</p><h3 id="not-focusing-on-main-problem-s-">Not Focusing on Main Problem(s)</h3><p>Another issue that I've observed: since almost any problem can be justified by identifying customers who had it, one can end up with a roadmap that has tons of features solving "validated problems". In such a case, a team can still deliver an unsuccessful product with zero customers using it.</p><h2 id="faq">FAQ</h2><p>Q: What if customers go away?</p><p>A: Customers might indeed decide not to proceed after you have already spent engineering cycles on onboarding them. However, in reality, this is rarely the case if you are doing your job right. But if you want to protect yourself, always have several customers that you are onboarding.</p><p>Q: Would this be over-optimization for a specific customer?</p><p>A: While this is a common theoretical concern, it is rarely the case in real life. 
On the contrary, it is much more likely that, by building a generic solution, you end up with no customers at all.</p>]]></content:encoded></item><item><title><![CDATA[How to Identify a Successful Product]]></title><description><![CDATA[<p>Have you ever asked yourself if there are any product attributes that can tell you whether a product will be successful or not?</p><p>This question is such a hard question that many leaders tend to answer a more straightforward question (especially in the IT field): will the tools that we picked</p>]]></description><link>https://uodp.club/how-to-identify-a-successful-product/</link><guid isPermaLink="false">5e647963421e4c000180d6f4</guid><category><![CDATA[Successful Product]]></category><dc:creator><![CDATA[Viacheslav Kovalevskyi]]></dc:creator><pubDate>Mon, 09 Mar 2020 00:13:33 GMT</pubDate><media:content url="https://uodp-club-ghost-content.storage.googleapis.com/2020/03/failure-and-success-text-showing-in-papers-thumbnail-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://uodp-club-ghost-content.storage.googleapis.com/2020/03/failure-and-success-text-showing-in-papers-thumbnail-1.jpg" alt="How to Identify a Successful Product"><p>Have you ever asked yourself if there are any product attributes that can tell you whether a product will be successful or not?</p><p>This question is such a hard question that many leaders tend to answer a more straightforward question (especially in the IT field): will the tools that we picked to deliver the project allow us to deliver it successfully? While the latter question is also important, I rarely see wrong technical decisions as the reason for product failure. Quite the opposite: I've seen many products that are horrible from a technical perspective end up being quite successful. 
This observation boils down to a simple statement: it is better to solve the right problem with the wrong tools than to solve the wrong problem with the right tools.</p><p>Just a few examples:</p><ul><li>RethinkDB (excellent technology but a <a href="https://rethinkdb.com/blog/rethinkdb-shutdown">failed product</a>) vs. MongoDB (a highly successful product but <a href="https://www.reddit.com/r/mongodb/comments/43m2om/how_exactly_does_mongodb_lose_data_and_is_it/">horrible technology</a>)</li><li><a href="https://consequenceofsound.net/2015/11/r-i-p-the-microsoft-zune-is-officially-dead/">Microsoft Zune</a> - everyone who used it loved it. However, the product failed</li><li>Airbus A380 - state-of-the-art engineering that is a complete failure as a product (BTW, we will be using the A380 case a lot in this article)</li></ul><p>Okay, so the product needs to solve the right problem. But are there any other aspects of a potentially successful product? And what does "the right problem" mean? In this article, I'm going to introduce you to the high-level criteria of any successful product. They are so simple and generic that my site has a page (<a href="http://uodp.club/successful-product/">uodp.club/successful-product/</a>) that outlines them for you, so you can get back to them whenever needed. 
I would advise you to bookmark it and revisit it from time to time (I update it occasionally).</p><p>So, a successful product is a product that (rev Mar.08.20 is used here):</p><ul><li>Solves a real customer's problem</li><li>Solves one of the primary customer's problems</li><li>Solves a problem that will exist (and will remain a primary problem) by the time it is delivered</li><li>Is easily pluggable into the <strong>existing</strong> customer's infrastructure</li><li>Has a cost of adoption that is smaller (ideally: drastically lower) than the benefits that the customer will get</li></ul><p>While this might look like a small list of things, you would be surprised how easily people can get items on this list wrong.</p><h2 id="instead-of-a-disclaimer">Instead Of a Disclaimer</h2><p>One thing to note before dissecting each item one by one. If a product does not qualify as a successful product, it does not mean that it cannot become a successful product in the future. On my site, I will be calling such products (products that do not qualify as successful products now) "Moon Shots". We will dive into the "Moon Shot" definition later, but for now, let's say that a "Moon Shot" is a product that has minimal chances of becoming a successful product.</p><p>Also, the definition of success, in this case, is minimal. I am only defining it as a product that will have usage among customers at the initially expected level (or more). However, such a definition does not even say whether such a usage level would also give any reasonable <a href="https://www.investopedia.com/terms/r/returnoninvestment.asp">ROI</a>.</p><p>With these caveats in mind, let's proceed...</p><h2 id="solves-a-real-customer-s-problem">Solves a Real Customer's Problem</h2><p>The problem that the product is solving has to exist here and now. This does not mean that you should not innovate. 
A user might not know that the problem exists, since there are no solutions to it, and therefore everyone has just settled on the compromises they see. However, do not confuse "existing problems that customers are not thinking about" with "nonexisting problems that you think customers will have in the future". Since identifying the first is much harder than the latter, we frequently tend to come up with an imaginary future that will have new problems and start solving them.</p><p>If you are focusing on forecasting the market, you are doing a "Moon Shot" by design. Let me give you an example: the Airbus A380, a disaster that never generated enough money even to compensate for all the investments. An excellent high-level overview of the failure can be found in this short video:</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/NlIdzF1_b5M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>In short: in ~1988 Airbus performed a market analysis and made a bet on a transportation model that turned out to be a mistake. This analysis shows that even companies with multi-billion budgets for market evaluations cannot predict the future. We will speak about why that is later.</p><h2 id="solves-one-of-the-primary-customer-s-problems">Solves One of the Primary Customer's Problems</h2><p>This requirement is much, much harder to satisfy compared to the previous one, and here is why. Almost any product/feature that succeeded (or failed) can, with a high level of certainty, be traced to a real problem that the product/feature was solving.</p><p>I'm sure that <a href="https://arstechnica.com/gadgets/2020/02/andy-rubins-smartphone-startup-essential-is-dead/">the Essential Phone</a> or <a href="https://mashable.com/2015/09/09/amazon-fire-phone-dead/">the Amazon Fire Phone</a> had a real customer's problem that they were solving. 
The Airbus A380 had a very well-defined and evident problem in mind.</p><p>I love the following military analogy: imagine a general commands to open fire, and imagine that no one does so. Now, imagine that the reason no one does as commanded is that there are no bullets. In such a situation, it does not matter what other problems you solve. If you are not going to solve the problem with the bullets, no other solution will likely be adopted.</p><p>Solving one of the main problems is the key to success. Whoever solves the main problem usually has ultimate access to customers and the customers' trust. Such a situation, by itself, makes you the first candidate to be asked for help with solving other problems tomorrow, when the main problems of today are resolved.</p><h2 id="solves-a-problem-that-will-exist">Solves a Problem That Will Exist</h2><p>It is very, very hard to make sure that an existing problem will still exist, and will still be one of the main problems, by the time the product is delivered. The need to predict the future is the critical reason why it is theoretically impossible to have 100% certainty that a product will be successful.</p><h2 id="easily-pluggable-in-the-existing-customer-s-infrastructure">Easily Pluggable in the Existing Customer's Infrastructure</h2><p>Let's come back to the example of the A380. Here is another video that shows a slightly different angle of the same problem:</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/FogSAQ63f3Q?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>In short, the A380 became a logistical nightmare. It was not possible to use it within the existing infrastructure. Even today, there is only a tiny number of airports capable of working with the A380. To show how important it is for products to be pluggable into existing infrastructure, let's also look at the Boeing 777X. This airplane has amazing foldable wingtips. 
And the main reason they were created is so that the size of the aircraft on the ground would allow it to taxi at smaller airports.</p><p>Watch it in action:</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/rNyJbdv2KF4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><h2 id="afterwords">Afterwords</h2><p>Unfortunately, as I have mentioned earlier, it is theoretically impossible to satisfy all the items on this list with 100% certainty. At the very least, you will never be sure that the problem (even if you got the problem right) will still exist by the time your product is delivered. However, this list gives you an excellent framework to focus on, a North Star. I've used it many times to pivot my products in the early stages of development, even before writing any lines of code. I've also been using it in many product technical reviews, but that is a topic for the next article.</p><h2 id="home-work">Home Work</h2><p>I would suggest allocating some time to do a quick analysis of all the products that you own and figure out whether any of them meet all the required criteria.</p>]]></content:encoded></item></channel></rss>