The Agile methodology is a project management approach that breaks larger projects into several phases. It is a process of planning, executing, and evaluating with stakeholders. Our resources provide information on processes and tools, documentation, customer collaboration, and adjustments to make when planning meetings.
Value streams have been a central tenet of Lean thinking for decades, starting with Toyota and the Lean Manufacturing movement, and are now widely adopted across industries. Despite this, many businesses have yet to harness the full potential of value streams to drive organizational change and achieve greater efficiency and effectiveness. Instead, they may focus narrowly on metrics like team velocity or production pipeline speed, missing the broader picture of the end-to-end system. In modern product development, understanding value streams is crucial to optimizing our ways of working and delivering value to customers. By mapping the path to value, we can gain visibility into our processes and identify improvement areas, such as code deployment bottlenecks or mismatches between personnel and roles. In this blog, we will explore the concept of value stream mapping and its role in actualizing the purpose of DevOps transitions. We'll debunk common myths and misunderstandings around value stream mapping and introduce principles to help you succeed in this activity and beyond. Whether you're a seasoned DevOps practitioner or just starting on your journey, you won't want to miss this opportunity to unlock the holy grail of Agile-DevOps value stream hunting.

What Is Value Streaming, and Why Is the Path to Value Streaming Quintessential for Your Agile-DevOps Journey?

Value stream mapping is the process of analyzing and improving the flow of value to customers by mapping out the end-to-end process, from idea to delivery. Understanding value streams and mapping the path to value streaming is essential for any Agile-DevOps journey. Consider a real-life scenario: a software development team struggling to deliver value to customers efficiently. They may focus on completing tasks and meeting deadlines but fail to take a holistic view of the entire process.
Through value stream mapping, they can identify bottlenecks in the development process, such as long wait times for testing or approval processes, and then adjust and streamline the flow of value to customers. Value stream mapping is quintessential to an Agile-DevOps journey because it helps teams understand how their work fits into the larger picture of delivering value to customers. By mapping out the entire process, teams can see where delays occur, where handoffs are inefficient, and where there is room for improvement. Consider a DevOps team struggling to smoothly integrate code changes into the production environment. Through value stream mapping, they may discover that their testing process is too time-consuming or that there are too many manual steps in the deployment process. By identifying these inefficiencies, they can automate testing and deployment, leading to faster value delivery to customers. By taking a holistic view of the entire process, teams can identify inefficiencies, reduce waste, and deliver customer value more efficiently and effectively. Value stream mapping helps organizations identify and eliminate inefficiencies in their processes, leading to the faster, more efficient delivery of value to customers. Following are some more examples:

A financial services company wants to improve the time it takes to process customer loan applications. Through value stream mapping, they discover that there are long wait times between different departments and multiple handoffs that slow down the process. By identifying these inefficiencies, they can redesign the operation to eliminate unnecessary steps and reduce wait times, resulting in faster loan processing and improved customer satisfaction.

A healthcare organization wants to improve patient care by reducing the time it takes for lab results to be processed and returned to the doctor.
Through value stream mapping, they discover that there are too many manual steps in the lab testing process and bottlenecks in the information flow between departments. By redesigning the process to automate testing and improve communication, they can reduce the time it takes to process lab results, leading to faster patient diagnosis and treatment.

A software development company wants to improve the quality of its code releases. Through value stream mapping, they discover that multiple handoffs between development, testing, and operations teams lead to errors and delays. By redesigning the process to automate testing and improve communication between teams, they can reduce the time it takes to identify and fix bugs, resulting in higher-quality code releases and happier customers.

Embarking on a Lightweight Quest to Value Stream Mapping for Agile-DevOps Teams

A lightweight approach to value stream mapping can help Agile-DevOps teams streamline their processes, improve efficiency, and deliver value to their customers more quickly. By avoiding unnecessary complexity and focusing on the most critical areas of the process, teams can achieve success and stay competitive in today's fast-paced business environment. A lightweight approach means using simple tools and methods to map out your processes instead of getting bogged down in complex and time-consuming activities. This approach can be particularly beneficial for Agile-DevOps teams, often focused on delivering value quickly and efficiently. By taking a lightweight approach, teams can focus on identifying the most critical areas of the process that need improvement and acting quickly to address them. A lightweight approach also allows for greater flexibility and agility, which is essential in the fast-paced world of Agile-DevOps. Teams can quickly adapt and adjust their value stream mapping activities as needed to stay aligned with their goals and objectives.
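The bottleneck hunting described in these examples boils down to a small calculation once the value stream is mapped. As a sketch (the stage names and hours below are made up for illustration, not data from any real team), this computes lead time, flow efficiency, and the stage with the longest queue:

```python
def flow_metrics(stages):
    """stages: list of (name, process_hours, wait_hours) tuples."""
    process = sum(p for _, p, _ in stages)
    wait = sum(w for _, _, w in stages)
    lead_time = process + wait
    flow_efficiency = process / lead_time  # share of time spent adding value
    bottleneck = max(stages, key=lambda s: s[2])[0]  # longest wait = best improvement target
    return lead_time, flow_efficiency, bottleneck

# Illustrative numbers only: a ticket spends far longer waiting than being worked on.
stages = [
    ("develop", 16, 4),
    ("code review", 2, 24),
    ("test", 8, 40),
    ("deploy", 1, 8),
]
lead, efficiency, bottleneck = flow_metrics(stages)
# lead time is 103 hours, of which only 27 are value-adding work; "test" has the longest queue
```

Even a toy model like this makes the point of the examples above: the biggest wins usually hide in the wait times between stages, not in the work itself.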
Busting the Myths and Misconceptions: The Truth About Value Streams and Value Stream Mapping

Some common myths and misconceptions about value streams and value stream mapping include the idea that they are only relevant to manufacturing or physical products, that they are too complex and time-consuming to implement, or that they are only helpful for large organizations. However, the truth is that value streams and value stream mapping can be applied to any industry or process, regardless of its size or complexity. On the contrary, they provide a holistic view of the end-to-end process, allowing teams to identify and address bottlenecks, reduce waste, and improve efficiency. Another misconception is that value stream mapping is a one-time activity, but in reality, it should be an ongoing process that evolves with the organization's needs and goals. Nor is it necessary to understand all the processes completely upfront: it's perfectly acceptable to start with a smaller scope and build on that as needed. By busting these myths and misconceptions, teams can better understand the actual value of value stream mapping and how it can be a valuable tool in their Agile-DevOps journey. They can avoid unnecessary complexity and focus on the critical areas of the process that need improvement. Ultimately, this will lead to a more efficient and effective operation and better customer value delivery.

Unlocking Business Excellence: Maximize the Benefits of Agile-DevOps Value Stream Mapping Using 8 Lean Principles

If you want to take your Agile-DevOps team to the next level, then unlocking business excellence with Agile-DevOps value stream mapping and eight Lean principles is the way to go. Value stream mapping (VSM) is a Lean tool that visually represents the process steps required to deliver value to customers. The VSM process identifies bottlenecks, waste, and opportunities for improvement in the value stream.
In addition, it helps Agile-DevOps teams to focus on value-added activities and eliminate non-value-added activities, resulting in reduced lead time, improved quality, and increased customer satisfaction. To maximize the benefits of VSM, Agile-DevOps teams should follow eight Lean principles:

1. Define value from the customer's perspective: Identify what your customers consider valuable and focus your efforts on delivering that value.
2. Map the value stream: Create a visual representation of the entire value stream, from idea to delivery, to identify inefficiencies and opportunities for improvement.
3. Create flow: Eliminate waste and create a smooth workflow through the value stream to improve delivery time.
4. Implement pull: Use customer demand to drive work and avoid overproduction.
5. Seek perfection: Continuously improve the value stream to eliminate waste and improve efficiency.
6. Empower the team: Provide your Agile-DevOps team with the tools, resources, and authority they need to succeed.
7. Practice Lean leadership: Create a culture of continuous improvement and empower your team to drive change.
8. Respect people: Treat your team members respectfully and create a positive work environment that encourages collaboration and innovation.

By implementing these eight Lean principles, Agile-DevOps teams can unlock business excellence and deliver superior customer value.

Deploying the Power of Principles: Succeeding in Value Stream Mapping in a Lightweight Way and the Horizons Beyond

By embracing a lightweight approach and deploying the power of Lean principles, organizations can succeed in value stream mapping and achieve business excellence. The lightweight approach enables organizations to identify areas that need improvement, break down silos, and facilitate collaboration across teams, thus unlocking the true potential of value stream mapping. It also helps organizations to sustain their efforts and continue to make improvements in the long run.
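One of the principles above, "implement pull," translates neatly into code: work is started only when capacity frees up, never pushed into the system the moment it arrives. The following is our own minimal sketch of that idea (class and method names are illustrative, not from any Lean tooling):

```python
from collections import deque

class PullSystem:
    """Work enters a backlog and is pulled only while WIP stays under the limit."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.backlog = deque()
        self.in_progress = []

    def add_demand(self, item):
        self.backlog.append(item)  # demand is recorded, not started: no overproduction

    def pull(self):
        started = []
        while self.backlog and len(self.in_progress) < self.wip_limit:
            item = self.backlog.popleft()
            self.in_progress.append(item)
            started.append(item)
        return started

    def finish(self, item):
        self.in_progress.remove(item)

line = PullSystem(wip_limit=2)
for ticket in ["A", "B", "C"]:
    line.add_demand(ticket)
first = line.pull()    # only two items start, despite three being demanded
line.finish("A")
second = line.pull()   # finishing work frees capacity, which pulls "C" in
```

Pushing all three items into progress at once would be exactly the overproduction the principle warns against; the WIP limit forces downstream capacity to drive the flow.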
By embracing the eight Lean principles, organizations can achieve business excellence by continuously improving their value stream and delivering value to their customers. These principles include creating customer value, mapping the value stream, establishing flow, implementing pull, seeking perfection, embracing scientific thinking, empowering teams, and respecting people. So, if you're looking to unlock the true potential of your Agile-DevOps transition, take advantage of value stream mapping. Don't wait; take the first step towards success, start your value stream mapping (VSM) journey today, and take your Agile-DevOps team to the next level!
Railsware is an engineer-led company with a vast portfolio of building projects for companies, so when talking about Jira best practices for developers, we speak from experience.

Why Do People Love Jira?

Jira is by no means perfect. It certainly has its downsides and drawbacks. For instance, it is a behemoth of a product and, as such, is pretty slow when it comes to updates or additions of new functionality. Some developers also say that Jira goes against certain agile principles because—when in the wrong hands—it can promote fixation on due dates rather than delivery of product value. Getting lost in layers and levels of several boards can, indeed, disconnect people by overcomplicating things. Still, it remains one of the preferred project management tools among software development teams. Why is that?

Permissions: Teams, especially bigger ones, work with many different experts and stakeholders, besides the core team itself. So, setting up the right access to information is crucial.

Roadmaps and epics: Jira is great for organizing your project on all levels. On the highest level, you have a roadmap with a timeline. Then, you have epics that group tasks by features or feature versions. Inside each epic, you create tickets for implementation.

Customization: This is Jira's strongest point. You can customize virtually anything: the fields for your Jira tickets; the UI of your tickets, boards, roadmaps, etc.; notifications.

Workflows: Each project may require its own workflow and set of statuses per ticket, e.g., some projects have a staging server and QA testing on it and some don't.

Search: Unrivalled, if you know JQL (Jira Query Language), Jira's SQL-like search syntax. Finding something that would have been lost to history in a different project management tool is a matter of knowing JQL. The ability to add labels using keywords makes the aforementioned search and analysis even simpler.
Automation: The ability to automate many actions is among the greatest and most underestimated strengths of Jira. You can create custom flows where tickets are automatically handed back and forth between temporary assignees (like the ping-pong between development and QA). You can make an issue fall into certain columns on the board based on its content, move issues from "to do" to "in progress" when there's a related commit, or post the list of released tickets to Slack as a part of release notes.

Integrations and third-party apps: GitHub, Bitbucket, and Slack are among the most prominent Jira integrations, and for good reasons. Creating a Jira ticket from a message, for example, is quite handy at times. The Atlassian Marketplace broadens your reach even further with thousands of add-ons and applications.

Broad application: Jira is suitable for both iterative and non-iterative development processes for IT and non-IT teams.

Jira Best Practices

Let's dive into the nitty-gritty of Jira best practices, whether for multiple projects or a single one.

Define Your Goals and Users

Jira, being as flexible as it is, can be used in a wide variety of ways. For instance, you can primarily rely on status checking throughout the duration of your sprint, or you can use it as a project management tool on a higher level (a tool for business people to keep tabs on the development process). Define your team and goals. Now that you have a clear understanding of the "why," let's talk about the "who." Who will be the primary Jira user? And will they be using it to:

Track the progress on certain tickets to know where and when to contribute?
Use it as a guide to learn more about the project?
Track time for invoicing clients, performance for internal, data-driven decision making, or both?
Collaborate, share, and spread knowledge across several teams involved in the development of the product?
The answers to the above questions should help you define the team and goals in the context of using Jira.

Integrations, Third-Party APIs, and Plugins

Jira is a behemoth of a project management platform. And, like all behemoths, it is somewhat slow and clunky when it comes to moving forward. If there's some functionality you feel is missing from the app, don't shy away from the marketplace. There's probably a solution for your pain already out there. Our team, for instance, relies on a third-party tool to create a series of internal processes and enhance fruitful collaboration. You can use ScriptRunner to create automation that's a bit more intricate than what comes out of the box. Or you can use BigGantt to visualize the progress in a friendly drag-and-drop interface. Don't shy away from combining the tools you use into a singular flow. An integration between Trello and Jira, for instance, can help several teams, like marketing and development, stay on the same page.

Use Checklists in Tickets

Having a checklist integrated into your Jira issues can help build a culture centered around structured, organized work, with transparency and clarity for everyone. Our Smart Checklist for Jira offers even more benefits:

You have a plan: Oftentimes it's hard to start a feature implementation, and without a plan, you can go in circles for a long time.
Mental peace: Working item by item is much calmer and more productive than dealing with the unknown.
Visibility of your work: If everyone sees the checklist progress, you are all on the same page.
Getting help: If your progress is visible, colleagues can give you advice on the plan itself and the items that are challenging you.
Prioritization: Once you have the items list, you can decide with your team what goes into v1, and what can be easily done later.
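The visibility benefit above hinges on checklist progress being a first-class, inspectable thing. As a toy sketch (in the spirit of, but not the actual, Smart Checklist data model), a ticket checklist can be modeled in a few lines:

```python
class Checklist:
    """A minimal ticket checklist: named items, each either done or not."""

    def __init__(self, items):
        self.items = {item: False for item in items}

    def complete(self, item):
        self.items[item] = True

    def progress(self):
        # The visible "3/5 done" counter that keeps the whole team on the same page.
        done = sum(self.items.values())
        return f"{done}/{len(self.items)} done"

dod = Checklist(["plan reviewed", "tests pass", "docs updated"])
dod.complete("tests pass")
```

After marking one item, `dod.progress()` reads "1/3 done"; anyone glancing at the ticket sees exactly where the work stands without asking.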
You can use checklists as templates for recurring processes: Definition of Done, Acceptance Criteria, onboarding and service desk tickets, etc., are prime candidates for automation. Moreover, you can automatically add checklists to your Jira workflow based on certain triggers, like the content of an issue or the workflow setup. To learn more, watch our YouTube video: "How to use Smart Checklist for Jira."

Less Is More

Information is undoubtedly the key to success. That said, in the case of a Jira issue, moderation is key. What we've noticed over our time of experimenting with Jira is that adding info that is either unnecessary or irrelevant introduces more confusion than clarity into the process. Note: We don't mean that Jira shouldn't be used for knowledge transfer. If some information (links to documentation, your internal processes, etc.) is critical to the completion of a task, share it inside the task. Just use a bit of formatting to make it more readable. However, an age-old history of changes or an individual's perspective on the requirements is not needed. Stick to what is absolutely necessary for the successful completion of a task and elaborate on that. No more, no less.

Keep the Backlog and Requirements Healthy and Updated

Every project has a backlog: a list of ideas, implementation tickets, bugs, and enhancements to be addressed. Every project that does not keep its backlog well-maintained ends up in a pickle sooner rather than later. Some of our pro tips on maintaining a healthy backlog are:

Gradually add requirements to the backlog: You'll have a point of reference at all times, whereas moving them there immediately may cause issues, as they may change before you are ready for implementation.
Keep all the work of the development team in a single backlog: Spreading yourself thin across several systems that track bugs, technical debt, UX enhancements, and requirements is a big no-no.
Set up a regular backlog grooming procedure: You'll get a base plan of future activities as a result. We'd like to point out that said plan needs to remain flexible to accommodate changes based on feedback and/or tickets from marketing, sales, and customer support.

Have a Product Roadmap in Jira

Jira is definitely not the go-to tool for designing a product roadmap, yet having one in your instance is a major boon because it makes the entire scope of work visible and actionable. Additional benefits of having a roadmap in Jira include:

It is easier to review the scope with your team at any time.
Prioritizing new work is simpler when you can clearly see the workload.
You can easily see dependencies when several teams are working on a project.

Use Projects as Templates

Setting up a new project can be tedious even if you've done it a million times before. This can be especially troublesome in companies that continuously deliver products with a similar approach to development, such as mobile games. Luckily, with the right combination of tools and add-ons, there's no need to do the same work yet again. A combination of DeepClone and Smart Checklist will help you clone projects, issues, stories, or workflow conditions and use them as project templates.

Add Definition of Done as a Checklist to All of Your Jira Issues

Definition of Done is a pre-release checklist of activities that determine whether a feature is "releasable." In simpler words, it determines whether something is ready to be shipped to production. The best way of making this list accessible to everyone in the team is to put it inside the issues. You can use Smart Checklist to automate this process; however, there are certain rules of thumb you'll need to follow to master the process of designing a good DoD checklist:

Your objectives must be achievable. They must clearly define what you wish to deliver.
It's best if you keep the tasks measurable. This will make the process of estimating work much simpler.
Use plain language so everyone who is involved can easily understand the Definition of Done.
Make sure your criteria are testable so the QA team can verify they are met.

Sync With the Team After Completing a Sprint

We have a nice habit of running Agile Retrospective meetings here at Railsware. These meetings, also known as Retros, are an excellent opportunity for the team to get recognition for a job well done. They can also help you come up with improvements for the next sprint. We found that the best way of running these meetings is to narrow the conversation to "goods" and "improves." This way, you will be able to discuss why the things that work are working for you. You'll also be able to optimize the rest.

Conclusion

If there's a product where there's something for everyone—within the context of a development team—it's probably Jira. The level of customization, adaptability, and quality-of-life features makes it an excellent choice for teams that are willing to invest in developing a scalable and reliable process. If there's anything missing from the app, you can easily find it on the Atlassian Marketplace.
As software development continues to evolve, two approaches have gained a lot of attention in recent years: Agile and DevOps. Agile has been around since the early 2000s and focuses on delivering software frequently through iterative and incremental development. DevOps, on the other hand, is a newer approach that focuses on speeding up the software delivery process through collaboration, automation, and continuous delivery. While both Agile and DevOps aim to improve efficiency and collaboration within the development team, there are some key differences between the two approaches. Agile is focused on the software development process, while DevOps is focused on deployment, integration, and delivery. Agile uses a methodology of sprints, daily stand-ups, and retrospectives to deliver working software frequently. DevOps, on the other hand, uses continuous integration and continuous deployment to speed up the delivery process.

Agile

Agile is a software development methodology that focuses on delivering value to customers through iterative and incremental development. It values collaboration, flexibility, and customer satisfaction. Agile teams work in short sprints, usually lasting 1-4 weeks, and aim to deliver working software at the end of each sprint. Agile development involves continuous feedback from the customer and the team, and the ability to adapt to changing requirements. Agile practices include daily stand-up meetings, sprint planning, backlog grooming, and retrospective meetings. The Agile Manifesto defines four core values: individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan.

DevOps

DevOps is a culture that emphasizes collaboration, communication, and integration between development and operations teams.
The goal of DevOps is to improve the quality and speed of software delivery by automating processes and reducing the time it takes to go from development to production. DevOps involves a combination of practices, such as continuous integration, continuous delivery, and continuous deployment. DevOps teams work in a continuous cycle of development, testing, deployment, and monitoring. This allows for rapid feedback and the ability to quickly fix issues. DevOps teams also value the use of automation tools, such as configuration management, orchestration, and monitoring tools.

DevOps vs Agile

Agile and DevOps are complementary approaches that share many of the same values and principles. Both aim to deliver high-quality software that meets the needs of the customer. However, there are some key differences between the two approaches. Agile is focused on the software development process, while DevOps is focused on the entire software lifecycle. Agile teams work in short sprints, while DevOps teams work in a continuous cycle. Agile teams often rely on manual testing and deployment, while DevOps teams automate these processes. Agile teams prioritize flexibility and customer collaboration, while DevOps teams prioritize speed and efficiency.
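The automation contrast just described can be made concrete with a toy continuous-deployment gate: a change is promoted only if every automated check passes, with no human gatekeeper in the loop. Function names here are our own illustration; a real pipeline would live in a CI system such as Jenkins or GitHub Actions:

```python
def unit_tests(change):
    # Stand-in for running the test suite against the change.
    return change.get("tests_pass", False)

def lint(change):
    # Stand-in for static analysis on the change.
    return change.get("lint_clean", False)

def run_pipeline(change, checks):
    """Promote a change to production only if every automated check passes."""
    results = {check.__name__: check(change) for check in checks}
    return {"change": change["id"], "checks": results, "deployed": all(results.values())}

good = run_pipeline({"id": "abc123", "tests_pass": True, "lint_clean": True}, [unit_tests, lint])
bad = run_pipeline({"id": "def456", "tests_pass": False, "lint_clean": True}, [unit_tests, lint])
# the first change deploys, the second is blocked by its failing tests
```

The design point is that the deploy decision is computed from the check results, never made ad hoc; that is what lets DevOps teams ship on every green build.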
Agile vs DevOps at a glance:

Aspect        | Agile                                            | DevOps
Focus         | Software development process                     | Deployment, integration, and delivery process
Goals         | Delivering working software frequently           | Speeding up software delivery and feedback
Methodology   | Iterative and incremental                        | Continuous delivery and deployment
Process       | Sprint planning, daily stand-ups, retrospectives | Continuous integration and continuous deployment
Team          | Self-organizing, cross-functional teams          | Collaborative and integrated teams
Communication | Face-to-face communication, regular meetings     | Strong collaboration, communication, and feedback loop
Feedback      | Regular customer feedback and iterations         | Continuous feedback through automated testing and monitoring
Culture       | Empowered and autonomous teams                   | Collaborative, feedback-oriented culture
Tools         | Agile project management tools, issue trackers   | Automated testing, monitoring, and deployment tools

Which Approach Will Win the Battle for Efficiency?

The answer to this question depends on the specific needs and goals of your organization. If your goal is to improve the speed and efficiency of your software delivery, then DevOps may be the better approach. DevOps allows for rapid feedback, quick issue resolution, and automation of manual processes. However, if your goal is to prioritize flexibility, collaboration, and customer satisfaction, then Agile may be the better approach. Agile allows for continuous feedback from the customer and the team, and the ability to adapt to changing requirements. Ultimately, the choice between DevOps and Agile depends on the specific needs and goals of your organization. It is possible to combine elements of both approaches to create a customized approach that works best for your organization.

Conclusion

DevOps and Agile are two popular approaches in software development that aim to improve efficiency and productivity. Both approaches have their own principles, practices, and benefits. The choice between DevOps and Agile depends on the specific needs and goals of your organization.
It is possible to combine elements of both approaches to create a customized approach that works best for your organization.
A few years ago, I ran a survey to figure out what Scrum Masters serving a single Scrum team do all day. Now that we have a new kid, pardon: a new LLM, on the block, I reran the old questionnaire: Club Scrum: What Are You Doing all Day, ChatGPT — as a Scrum Master? Based on the survey results from 2018, the normalized total amount of time spent on Scrum events, educating themselves, or coaching teammates and stakeholders, respectively, was approximately 12 hours per week, which leaves a lot of room for dealing with impediments. Read on and learn what ChatGPT considers to be a typical workload. (Excluding the removal of impediments.)

Setting Up the Interview

I used the same questionnaire from 2018 for this run of the Club Scrum interview and resisted the urge to Grammarly it to maintain comparability. (Please accept my apologies for the text quality.) I ran the "interview" on Friday, February 24, from 6:30 pm CET to 8:15 pm CET on the ChatGPT Feb 13 version.

The Initial ChatGPT Prompt to Explore Club Scrum

I used the following initial prompt to prime ChatGPT before I started asking the questions: I want you to act as an experienced Scrum Master, serving a single Scrum team. Your organization is familiar with Scrum and supports its Scrum teams fully. All stakeholders in the organization understand and support that your team is using Scrum. I will interview you in your capacity as a Scrum Master of said Scrum team to find out how much time you allocate to typical Scrum Master duties.

ChatGPT: Sure, I'd be happy to help you as an experienced Scrum Master for a Scrum team. I can give you an idea of how much time I typically allocate to the various Scrum Master duties. As a Scrum Master, my primary responsibility is to facilitate the Scrum process and help the team maximize its potential.
This includes the following key responsibilities:

Facilitating Scrum Events: I allocate a significant amount of time towards facilitating Scrum events like daily stand-ups, sprint planning, sprint review, and sprint retrospectives. These events are crucial for the team's progress, and I ensure that they are conducted efficiently and effectively.

Coaching and Mentoring: I also spend a significant amount of time coaching and mentoring the team. I help them understand the Scrum framework, agile principles, and values. Additionally, I work with them to identify and implement best practices and continuously improve their skills and capabilities.

Removing Impediments: One of my critical roles as a Scrum Master is to identify and remove any impediments that may be hindering the team's progress. I allocate a significant amount of time towards identifying and resolving these issues, working with stakeholders to remove roadblocks and improving the team's overall performance.

Facilitating Collaboration: I ensure that collaboration is encouraged among team members, and they work together to deliver a quality product. I encourage team members to share knowledge and expertise and ensure that they work together to achieve the sprint goal.

Monitoring Progress: As a Scrum Master, I monitor the team's progress and ensure that they are on track to deliver the sprint goal. I also track progress against the product backlog and help the team adapt and adjust if they fall behind.

Overall, the amount of time I allocate to each of these duties varies depending on the team's needs and the project's complexity. However, I would say that I spend most of my time facilitating Scrum events, coaching and mentoring the team, and removing impediments to help the team achieve its goals.

Comment: Okay, I did not ask for the answer, but here we go.
Club Scrum: The Questions

I skipped questions 1 — "How many people are on your scrum team?" — and 2 — "What is your Sprint length?" — for obvious reasons: ChatGPT does not have a Scrum team it serves as a Scrum Master. Let us assume that ChatGPT's Scrum team comprises seven members and runs a 2-week Sprint to preserve comparability with the 2018 results.

Product Backlog Refinement

Question 3: "How much time do you spend weekly on product backlog refinement sessions?"

ChatGPT: As a Scrum Master, I believe that the Product Backlog Refinement (PBR) is an essential Scrum event that helps the team to refine the product backlog and prepare the upcoming backlog items. Typically, the amount of time spent on PBR sessions varies depending on the team's needs and the complexity of the product backlog items. In my experience, I allocate approximately 10-15% of the team's capacity to PBR sessions. This means that for a team with a 2-week sprint, I would schedule a 2-hour PBR session each week, or for a team with a 1-week sprint, I would schedule a 1-hour PBR session each week. However, it's important to note that this is just a rough estimate, and the actual time spent on PBR sessions may vary depending on the team's needs. For example, if the team is working on a complex project with a lot of new requirements, we might need to allocate more time to PBR sessions. Similarly, if the product backlog items are well-defined and the team is working on a straightforward project, we might need to allocate less time to PBR sessions. In any case, my goal as a Scrum Master is to ensure that the team has enough time to refine the product backlog effectively and prepare for upcoming backlog items while also ensuring that we do not spend too much time in meetings and lose valuable development time.

Comment: The 10-15% figure seems to be taken straight from the Scrum Guide 2017. Although this guidance is missing from the Scrum Guide 2020, it is still a good rule of thumb.
Given that the 2-week Sprint proved most popular during the 2018 survey, we take a 2-hour session every week, totaling 4 hours for the Sprint. Sprint Planning Club Scrum Question 4: “How much time do you spend per sprint on sprint planning?” ChatGPT: As a Scrum Master, I believe that the Sprint Planning event is a crucial part of the Scrum framework that helps the team to plan the upcoming sprint and identify the work that they will be delivering. The amount of time spent on Sprint Planning varies depending on the team’s needs and the complexity of the sprint backlog items. In my experience, I allocate approximately 5-10% of the team’s capacity to Sprint Planning. This means that for a team with a 2-week sprint, I would schedule a 4-hour Sprint Planning session, or for a team with a 1-week sprint, I would schedule a 2-hour Sprint Planning session. However, it’s important to note that the actual time spent on Sprint Planning may vary depending on the team’s needs. For example, if the team is working on a complex project with a lot of new requirements, we might need to allocate more time to Sprint Planning. Similarly, if the sprint backlog items are well-defined and the team is working on a straightforward project, we might need to allocate less time to Sprint Planning. In any case, my goal as a Scrum Master is to ensure that the team has enough time to plan the upcoming sprint effectively while also ensuring that we do not spend too much time in meetings and lose valuable development time. I also make sure that the team has a clear understanding of the sprint goal and the product backlog items that they will be working on during the sprint. Comment: First of all, there are no Sprint Backlog items; we call any work item a ‘Product Backlog item.’ Also, a “straightforward project” is probably not Scrum’s best use case: if you know what needs to be accomplished, Scrum’s risk mitigation approach comes with a hefty price tag in the form of overhead. 
The last sentence on ensuring the Scrum team understands the Sprint Goal is too project-management-ish; the Scrum team created the Sprint Goal in the first place. Anyway, 4 hours for a 2-week Sprint sounds reasonable.

Daily Scrum

Question 5: “How much time do you spend weekly on daily scrums or standups?”

ChatGPT: As a Scrum Master, I believe that the Daily Scrum or Daily Stand-up event is a critical part of the Scrum framework that helps the team to synchronize their activities and make sure that they are aligned towards the sprint goal. The amount of time spent on Daily Scrum or Stand-up varies depending on the team’s needs and the number of team members. In my experience, I allocate approximately 15-30 minutes for the Daily Scrum or Stand-up. This means that for a team with a 2-week sprint, I would schedule a 15-minute Daily Scrum or Stand-up each day, or for a team with a 1-week sprint, I would schedule a 15-30 minute Daily Scrum or Stand-up each day. However, it’s important to note that the actual time spent on Daily Scrum or Stand-up may vary depending on the team’s needs. For example, if the team is working on a complex project with a lot of dependencies, we might need to allocate more time to the Daily Scrum or Stand-up. Similarly, if the team is small and the project is straightforward, we might need to allocate less time to the Daily Scrum or Stand-up. In any case, my goal as a Scrum Master is to ensure that the team has enough time to discuss their progress, identify any potential impediments, and plan their activities for the upcoming day. I also make sure that the Daily Scrum or Stand-up remains focused on the sprint goal and that team members do not spend too much time discussing non-relevant issues.

Comment: Scrum orthodoxy tells us that the time box for the Daily Scrum is always a maximum of 15 minutes. However, ChatGPT even considers allocating more time in the case of complex work. Do I sense some heresy here? Nevertheless, 1.25 hours per week it is.
Sprint Review Club Scrum Question 6: “How much time do you spend per sprint on the sprint review?” ChatGPT: As a Scrum Master, I believe that the Sprint Review event is an essential part of the Scrum framework that helps the team to showcase their work to stakeholders and get feedback on the product increment. The amount of time spent on the Sprint Review varies depending on the team’s needs and the complexity of the product increment. In my experience, I allocate approximately 5-10% of the team’s capacity to the Sprint Review. This means that for a team with a 2-week sprint, I would schedule a 2-hour Sprint Review session, or for a team with a 1-week sprint, I would schedule a 1-hour Sprint Review session. However, it’s important to note that the actual time spent on the Sprint Review may vary depending on the team’s needs. For example, if the team is working on a complex project with a lot of new features, we might need to allocate more time to the Sprint Review. Similarly, if the product increment is small, we might need to allocate less time to the Sprint Review. In any case, my goal as a Scrum Master is to ensure that the team has enough time to showcase their work effectively and get feedback from stakeholders. I also make sure that the Sprint Review remains focused on the product increment and that team members do not spend too much time discussing non-relevant issues. Finally, I ensure that the team identifies any action items that need to be addressed in the upcoming sprint. Comment: ChatGPT non calculat. 5-10% of the Scrum team’s capacity does not equal 2 hours per Sprint, provided they have a typical working week. Admittedly, I am confused about the last sentence: what action items is ChatGPT referencing? Typically, we would conclude that, depending on the outcome of the Sprint Review, the Scrum team may decide on adapting the Product Backlog. Would that be the action item? Nevertheless, let’s pick the 2 hours for the Sprint Review. 
Sprint Retrospective

Question 7: “How much time do you spend per sprint on the retrospective?”

ChatGPT: As a Scrum Master, I believe that the Sprint Retrospective event is a crucial part of the Scrum framework that helps the team to reflect on their performance during the sprint and identify ways to improve their processes. The amount of time spent on the Sprint Retrospective varies depending on the team’s needs and the number of team members. In my experience, I allocate approximately 5-10% of the team’s capacity to the Sprint Retrospective. This means that for a team with a 2-week sprint, I would schedule a 2-hour Sprint Retrospective session, or for a team with a 1-week sprint, I would schedule a 1-hour Sprint Retrospective session. However, it’s important to note that the actual time spent on the Sprint Retrospective may vary depending on the team’s needs. For example, if the team is working on a complex project with a lot of challenges, we might need to allocate more time to the Sprint Retrospective. Similarly, if the team is small and the sprint went smoothly, we might need to allocate less time to the Sprint Retrospective. In any case, my goal as a Scrum Master is to ensure that the team has enough time to reflect on their performance and identify areas for improvement. I also make sure that the Sprint Retrospective remains focused on the sprint and that team members do not spend too much time discussing non-relevant issues. Finally, I ensure that the team identifies concrete action items that they will implement in the upcoming sprint to improve their processes.

Comment: ChatGPT non calculat — again. 5-10% of the Scrum team’s capacity does not equal 2 hours per Sprint, provided they have a normal working week. But let’s not split hairs here; we take 2 hours for the Sprint Retrospective.
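For what it's worth, the capacity mismatch called out in the comments above is easy to verify with back-of-the-envelope arithmetic. The sketch below assumes seven team members and 40-hour working weeks; both figures are my assumptions for illustration, not survey data:

```python
# Does "5-10% of the team's capacity" really equal a 2-hour session?
# Assumptions (illustrative): 7 team members, 40-hour weeks, 2-week Sprint.
TEAM_SIZE = 7
HOURS_PER_WEEK = 40
SPRINT_WEEKS = 2

sprint_capacity = TEAM_SIZE * HOURS_PER_WEEK * SPRINT_WEEKS  # 560 hours

five_percent = 0.05 * sprint_capacity   # 28.0 hours
ten_percent = 0.10 * sprint_capacity    # 56.0 hours

print(f"Sprint capacity: {sprint_capacity} hours")
print(f"5% of capacity:  {five_percent} hours")
print(f"10% of capacity: {ten_percent} hours")
# Even the low end (28 hours) is an order of magnitude above the
# 2-hour session ChatGPT pairs with its percentage guidance.
```

Under these assumptions, a single 2-hour event is well below 1% of the Sprint's total capacity, which is the inconsistency the comments flag.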
Training and Supporting Individual Team Members

Club Scrum Question 8: “How much time do you spend in total per week on training or supporting individual team members?”

ChatGPT: As a Scrum Master, I believe that training and supporting individual team members is an essential part of my role in helping the team to grow and improve. The amount of time spent on training or supporting individual team members varies depending on the team’s needs and the skills and knowledge of the team members. In my experience, I allocate approximately 1-2 hours per week for training or supporting individual team members. This may include coaching sessions, one-on-one meetings, or providing feedback on work products. However, the actual time spent may vary depending on the needs of the team members and the complexity of the project. It’s also important to note that training and supporting individual team members is not the sole responsibility of the Scrum Master. The entire team should work together to help each other grow and develop their skills. As such, I encourage team members to collaborate and share knowledge and experience with each other. In summary, while the amount of time spent on training and supporting individual team members may vary, I believe it is important to dedicate enough time to ensure that team members have the resources they need to succeed and foster a culture of continuous improvement.

Comment: I like that ChatGPT points to learning and sharing knowledge as a collective responsibility of everyone on the Scrum team. I also support the generous suggestion on how much time the Scrum Master should spend on individual team members. Whether they will appreciate that effort remains to be seen. Since we have seven team members, this equals seven hours of weekly individual coaching.
Training and Supporting Stakeholders Question 9: “How much time do you spend in total per week on training or supporting stakeholders from your organization?” ChatGPT: Based on my experience as a Scrum Master, the amount of time I spend on training and supporting stakeholders from my organization varies depending on the specific needs of the stakeholders and the project. On average, I would allocate approximately 2-4 hours per week to provide training and support to stakeholders, such as the Product Owner, management, or other teams working on related projects. This time could be spent on activities such as explaining the Scrum framework and its benefits, facilitating meetings, providing guidance on the product backlog, and helping to remove any obstacles that are hindering progress. It’s important to note that the amount of time I spend on training and supporting stakeholders may vary based on the specific needs of the stakeholders and the project. For example, if we’re implementing Scrum for the first time in the organization, I may spend more time on training and support to ensure that everyone understands the framework and its benefits. In summary, the total time I invest in training and supporting stakeholders from my organization varies based on the specific needs of the stakeholders and the project. However, on average, I would allocate approximately 2-4 hours per week to provide this training and support. Comment: It is a bit confusing that ChatGPT includes the Product Owner among the stakeholders. This is probably another manifestation of an LLM’s inability to “unlearn” outdated information from its training session. Also, I would not consider the removal of impediments to be part of this category of Scrum Master activities. We go with two hours of weekly coaching and training for stakeholders, which seems a reasonable estimate. 
Staying Up-to-Date Club Scrum Question 10: “How much time do you spend in total per week to learn from books, blogs, newsletters, or from other scrum masters, for example, in communities?” ChatGPT: As a Scrum Master, I believe that continuous learning and improvement are critical for the success of the Scrum team and the project. Therefore, I allocate time every week for learning from books, blogs, newsletters, and other sources. In my experience, I allocate approximately 2-3 hours per week for learning and professional development. This may include reading books or articles on Scrum, attending webinars or training sessions, participating in Scrum communities or forums, or networking with other Scrum Masters. The amount of time I spend on learning may vary depending on the complexity of the project or the specific challenges that the Scrum team is facing. For example, if the team is working on a new technology or framework, I may spend more time learning to ensure that I can provide the necessary support to the team. It’s important to note that as a Scrum Master, my role is to continuously learn and stay up-to-date with the latest practices and techniques in Scrum. By doing so, I can provide valuable insights and guidance to the Scrum team and help them overcome any challenges they may face. In summary, while the amount of time spent on the learning and professional development may vary, I believe that dedicating enough time to learning is critical for the success of the Scrum team and the project. Comment: Then let’s take two hours per week. Scrum Master Duties: Doing the Math Let’s start with the aggregated results from the 2018 Club Scrum survey: A typical Scrum Team based on the 2018 survey had more than seven members and ran a two-week sprint. 
On average, the team’s Scrum Master allocated their time roughly as follows, normalized to hours per week:

- Product Backlog refinement: 1.00 hours/week
- Sprint Planning: 0.75 hours/week
- Daily Scrums: 1.50 hours/week
- Sprint Review: 0.50 hours/week
- Sprint Retrospective: 0.75 hours/week
- Training of teammates: 3.00 hours/week
- Training of stakeholders: 2.00 hours/week
- Learning: 2.00 hours/week

If you do the math, the total amount of time spent on Scrum events, educating themselves, or coaching teammates and stakeholders was approximately 11.50 hours per week. Now compare these results from 2018 to the ChatGPT results from 2023 (I chose the lower values when ChatGPT provided a range):

- Product Backlog refinement: 2.00 hours/week
- Sprint Planning: 2.00 hours/week
- Daily Scrums: 1.25 hours/week
- Sprint Review: 1.00 hours/week
- Sprint Retrospective: 1.00 hours/week
- Training of teammates: 7.00 hours/week
- Training of stakeholders: 2.00 hours/week
- Learning: 2.00 hours/week

ChatGPT’s suggestion totals 18.25 hours/week.

Conclusion

Many of the answers are repetitive in nature; for example, distinguishing between complex and straightforward projects. Despite the initial prompt, many of the answers breathe project management ideas. Again, we face the problem of not knowing how the LLM weighs content during its training and whether unlearning is even possible. I prompted ChatGPT with, “When you learned about the Scrum Guide 2020, did you remove or unlearn the obsolete information you received with the Scrum Guide 2017?” This is ChatGPT’s answer: “As an AI language model, I don’t “unlearn” or “remove” information like humans do. Instead, my training involves continually updating and adjusting the patterns and relationships I learn from the vast amounts of text data that I process. Therefore, when I was trained on the Scrum Guide 2020, my training data included information from the Scrum Guide 2017, as well as other related sources.
However, since the Scrum Guide 2020 introduced changes and updates to the Scrum framework, I have been trained on this new information, which has allowed me to generate responses that reflect the current state of the Scrum methodology. Therefore, while I have not “unlearned” or “removed” information related to the Scrum Guide 2017, I am able to incorporate the new information from the Scrum Guide 2020 and provide up-to-date insights on the Scrum framework.” Again, this reply makes it evident that using ChatGPT for everyday purposes requires a solid knowledge of the matter in question.
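As a final sanity check on the "Doing the Math" section above, the two weekly totals can be reproduced with a few lines of Python; the figures are exactly the ones quoted in the article, and the script just sums them:

```python
# Weekly hours per Scrum Master duty: 2018 Club Scrum survey vs. ChatGPT 2023
# (lower bounds taken where ChatGPT gave a range, as in the article).
survey_2018 = {
    "Product Backlog refinement": 1.00,
    "Sprint Planning": 0.75,
    "Daily Scrums": 1.50,
    "Sprint Review": 0.50,
    "Sprint Retrospective": 0.75,
    "Training of teammates": 3.00,
    "Training of stakeholders": 2.00,
    "Learning": 2.00,
}
chatgpt_2023 = {
    "Product Backlog refinement": 2.00,
    "Sprint Planning": 2.00,
    "Daily Scrums": 1.25,
    "Sprint Review": 1.00,
    "Sprint Retrospective": 1.00,
    "Training of teammates": 7.00,
    "Training of stakeholders": 2.00,
    "Learning": 2.00,
}

print(f"2018 survey total:   {sum(survey_2018.values())} hours/week")   # 11.5
print(f"ChatGPT 2023 total:  {sum(chatgpt_2023.values())} hours/week")  # 18.25
```

The roughly seven extra hours in the 2023 column come almost entirely from ChatGPT's generous allowance for coaching individual teammates.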
Overview of Technical Debt

"Technical debt is a metaphor commonly used by software professionals in reference to short-term compromises made during the design, development, testing, and deployment processes."

To stay competitive, many organizations opt for software development methodologies, like Agile, to accelerate their overall development processes. Cramped release schedules often force teams to skip standard best practices, resulting in the accumulation of technical debt. Technical debt is given low priority during rapid release cycles and is typically only addressed around the production release. Organizations often push large, complex changes to speed up the release process. Short-term compromises are acceptable to a certain extent. However, long-term debt can damage an organization's IT infrastructure and reputation, and it sometimes comes with a heavy penalty of re-engineering and post-release fixes. These damages can take the form of high costs for:

- Remediating pending technical debt
- Customer dissatisfaction due to scalability and performance issues
- Increased hiring and training
- Increased modernization time

The cost of refactoring, re-engineering, rebasing, and re-platforming could be much higher than the original cost of the initial development. Therefore, these compromises should be thoroughly analyzed and approved by IT stakeholders and CXOs, looking at the future tradeoffs, risk appetite (risk capacity), and cost. Organizations also need to evaluate the pros and cons of technical debt decisions. Taking on technical debt can be both tricky and risky, so organizations must factor in the associated risks and operational costs. One of the consequences of technical debt is the implied cost of reworking applications and their architecture.
It arises when organizations choose easy development paths and limited solutions to shorten production time. If the technical debt is not addressed over time, the accrued interest makes it more difficult to implement changes, resulting in business and technical challenges. A Scandinavian study reveals that developers, on average, waste 23% of their time due to technical debt. As if that wasn't alarming enough, Stripe published data showing that, on average, software developers spend 42% of their workweek dealing with technical debt and bad code.

Major Drivers of Technical Debt

- The need for faster solution design processes
- Faster development of source code
- Quick releases
- Cutthroat business competition to release new and unique features early in the market

Impact of Accumulating Technical Debt

- It results in daily operational costs to accommodate remediation.
- A longer development cycle leads to slower application releases.
- It incurs long-term financial loss as technical debt accumulates.
- It may result in compliance issues and a lack of proper standards.
- Code quality and design get compromised.
- More time is spent on debugging rather than development.
- It may result in failures that can put an organization's reputation at risk.
- It can be a cause of security breaches and hefty fines.
- It can potentially lead to a loss of agility and lower productivity due to outages.

Types of Technical Debt

Design/Architecture Debt

This represents design work with backlogs, which may include a lack of design-thinking processes, UI bugs, and other neglected design flaws. Many organizations skip standard design practices like The Open Group Architecture Framework (TOGAF) due to the agile way of designing. Tools and techniques like the ADM and TOGAF implementation governance provide the required format and standard of solution design.

Code Debt

This is the most common type of debt, introduced when coding best practices are skipped due to speedy agile delivery, complexity, or a lack of subject knowledge.
In some cases, new features are added in the latest version that the dev team may not be aware of. This might result in the dev team working on the same feature again, causing unnecessary cost and time investment. Sometimes, the development team doesn't follow standard coding best practices and may use quick workarounds. They might also skip refactoring because of time-bound release cycles.

NFR/Infrastructure Debt

This debt is introduced while designing and implementing Non-Functional Requirements (NFRs). For example:

- Inaccurate scalability configuration may crash applications under high load.
- Improper availability planning leads to outages when a data center is down.
- Inaccurate caching and logging lead to slower application performance.
- Repetitive error/exception-handling code creates refactoring and performance issues.
- Excessive auditing and tracing may cause performance issues and occupy unnecessary database storage.
- Security ignorance may lead to serious data breaches and financial loss.
- Improper observability and monitoring may fail to raise timely alerts for major application and infrastructure issues.

Testing Debt

The pressure of quick, agile releases may force organizations to skip most manual and automated testing scenarios. Frequent unit tests and detailed end-to-end integration tests can catch major production issues; when these detailed tests are skipped during the development phase, major production bugs follow.

Process Debt

This debt is introduced when a few less important business and technical process steps are skipped. In agile development, for example, many processes are followed, like sprint planning, Kanban, Scrum, and retro meetings, along with project management processes such as those from the Capability Maturity Model (CMM) and the Project Management Institute (PMI). Sometimes these processes are not followed religiously due to time constraints, which may have a severe impact later.
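The repetitive error/exception-handling code mentioned under NFR debt above is one kind of debt that is often cheap to pay down. Below is a minimal, hypothetical sketch of factoring that duplication into a single reusable decorator; the function name, fallback value, and simulated failure are illustrative, not from any particular codebase:

```python
import functools
import logging

def with_error_handling(fallback=None):
    """Factor repeated try/except/log blocks into one reusable decorator."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                # One shared place to log failures instead of N copy-pasted blocks.
                logging.exception("Error in %s", func.__name__)
                return fallback
        return wrapper
    return decorator

# Instead of duplicating the same try/except in every service call:
@with_error_handling(fallback=[])
def fetch_orders(customer_id):
    raise ConnectionError("backend unavailable")  # simulated failure

print(fetch_orders(42))  # []
```

Consolidating the handling like this also makes later changes (say, adding metrics or retries) a one-line edit rather than a codebase-wide sweep, which is exactly the refactoring cost the debt metaphor describes.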
Defect Debt It is introduced when minor technical bugs are skipped during the testing phase, like frontend UI cosmetic bugs, etc. These low-severity bugs are deferred to the following releases, which may later have an impact in the form of production bugs. These production bugs spoil an organization's reputation and profit margin. Documentation Debt It is introduced when some of the less important technical contents in the document are skipped. Improper documentation always creates an issue for customers and developers to understand and operate after the release. In addition, the engineering team may not properly document the release and feature details due to quick release schedules. As a result, users find it difficult to test and use new features. Known or Deliberate Debt Known or deliberate debt is injected on purpose to accelerate releases. This acceleration is achieved by workarounds or alternate methods or technologies that use simple algorithms. For example, sometimes, the dev team does not evaluate and consider better algorithms to avoid cyclomatic code complexity in the source code. As a result, it reduces the performance of the code. Unknown Outdated/Accidental Debt It is introduced unknowingly by developers/designers and other stakeholders. It is sometimes introduced by regression of other related code changes, independent applications, and libraries. For example, if all applications use the same error-handling library code and if there is a regression issue in that error-handling library, it may impact all dependent applications. Bit Rot Technical Debt According to Wired, it involves "a component or system slowly devolving into unnecessary complexity through lots of incremental changes, often exacerbated when worked upon by several people who might not fully understand the original design." In practice, many old and new engineers work on the same module code without knowing the background details of the code. 
New engineers may rewrite or redesign code without understanding the initial design and background. It may cause complications like regression issues, etc. This happens over time, and it should be avoided. Causes of Technical Debt Business Competition Competitive business markets may force organizations to roll out frequent feature releases to outperform their competitors and keep the customers interested. Time Constraints Due to Agile Releases With tighter deadlines, the development team doesn't have enough time to follow all coding/design standards, such as language-specific coding standards, TOGAF enterprise design, suitable design patterns, review, testing/validation, and other best development practices. Save Short-Term Cost Some organizations want to develop and release features faster to save additional development costs on coding and design effort. Therefore, they may prefer employing a small development team for faster releases with minimal short-term costs. These organizations may also hire junior or unskilled developers for more profit margin. Lack of Knowledge and Training The development team may change very frequently during exit, internal movement, and new hiring. Faster release cycles may result in undertrained resources due to a lack of functional or technical training and little to no knowledge transfers about product and design. Improper Project Planning Tighter release schedules may result in improper project planning, which plays a major role in introducing technical debt and, for example, skipping important meetings with all business stakeholders or project planning phases. Complex Technical Design and Technical Solution The development teams prefer simple technical designs and solutions over complex ones because they don't want to spend more time and effort understanding complex algorithms and technical solutions. Complex solutions take more time to understand and implement. They also need more POC evaluation and effort. 
Poor Development Practices

Many development teams take shortcuts by following poor development practices. Due to aggressive release timelines and a lack of knowledge, dev teams don't follow standard coding and design practices.

Insufficient Testing

This is a major contributor to technical debt. Regular unit and integration testing, even for a small code change, is very important. Testing and validation are the only mechanisms for identifying bugs and shortfalls, both technical and functional, in software applications. Insufficient testing can therefore directly introduce technical debt.

Delayed Refactoring

Tight deadlines may force development teams to give refactoring low priority in the early stages. Hence they defer and delay code refactoring to prioritize quick releases.

Constant Change

'Change is the only constant.' Software applications evolve and adopt new designs and technologies over time. It's hard to keep up with these constant changes in parallel; it takes time to revisit the source code and adopt the latest designs and technologies.

Outdated Technology

Many traditional organizations use outdated technologies. They decide late to upgrade, or defer modernization altogether. As a result, they miss a lot of modern features, and that gap itself becomes technical debt, which can be mitigated only by shifting to modern technologies.

No Involvement and Mentoring by Senior Developers and Architects

It's very common to have little or no involvement of senior developers and architects during design and development. Senior mentors play an important role in guiding the development team to avoid technical debt. In addition, those senior developers and architects may have a better understanding of, and more experience working on, the same project or software applications.

Identifying and Analyzing Technical Debt

User feedback: User feedback and reviews are very important in identifying technical debt and mitigating it.
Organizations should listen and act on users' feedback to drive improvements and handle bugs; this feedback and these bugs count as technical debt.

Analyze bad code smells: Use manual and automated code review to catch bad code smells, such as memory leaks in JVM applications. Many code analyzers, like SonarQube, PMD, FindBugs, Checkstyle, etc., can help, and they can be integrated into automated CI/CD build and deployment pipelines for every release.

Monitoring and observability tools: Application Performance Monitoring (APM) tools, such as VMware Wavefront/Tanzu Observability, Dynatrace, and Datadog, are the best tools for continuously monitoring software applications. They have special algorithms to check the performance of applications and the underlying infrastructure. They also analyze application logs and generate failure-reason reports, which are a great source for identifying technical debt.

Manual and automated code review: Continuous manual and automated code review processes help identify technical debt using static and automated code analyzers.

Operational profit and loss analysis: This analysis is done by the business and senior CxOs. They analyze operational cost (OpEx) and loss reports, which give a fair idea of where to improve and which important technical debt to address quickly. Addressing this technical debt matters for any organization because it impacts business revenue.

Performance metrics: APM and load-testing tools also generate performance reports for software applications under high load. This is the best way to identify and mitigate technical debt rooted in NFR configurations, such as application and infrastructure performance, read caching, availability, and scalability.

Understand long-term and short-term requirements: Organizations identify technical debt by understanding long-term and short-term technical requirements.
Accordingly, they prioritize, plan, and remediate. These requirements are prioritized based on business criticality and urgency.

Review against the latest industry-standard best practices: Some technical debt can be identified by comparing current practice with the latest industry design and software development standards, such as Agile, TDD, BDD, Scrum, Kanban, cloud-native, microservices, micro frontends, and TOGAF.

Code refactoring tools and techniques: Modern tools are available that can analyze legacy monolithic apps and suggest, or partially perform, refactoring toward a modern cloud-native microservices design. They also provide tooling to migrate on-prem VMs (Virtual Machines) to cloud VMs with an easy lift-and-shift rebase.

Security analysis: Some security-related technical debt is identified during the security analysis phase. Security analysis tools such as Checkmarx and SonarQube generate security reports for applications. In addition, there are infrastructure security tools like VMware Carbon Black endpoint security, RSA, and Aqua Security's Clair scanner.

Best Practices to Avoid Technical Debt

To reduce technical debt, it's essential to analyze and measure it. You can calculate technical debt by using remediation and development costs as parameters. These are a few techniques to avoid technical debt:

- Remediate application technical debt by using feedback.
- Religiously follow consistent code review practices.
- Have multiple rounds of manual code and design reviews by senior developers and architects.
- Perform automated testing after every build and release.
- Monitor and analyze reports from observability and monitoring tools.
- Analyze and evaluate the performance and business impact of any new code and design change before implementation.
- Follow standard coding best practices.
- Run manual and automated static code reviews for every release.
- Use incident management and an issue tracker to report and track bugs.
- Always review and validate the solution architecture before implementation.
- Run static and dynamic code analysis using code analyzer tools like SonarQube, PMD, FindBugs, etc.
- Follow an agile, iterative development approach and hold regular retrospective meetings. Also, measure technical debt in each iteration.
- Use project management tools like Jira, Trello, etc.
- Refactor legacy code.
- Regularly revisit code and modularize common code components.
- Strictly follow a test-driven development (TDD) and behavior-driven development (BDD) approach for every module of code.
- Continuously build, integrate, test, and validate on every release.
- Last but not least, document, measure, and prioritize technical debt.

Estimating Technical Debt Cost

It's very important to measure the cost of technical debt, as it helps stakeholders and senior management analyze and prioritize remediation. It should be a measurable number that supports business decisions and helps track the status of technical debt remediation. There are many measurable variables for calculating technical debt, and various tools are available, like SonarQube, to check code quality, code complexity, lines of code, etc. We can calculate technical debt as the ratio of the cost to fix a software system (remediation cost) to the cost of developing it (development cost). This ratio is called the Technical Debt Ratio (TDR):

Technical Debt Ratio (TDR) = (Remediation Cost / Development Cost) x 100%

A good TDR is <= 5%. A high TDR indicates poor code quality, which implies higher remediation costs. Optionally, remediation cost (RC) and development cost (DC) can be expressed in hours instead, which lets you state remediation time as total effort in hours.

Key Takeaways

These are some key points about the cost of technical debt:

- The average organization wastes 25% to 45% of its development cost.
- Hiring and training new engineers involves additional costs and increased coordination costs.
- Operational overhead grows, with 15% to 20% spent on unplanned work.
- Organizations' revenue is impacted by additional, unplanned work.
- Time is wasted analyzing how to improve source code and design.
- Productivity drops to around 60% to 70%.
- Project management and modern tooling add cost.

Conclusion

Technical debt can impact factors such as overall operations cost, velocity, and product quality, and can easily end up hurting a team's productivity and morale. Hence, avoiding technical debt, or addressing it at the right intervals during the development process, is the best way forward. We hope this blog gives you a better understanding of technical debt and the best practices for remediating it.
After launching an MVP, startups are often faced with a daunting question: "Now what?" In this article, we'll share everything we've learned from our own experience on what to do after you've launched your MVP. We'll also explain how to measure its success using metrics and feedback indicators. But first, let's look at the reasons why every startup should build an MVP.

What Is an MVP and Why Should You Create One?

A minimum viable product (MVP) is a barebones version of your product, designed to satisfy the basic needs of your target audience. Defined by its skeletal feature set and limited functionality, the purpose of an MVP is to help you find a product-market fit. In our experience, building an MVP is the most efficient way to achieve two major startup objectives: establish a market presence and gather feedback from early adopters.

Why It Pays to Start Small

Every new enterprise is susceptible to running out of cash, building a product that nobody wants, or losing out to the competition. It's impossible to eliminate all risks in a startup environment, but there are some steps you can take to minimize the chance of failure. At Railsware, we leverage the Lean Startup agile approach to give our products the best shot at success. We always start with the MVP, since it helps us:

Find a product-market fit. The size and simplicity of the MVP make it ideal for testing assumptions about user problems and user experience. It enables us to stay flexible and adapt our solution to user needs early in the product development process.

Increase our speed to market. It's much faster to build an MVP than a full-fledged product, thanks to the MVP's reduced scope and limited functionality. Starting small gives us a competitive advantage since we're not attempting to create a "perfect" product from the outset. Depending on the project, we aim to launch a barebones solution in a matter of weeks or months.

Conserve limited resources.
Creating an MVP is a cost-effective way to test product hypotheses. It prevents us from investing too much time, effort, and capital into the solution before we've confirmed whether people want it. For reference, we typically spend about 30-50% of our overall budget on developing the MVP and save the rest for further development and promotional efforts.

Our product team at the MVP stage typically consists of a product manager, product designer(s), and a small group of developers. For specific challenges, it can be enhanced with data analysis, quality assurance, and marketing specialists. The team gradually expands as we move beyond MVP testing and start growing the product. For the sake of making this easier to grasp, here's what the product development process looks like at Railsware:

But what does "product growth" actually look like after the MVP stage? To understand how an MVP evolves into a full-fledged product, we must explore the latter's two most common variations: the minimum marketable product and the minimum lovable product. Although these concepts are sometimes used to replace the MVP entirely, we actually consider them to be the next steps in the product pipeline.

Minimum Marketable Product (MMP)

The MMP is a bare-bones version of your product that is good enough to attract paid users. It includes key changes or additions which have been implemented based on feedback from the MVP stage. More often than not, the MVP serves as the foundation of the MMP. If the MVP is a stripped-down representation of your product idea, the MMP is its savvier, more confident cousin. It may have increased stability and functionality, but most importantly, the MMP has a billing system. This payment functionality is essential for testing whether or not users are ready to pay for your product. You can start offering discounts to early adopters or running A/B tests on your target audience to check how much they are willing to pay for your solution.
Minimum Lovable Product (MLP)

As the name suggests, the goal of the MLP is to give your target audience a product that combines lovable (minimum) features with an enjoyable UX. While it doesn't have all the bells and whistles of a full-fledged product, the MLP has a strong value proposition and fewer kinks than its MVP/MMP predecessors. The MLP may include one or two new features requested by early adopters, and/or improved functionality. At the very least, it's easy to use, has an impressive UI, and covers all of the users' pains. Unlike its variations, the MLP sets out to turn enthusiastic early adopters into loyal customers, especially in markets where competition is fierce. The "cat food" analogy from Brian De Haaff, author of Lovability, explains how the MVP falls short of the mark in this respect. He says "(although) you could eat a can of cat food if you really had to, it is unlikely you would be clamoring for a second serving." So, while the MVP gets the job done, the MLP hooks users and keeps them coming back for more.

Practical Steps to Take After Releasing an MVP

While it might be tempting to jump straight into MMP or MLP development after launching your MVP, we definitely don't recommend it. Now is the time to step back, review your progress, and take organized action to increase your MVP's chances of success. Here are some of our suggestions on what to do after launching your MVP.

Promote the MVP

Your target audience won't be able to test the product unless they know it exists. So let's look at some cost-effective ways to get eyes on your MVP fast:

Launch on an enterprise marketplace. If it makes sense to release your MVP as an add-on, then marketplaces such as Atlassian, Microsoft Azure, or Google Workspace Marketplace are excellent springboards. They lend your solution credibility, come with a built-in audience, and allow you to monetize quickly. In fact, we grew two of our own startups (Coupler.io and Smart Checklist) using this strategy.
Submit to startup platforms and deals websites. Listing your MVP on platforms like AppSumo or Product Hunt is one of the best ways to reach a SaaS-oriented audience. Users of these platforms are more likely to fall into the innovator customer segment; they are more open to experimenting with new products and more forgiving of bugs and kinks.

Self-distribute via forums/social media. Suggest your product in Hacker News/Reddit/Quora comment threads where people are experiencing a problem your MVP can solve. Founders or product owners with large social media followings can also benefit from sharing the product (and requesting feedback) on channels like Twitter/LinkedIn. Pieter Levels is just one example of an entrepreneur who has mastered this approach to MVP promotion.

Gather User Feedback

After you've promoted the product in the right channels and experienced some traction (i.e., an increasing number of signups and active users), it's time to request feedback from early adopters. The goal is to find out what their pain points are as quickly as possible and start using that data to inform iterations on the product. We collect this information via surveys, customer support interactions, and online forums. For example, when we were testing assumptions about our product Mailtrap, we used tools such as Typeform, Twitter, and UserVoice to gather feedback. Overall, this helps us pinpoint what customers like and dislike, and what they want to see, but it doesn't give us the full picture. That's why we always conduct customer development interviews shortly after launching the MVP.

Conduct Customer Development Interviews

When it comes to learning more about the needs, motivations, and expectations of customers, nothing beats sitting down and talking to them. Customer development (CustDev) interviews are 1:1 online meetings between a product manager and an active user.
During those sessions (which last anywhere from 30 minutes to an hour), we ask customers open-ended questions about their interactions with our MVP, such as their impression of the user experience or what kind of functionality they feel is missing. We take detailed notes and combine our interview findings into a spreadsheet. Analyzing correlations in the responses helps us figure out what to improve, what to drop, and which direction will bring us closer to a product-market fit.

Run a Product Discovery Session

Running additional product discovery sessions allows you to gather ideas for new features, analyze potential risks, refine your product vision, and prepare for product growth. We use the BRIDGeS framework for ideation and complex decision-making. Sessions are typically held on virtual whiteboards, where we use colored cards to denote Subjects (a user, role, strategy, etc.) and describe the problem through Benefits, Risks, Issues, Domain knowledge, and Goals. Between two and eight people take part in a session, including the product owner, members of the development and design teams, and industry experts or potential users.

We divide the board into two parts, the Problem Space and the Solution Space, and kick off a session in the former. We've demonstrated what the Problem Space looks like in practice in our SaaS Product Management Inside Out guide here on DZone, but here's a refresher: After prioritization, the next step is to move into the Solution Space. This is where we come up with high-level solution variations for each subject and break them down into epics and nested tasks. They should be color-coded but don't need to conform to the previous theme. Using the Uber example, here's what the space might look like when we're exploring the Mobile App solution variation: Afterward, we create a product roadmap using the epics and tasks defined in the Solution Space. This helps us stay on track as we continue to plan and work on future product iterations.
So, by the end of a session, our team has a solid idea of how to move forward (whether that's getting to work on a new feature, adjusting the MVP pricing model, or launching a new promotional campaign).

Prioritize Features

Product discovery, feedback, and customer development usually provide us with several ideas for possible features. However, not all of those features have the potential to add real value to the product, nor will we have the time and resources to build all of them. In the words of The Lean Startup author Eric Ries, value is "providing benefit to the customer; anything else is waste." So when it comes to making product improvements, it's crucial to prioritize features according to customers' needs (while keeping in mind your team's capacity to execute them). We recommend using the MoSCoW prioritization technique during early product development as well as throughout the entire product lifecycle. The letters in MoSCoW (except the o's) stand for Must, Should, Could, and Won't. When prioritizing features, we separate the must-haves from the nice-to-haves by assigning a term to each one. Here's what they denote:

Must – the project cannot do without them
Should – required in the long run
Could – low-cost tweaks
Won't – get back to them on better days

This framework helps us quickly narrow down the product backlog and focus on building features that provide genuine value to customers.

Build a Product Roadmap

If you don't already have a product development roadmap, now's the time to build one. Having a strategic plan in place will ensure that your engineering, design, and marketing efforts stay aligned with your startup objectives. We typically use the aforementioned BRIDGeS framework to generate roadmaps before and after MVP release. It lets us break down solutions into epics and nested tasks, and quickly transform them into a roadmap or implementation plan.
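As a rough sketch, the MoSCoW ranking described earlier can be expressed as a simple sort over a tagged backlog; the feature names below are invented:

```python
# Hypothetical backlog items tagged with MoSCoW categories.
MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

backlog = [
    ("Export report as PDF", "Could"),
    ("User login", "Must"),
    ("Dark mode", "Won't"),
    ("Password reset", "Should"),
]

# Sort so must-haves surface first; items in the same category keep their order.
prioritized = sorted(backlog, key=lambda item: MOSCOW_ORDER[item[1]])
for name, category in prioritized:
    print(f"{category:>6}: {name}")
```

In practice the tags live on the tickets themselves (e.g., as Jira labels), but the ordering logic is exactly this.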
How to Measure the Success of Your MVP

How do you know if your MVP has succeeded or failed? While there's no straightforward answer to this question, we use a combination of analytics and feedback to understand how well our product has performed.

The Importance of Analytics Dashboards

Without a product dashboard, it's virtually impossible to track, quantify, or take reasonable action on the data you are receiving. That's why, before launching an MVP, we recommend choosing your product metrics carefully and building a dashboard around them. After product launch, the dashboard becomes one of the most important tools at our disposal. It lets us examine how users are interacting with our MVP so we can make data-driven decisions when iterating on the product. The dashboard also enables us to catch changes in user behavior (e.g., a sudden increase in churn, or a decrease in users activating their accounts) and investigate those issues before they blow up. Ideally, every product manager or startup founder should book time in their calendar daily or weekly to review the dashboard and gather insights.

Key Startup Metrics

When choosing startup metrics, our product managers often leverage the AARRR or "pirate metrics" framework. AARRR (which stands for Acquisition, Activation, Retention, Referral, and Revenue) is useful for checking how users are engaging with your MVP at every stage of the conversion funnel. Since an MVP isn't a full-featured product, we must adjust the conversion funnel (and the AARRR framework) to reflect this. For instance, Revenue metrics aren't relevant for all types of MVPs, i.e., those that haven't been monetized yet. Meanwhile, Referral usually comes into play at the MLP stage, since emotionally engaged customers are more likely to join referral programs. On that note, here are a few important metrics to track when measuring the success of your MVP. The list includes elements of the AARRR framework and other vital metrics.
Acquisition: The number of people who were drawn to your product via promotional efforts. A high acquisition rate indicates that people are interested in what your MVP has to offer.

Customer acquisition cost (CAC): How much it costs to acquire a new customer. A high CAC might indicate that one or more of your promotional efforts aren't sustainable.

Activation: The number of signups your product has received. It can also refer to the number of people who have actually started using the product.

Retention: The number of users who remain active after signup. A steady retention rate indicates that user engagement is high and your MVP already brings value to customers.

Churn: The number of people who stop using your product. Like retention, a low churn rate indicates that your product delivers a valuable user experience. A high churn rate might indicate that something is missing.

MRR: Monthly recurring revenue. It's unlikely that MRR will be high at the MVP stage since it's a fledgling product, but over time, it's a good indicator of how well your product is performing on the market.

Feedback as a Metric

As we previously discussed, feedback is an extremely important part of MVP validation. Dashboards can only tell us so much about the overall health of our product, which is why we consider customer input to be an essential metric. For example, ever since we released the MVP of Mailtrap several years ago, user feedback has helped the team iterate effectively and carve out new directions for growth. Some examples of that feedback are "Would it be possible to have an email address for each testing inbox in Mailtrap?" or "Would you consider adding a way to configure hard and soft bounces?" Suggestions like these show that users truly engage with the platform and are interested in what else Mailtrap might offer.
While the team has been careful not to implement every piece of feedback, they continue to pay close attention to requests from the development community, and overall, this "feedback as a metric" focus has paid off.

Avoid Vanity Metrics

One of the biggest mistakes you can make while measuring the success of your MVP is paying attention to vanity metrics, i.e., numbers that make you look good but don't represent the truth about your product's health. Examples are social media followers, site impressions, number of downloads, site views, and so on. Sure, these statistics can be a helpful part of the picture of how your MVP is performing on the market; just don't place too much faith in them.

Final Remarks

There's no secret formula for growing an MVP into a unicorn. Most of the time, startups have to rely on tried-and-tested approaches to increase the likelihood that their product will reach a product-market fit. As we explained, building an MMP or MLP on top of an MVP is a strategic way to iterate on your product and grow a reliable user base. Meanwhile, promoting your MVP, systematically gathering feedback, conducting customer development, prioritizing features, and building a product development roadmap are just some of the steps you can take to boost your MVP's chances of success.
Maybe this sounds familiar: you join a new software engineering company, or move from your current team to a different one, and are asked to keep evolving an existing product. You realize that this solution uses an architectural pattern that is uncommon in your organization. Let's say it applies event sourcing for persisting the state of the domain aggregates. Even if you like event sourcing, given the specific nature of the product it most likely wouldn't have been your first choice. As a software architect, you start looking for the rationale behind that solution: you search for documentation with no success, and you ask the software engineers, who do not have the answer you were looking for.

This situation can have a relevant negative impact. Software architectural decisions are key and drive the overall design of the solution, impacting maintainability, performance, security, and many other "-ilities." There is no perfect software architecture decision; designing architectures is all about trade-offs, understanding their implications, sharing their impacts with the stakeholders, and having mitigations in place to live with them. Therefore, a well-established process for tracking those kinds of decisions is key for the success and proper evolution of a complex software product, even more so if that product is created in a highly regulated environment. However, today's software is designed and developed following agile practices, and frameworks like SAFe try to scale them for large solutions and large organizations. It is key to maintain a good balance between the decision-making process and agility, ensuring the former does not become an impediment to the latter.

Architecture Decision Records (ADRs)

My organization uses Architecture Decision Records (ADRs) to register and track architectural decisions. ADRs are a well-known tool with many different writing styles and templates, such as MADR.
Now the question is how to ensure the ADRs are in place and governed at the right level. As we will see below, ADRs are written in markdown and managed in a git repository, where everybody can contribute and consensus shall be reached to accept them and move forward with the architecture decision. For that, we have created the following process:

First Swimlane

In this process, the first swimlane includes the team interactions that trigger architecture concerns requiring a supportive architecture decision. Those concerns will come mainly from:

Product definition and refinement: At any level (e.g., epic, capability, feature, stories), architecture concerns are identified. Those concerns shall be captured by the corresponding software architect.

Feedback from agile teams: Architecture decision-sharing sessions and inspect-and-adapt sessions (e.g., system demos, iteration reviews, and retrospectives) are moments where architecture concerns can be identified. It is key to understand whether agile teams are facing problems with the architecture decisions made so far; if the teams do not believe in the architecture, it will not be materialized.

Second Swimlane

The second swimlane involves mainly the architecture team and, optionally, technical leads from the agile teams. This team meets regularly in an architecture sync meeting where the following steps are taken:

Step 1: Architecture backlog management. Concerns are registered in the architecture backlog as enablers, then prioritized and assigned. The first task associated with an enabler is creating the Architecture Decision Record.

Step 2: Gather inputs and perform the analysis. The architect assigned to work on the ADR gathers additional inputs from stakeholders, colleagues, and agile team members, working on spikes with the team to go deeper into the analysis when needed.
During this state, the architect collaborates closely with the agile teams to perform the required analysis and evaluate alternatives, across the several backlog enablers and spikes that might be needed.

Step 3: Work on the ADR. The outcome of the previous state is used to write the ADR, condensing the decision to be taken, the context around it, the alternatives assessed, the final decision, and the consequences, both positive and trade-offs. The ADR is created in the source control system; in our case, GitHub, with a main branch for accepted ADRs and a feature branch for each ongoing ADR.

Step 4: Publish the ADR. Once the ADR is ready for decision, a pull request is created, assigning all the relevant stakeholders as reviewers. Revision is not limited to architects but is open to a wider audience: software engineers from agile teams, product owners, product managers, etc.

Third Swimlane

The third swimlane's goal is agreeing on the decision under discussion. In this context, the ADR is reviewed by the architecture team during their regular meetings (e.g., architecture alignment/architecture board). Ideally, the solution shall be reached by consensus, but if an agreement isn't reached in the expected timelines, the designated software architect (depending on the decision level, this can be an enterprise architect, solution architect, or system architect) makes the final decision.

Step 5: Review the ADR in architecture alignment. The ADR owner gives a brief presentation of the ADR to their peers, who provide feedback until the next alignment.

Step 6: Collect and review comments. ADR reviewers add comments to the pull request, providing feedback to the ADR owner, who replies to the comments and applies the corresponding adjustments. This approach ensures all the concerns raised during the ADR definition are tracked and available for review at any time in the future by simply accessing the ADR's pull request.
Step 7: The designated software architect makes the final decision. This state is only needed if, for any reason, there is no agreement between architects and engineers. At some point, there must be accountability for the decision, and this accountability resides with the corresponding software architect. Ideally, this state will not be needed, but it is also true that a decision cannot be delayed forever.

Step 8: Involve stakeholders. It is bad news if you reach this state, which is there as a safeguard in case the decision taken by the architect is clearly wrong. Stakeholders are involved in the decision process to reevaluate the ADR and reach a final agreement.

Step 9: Sign the ADR. Once the ADR is accepted by the majority of reviewers, it is merged to main. From this point, the ADR becomes official, and the corresponding decision shall be realized by the engineering teams, leveraging the analysis and spikes performed in step 2. ADRs are immutable from this point on.

Step 10: Supersede the former decision. If the new decision replaces a previously accepted ADR, the old ADR can be modified to change its status to "Superseded," indicating which ADR replaces it.

Conclusion

This process might look a bit cumbersome, but it should not take more than a few days to reach a decision once the analysis phase (step 2) is completed. The pros of such a process outweigh the cons: it provides a clear architecture decision history, easy to track with well-known tools (e.g., GitHub, GitLab), and delivers the highest value for a long-lasting solution. It is also important to note that this is a collaborative process that helps balance intentional architecture with emergent design, by involving agile team members in identifying architecture concerns, in the decision analysis phase, and in feedback sharing. I hope this can help you improve how architecture decisions are made and evolve. I am happy to hear from you in the comments!
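An ADR flowing through a process like this is typically a short markdown file. The sketch below is a purely illustrative MADR-style layout; the decision, identifiers, and dates are all made up, and the status line tracks the lifecycle the process describes (proposed, accepted, superseded):

```markdown
# ADR-0012: Use event sourcing for order aggregates

- Status: Accepted   <!-- Proposed | Accepted | Superseded by ADR-00xx -->
- Deciders: system architect, team technical leads
- Date: 2023-04-18

## Context and Problem Statement
Order state changes must be fully auditable, and several read models are needed.

## Considered Options
1. Event sourcing
2. CRUD persistence with audit tables

## Decision Outcome
Chosen option: event sourcing, because it gives a complete audit trail and
supports rebuilding read models from history.

### Consequences
- Good: full audit trail; read models can be replayed.
- Trade-off: higher complexity for queries and for onboarding new developers.
```

Because accepted ADRs are immutable, a later reversal is expressed as a new file plus a one-line status change here, which keeps the decision history honest.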
The Daily Scrum serves a single purpose: inspecting the progress toward the Sprint Goal by reflecting on yesterday's learning. Then, if the need should arise, the Developers adapt their plan to accomplish the Sprint Goal. While the theory may be generally accepted, applying the idea in practice is more challenging. One of the recurring issues is the continued popularity of the "three Daily Scrum questions" from the Scrum Guide 2017. Let's reflect on why answering these obsolete Daily Scrum questions negatively influences the Scrum team.

The Purpose of the Daily Scrum

The purpose of the Daily Scrum is clearly described in the Scrum Guide; no guessing is necessary:

"The purpose of the Daily Scrum is to inspect progress toward the Sprint Goal and adapt the Sprint Backlog as necessary, adjusting the upcoming planned work. The Daily Scrum is a 15-minute event for the Developers of the Scrum Team. To reduce complexity, it is held at the same time and place every working day of the Sprint. If the Product Owner or Scrum Master are actively working on items in the Sprint Backlog, they participate as Developers." (Source: Scrum Guide 2020.)

Therefore, the Daily Scrum is an important event for inspection and adaptation, run by the Developers and guiding them for the next 24 hours on their path to achieving the Sprint Goal. The Daily Scrum is also the shortest planning horizon in Scrum and thus highly effective in guiding the Scrum team's efforts: focus on the outcome.

The Problem With the 3 Daily Scrum Questions

However, this noble idea is tainted by a widespread habit: centering the Daily Scrum around answering three questions: What did I do yesterday? What will I do today? Are there any impediments? Initially, these three Daily Scrum questions were added to the Scrum Guide 2017 as an example of how Scrum team members might inspect progress toward the Sprint Goal. However, the three questions quickly became synonymous with the Daily Scrum.
So now it was all about answering these three questions, turning the Daily Scrum into a sort of status report meeting, with Developers waiting in line to "answer these three questions" to the Scrum Master, the Product Owner, or maybe even a stakeholder. Unfortunately, the "three Daily Scrum questions" appeal to many practitioners: they are simple, efficient, and create a comforting routine. However, as a Scrum team, we care less about detailing our previous work and justifying why we deserve a pay-cheque at the end of the month. Instead, we want to understand whether we will meet the Sprint Goal. If the Scrum team's progress is doubtful, given recent developments and learning, we want to take action to get back on track. Any form of status report is a mere distraction and wasteful in that respect.

Conclusion

Scrum as a practice is outcome-focused; it is not about maximizing the number of tickets accomplished during the Sprint. Instead, as a team, we are interested in achieving the Sprint Goal. There are endless ways to run your Daily Scrum without falling into this three-question routine. For example, walk the board and figure out what needs to be done to move the tickets closest to "done" over the finishing line. Please do yourself a favor and avoid turning your Daily Scrum into a boring reporting session. How are you running your Daily Scrums? Please share your tips and tricks with us in the comments.
User stories are an effective tool for capturing requirements and defining features in a way that is easy to understand, test, and verify. They are written in natural language, without any technical jargon, and are typically structured in a format that describes the user's identity, what they want to achieve, and why. For example, a user story might be something like, "As a customer, I want to be able to pay for my order online so that I can complete my purchase more quickly and easily." User stories are typically captured in a tool like Jira, a popular project management tool used by many Agile development teams. Jira allows teams to create and track user stories, assign them to team members, set priorities, and track progress. However, user stories are often just the starting point for software development. To turn a user story into a fully functional feature, you need to link it to the database objects in your data model. A data model is a graphical representation of the entities, attributes, and relationships that make up a system or application. It defines how data is stored, organized, and accessed by the software. Linking user stories to data model entities helps ensure that the software meets the needs of the end users. Linking user stories to data model elements also helps with impact analysis: it makes it easier to identify how changes to one part of the system may affect other parts that depend on the same data. This supports more informed decisions about how to implement changes and minimizes the risk of unintended consequences. In the following tutorial, we'll walk you through the process of creating user stories in Jira, exporting them as a CSV file, importing them into ERBuilder (a data modeling and documentation tool we will use in this tutorial), and then linking those user stories with data model objects such as tables, columns, relationships, and procedures.
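To picture the impact analysis described above, here is a minimal, hypothetical traceability map between user stories and data model objects; all story and table names are invented:

```python
# Hypothetical traceability map from user stories to data model objects.
story_links = {
    "Pay online": {"tables": ["orders", "payments"], "columns": ["orders.status"]},
    "Track shipment": {"tables": ["shipments"], "columns": ["shipments.eta"]},
}

def impacted_stories(table: str) -> list[str]:
    """Which user stories are affected if this table changes?"""
    return [story for story, links in story_links.items() if table in links["tables"]]

print(impacted_stories("orders"))  # ['Pay online']
```

A tool like ERBuilder maintains these links for you; the point is that once the mapping exists, "what breaks if we change this table?" becomes a simple lookup.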
Step 1: Create User Stories in Jira

To start creating user stories in Jira, go to the desired Jira project and click the "Create" button to initiate a new issue. Select the "Story" issue type, or whatever type is appropriate for the project you are working on. Fill in the necessary information for the user story, such as the summary, description, and any relevant details.

Step 2: Export the User Stories to a CSV File

To export user stories to a CSV file, use Jira's built-in export feature. Click the "Filter" option located at the top of the screen. Next, choose "Advanced Issue Search" and apply the appropriate filters by selecting "Story" as the type, or by directly selecting the relevant issues. You can now choose which columns to include in your exported CSV file by using the "Columns" button and then clicking "Done." The columns you must have in your CSV export are Issue ID, Issue Type, Summary, Created date, Project name, Priority, Status category, and Description. Once you've applied the filters, look for the "Export" option in the upper right-hand corner of the screen, and select the CSV file format to export the data.

Step 3: Import the CSV File Into ERBuilder

After downloading and installing ERBuilder, create a new ER diagram project in ERBuilder and give the diagram a meaningful name that reflects its purpose. Alternatively, you can reverse-engineer an existing database to get the ER diagram. The next step is to import the user stories exported from Jira. To do this, follow these steps:

- Navigate to "Project" within ERBuilder.
- Open Requirements / User stories.
- Click the "Import" button, typically located on the bottom left side of the screen.
- Browse and select the exported CSV file that contains the Jira user stories you want to import.
- Click the "Open" button to complete the process.
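Before importing, it can be handy to sanity-check the export programmatically. This sketch reads a CSV with the columns listed in step 2 and keeps only Story issues; the sample rows are invented:

```python
import csv
import io

# Minimal stand-in for a Jira CSV export with the columns listed in step 2.
jira_export = io.StringIO(
    "Issue ID,Issue Type,Summary,Created date,Project name,Priority,Status category,Description\n"
    "10001,Story,Pay online,2023-01-10,Shop,High,To Do,As a customer I want to pay online\n"
    "10002,Bug,Fix typo,2023-01-11,Shop,Low,Done,Button label typo\n"
)

# Keep only user stories, mirroring the "Story" type filter applied in Jira.
stories = [row for row in csv.DictReader(jira_export) if row["Issue Type"] == "Story"]
for story in stories:
    print(story["Issue ID"], "-", story["Summary"])
```

Running a check like this before the import catches missing columns or stray non-story issues early, instead of after ERBuilder rejects the file.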
Step 4: Link the User Stories to Data Model Entities

Now you can link a user story to the corresponding entity or attribute in ERBuilder. For example, to link a user story to a table, proceed as follows:

1. Select the user story that you want to link to your table.
2. Go to the "Related tables" tab within ERBuilder.
3. Use the search box to find and select your table.
4. Once the table is selected, click the forward arrow to link it to the selected user story.

Alternatively, you can link user stories to data model entities from the entity itself, using the Treeview, Browser, or Diagram navigation options:

1. Use the Treeview, Browser, or Diagram navigation to find the entity you want, and open it by double-clicking on it.
2. Navigate to the "Requirements/User Stories" section.
3. Select the user story you want to link from the list on the left side.
4. Click the forward arrow to link the user story to the object.

The same process applies to other data model entities such as columns, relationships, constraints, triggers, procedures, and requirements. Once you have linked your user stories to your data model entities, you can generate comprehensive data model documentation that helps your team gain a better understanding of the project's requirements and the corresponding entity elements.
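The documentation produced at the end of Step 4 is essentially a traceability report: each entity listed with the stories linked to it. As a rough sketch of that output, here is a hypothetical set of links (entity names and issue keys invented) grouped into such a report; it mirrors the shape of the generated documentation, not ERBuilder's actual format.

```python
# Links recorded in Step 4: (data model entity, issue id, story summary).
links = [
    ("orders",    "PROJ-1", "Pay for order online"),
    ("orders",    "PROJ-2", "View order history"),
    ("customers", "PROJ-2", "View order history"),
]

def traceability_report(links):
    """Group the linked user stories under each data model entity."""
    report = {}
    for entity, issue_id, summary in links:
        report.setdefault(entity, []).append(f"{issue_id}: {summary}")
    return report

for entity, stories in traceability_report(links).items():
    print(f"{entity}: {'; '.join(stories)}")
```

Note that a single story (PROJ-2 here) can legitimately appear under several entities; that many-to-many shape is exactly what makes the impact analysis from earlier possible.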
In the list of activities in the Scrum software development life cycle ranked by their popularity amongst developers, "attending meetings" is perhaps locked in a perpetual battle with "writing documentation" for last place. It's quite understandable: meetings can easily become very boring, especially when a participant has nothing to contribute to the meeting at hand, and are often perceived as having little value (if any) compared to writing actual code for the software development project. However, these Scrum meetings can and do provide value to the project, even if the members of the team do not perceive it:

- Sprint refinements enable a product owner and the team to plan out development tasks in the weeks and months to come, and to identify whether any task needs further examination or design.
- Sprint plannings define what work the team should aim to accomplish in the given sprint period.
- Sprint demos provide visibility for other teams and project stakeholders into what a team is working on and has accomplished, and permit questions for clarity regarding the work (or even challenges to ensure that the produced work is robust and has fulfilled all objectives).
- Sprint retrospectives allow a team to identify factors in the previous sprint that were well received, could be improved, should be eliminated, and so on.

This is, of course, merely reciting from the doctrine of Scrum. Whatever benefits these meetings may hope to provide will not materialize if the participants are not interested in "playing their part" in them; execution, after all, eats strategy for breakfast. What can be done, then? One could dress up the Scrum meetings in cute metaphors, for example by using t-shirt sizes instead of planning-poker points for story effort estimation during sprint refinement.
This still leaves it clear that the meetings are exactly what they are, i.e., overhead that the team members have to trudge through. Alternatively, one could simply press the benefits of Scrum and appeal to the team members' professionalism so that they step up and provide the necessary meeting input. Yet even with the most skilled motivator delivering this speech, it runs the risk of sounding like browbeating or haranguing and might have the opposite effect of bringing morale down.

As well-intentioned as these attempts may be, they usually do not succeed in raising enthusiasm or buy-in for the meetings. In my opinion, this is because the thinking behind them confines itself to the traditional definition of the meetings: they are still held as bog-standard meetings in which the prescription for how the ceremony should occur is followed nearly point by point. Instead, it's possible to make these meetings less "painful" for those involved by employing some outside-of-the-box thinking. To that end, I would like to offer two examples, from my own experience working at LucaNet, of meeting formats that improved engagement while still producing the same desired output.

Expand the Scope: Sprint Demonstrations

At first glance, the objective of the sprint demonstration seems very straightforward: each of the project's teams demonstrates whatever it has accomplished in the past sprint. This usually comes with the added stipulation that the demonstrations relate to the teams' sprint goals; but what if there were no such restraint? The sprint demonstration could be transformed into a kind of internal meet-up, where teams not only exhibit what they have worked on for their projects but also present topics that they feel might benefit the other teams as well.
For example, a team whose members are inclined to experiment with and explore performance-related subjects might present their findings on programming language patterns to embrace (or avoid!), whereas another team could show off a new technology they have investigated. Aside from allowing developers to present items that genuinely interest them, permitting such presentations has the added benefit of letting teams that might otherwise be unengaged in such meetings participate as well, e.g., a tech-support team that would not normally have work related to the development sprint. Given that these meetings would undoubtedly take longer than ones presenting only sprint-related topics, it would be beneficial to adopt other aspects of tech meet-ups, such as providing food and drink for the attendees, allowing voting (and small prizes) for the "best" presentation, and so on.

Remove the Formality: Sprint Retrospectives

It's possible to go a step further than before: instead of modifying the meeting's "permitted" activities, why hold a "business meeting" at all? The objective of the sprint retrospective is to obtain feedback from the team's members about how the previous sprint went, what they would retain, improve, or eliminate, and so on. This requires the team members to communicate about their experiences, and such communication about past experiences and the opinions thereof is, in effect, a group conversation. There are other settings in which this kind of group conversation can be cultivated; one example is a team meal. That is, the team assembles for a meal (breakfast, lunch, dinner, whichever) and talks about its experiences during the previous sprint while eating and conversing about whatever other topics come to mind.
Whether this food is provided by the company or by the participants is up to the organizer, although having the participants supply the food (aside from being cheaper for the company!) adds the intrigue of a pot-luck in which participants can introduce their colleagues to foods those colleagues might not otherwise encounter, something especially poignant on a team whose members come from different cities or countries. The main benefit of this approach, however, is the atmosphere it helps to create. At its core, the sprint retrospective is an exercise in catharsis: the participants "unload" the feelings, both positive and negative, that they have accumulated during the sprint about how it played out. When this meeting becomes a shared meal, the setting switches from formal to informal, and this relaxed ambiance can draw those feelings out of the participants better than the traditional format, which means more effective feedback and thus more effective input for improving future sprints.

Parting Words

A qualifier needs to be made here: the practices described above date from the pre-COVID era, before the widespread adoption of remote work. Some of them would undoubtedly need to be adapted for companies whose teams comprise members who never (or almost never) see each other in the physical world, for example by holding team breakfasts via video call from each member's own dining room or kitchen, and the productive effects of these approaches might well differ for remote teams compared to in-person teams. Nonetheless, the base principle stands: it may be worthwhile to challenge the conventions of traditional meeting formats to discover whether modifications to a group's meetings can yield more productive and engaged teams.
Reinventing the wheel does not need to be the goal; it may very well be that what is already prescribed is the best solution for the team at hand. Even in such cases, however, the team stands to benefit from this exercise, as the end result is still an introspection into what truly serves the team well in its development process.
Dr. Srijith Sreenivasan, DevOps Architect / Azure Specialist, Team Rockstars IT