According to Stack Overflow's Developer Survey, JavaScript is the most popular programming language. As the reach of the web and mobile grows day by day, JavaScript and JavaScript frameworks are becoming more popular, so it should come as no surprise that JavaScript has become a preferred choice for test automation as well. Over the past few years, a lot of development has happened around open-source, JavaScript-based test automation frameworks, and we now have multiple JavaScript testing frameworks that are robust enough to be used professionally. There are scalable frameworks that web developers and testers can use to automate unit test cases and even create complete end-to-end automation test suites. Mocha is one JavaScript testing framework that has been well renowned since 2016, according to StateOfJS. That said, when we talk about JavaScript automation testing, we cannot leave Selenium out of the discussion. So I thought a step-by-step Mocha testing tutorial would help you kickstart your JavaScript automation testing with Mocha and Selenium. We will also look into how you can run it on the LambdaTest automation testing platform to get better browser coverage and faster execution times. By the end of this Mocha testing tutorial, you will have a clear understanding of the setup, installation, and execution of your first automation script with Mocha for JavaScript testing.

What Will You Learn From This Mocha Testing Tutorial?

In this article, we are going to dive deep into Mocha JavaScript testing to perform automated browser testing with Selenium and JavaScript. We will:

- Start with the installation and prerequisites for the Mocha framework and explore its advantages.
- Execute our first Selenium JavaScript test through Mocha with examples.
- Execute group tests.
- Use the assertion library.
- Encounter possible issues along with their resolutions.
- Execute some Mocha test scripts on the Selenium cloud grid platform with minimal configuration changes, testing on various browsers and operating systems.

What Makes Mocha Prevalent?

Mochajs, or simply Mocha, is a feature-rich JavaScript test framework that runs test cases on Node.js and in the browser, making testing simple and fun. By running tests serially, Mocha delivers flexible and precise reporting while mapping uncaught exceptions to the correct test cases. Mocha provides a categorical way to write structured code for testing applications by classifying them into test suites and test case modules for execution, and it produces a test report after the run by mapping errors to the corresponding test cases.

What Makes Mocha a Better Choice Compared To Other JavaScript Testing Frameworks?

- Range of installation methods: It can be installed globally or as a development dependency for the project. It can also be set up to run test cases directly in the web browser.
- Broad browser support: It can run test cases seamlessly on all major web browsers and provides many browser-specific methods and options. Each revision of Mocha provides upgraded JavaScript and CSS builds for different web browsers.
- A number of ways to present test reports: It provides users with a variety of reporters, such as list, progress, and JSON, with the default reporter displaying output based on the hierarchy of test cases.
- Support for several JavaScript assertion libraries: It helps users cut testing costs and speed up the process through compatibility with a range of JavaScript assertion libraries, such as Expect.js, Should.js, and Chai. This multiple-library support makes it easier for testers to write long, complex test cases.
- Works in TDD and BDD environments: Mocha supports behavior-driven development (BDD) and test-driven development (TDD), allowing developers to write high-quality test cases and improve test coverage.
- Support for synchronous and asynchronous testing: Mocha is designed with features that strengthen asynchronous testing, including async/await, by invoking the callback once the test is finished. It enables synchronous testing by simply omitting the callback.

Setting Up Mocha and Initial Requirements

Before we start our endeavor and explore more of Mocha testing, there are some important prerequisites for this Mocha testing tutorial on automation testing with Selenium and JavaScript:

Node.js and npm: The Mocha module requires Node.js to be installed on the system. If it is not already present, it can be installed by downloading the installer directly from the official Node.js website.

Mocha package module: Once we have successfully installed Node.js, we can use the node package manager, npm, to install the required package, which is Mocha. To install the latest version using the npm command-line tool, we will first initialize npm with the below command:

```shell
$ npm init
```

Next, we will install the Mocha module using npm with the below command:

```shell
$ npm install -g mocha
```

Here, the "-g" flag installs the module globally; it allows us to use the module as a command-line tool and does not limit its use to the current project.
The --save-dev option below will place the Mocha executable in our ./node_modules/.bin folder:

```shell
$ npm install --save-dev mocha
```

We will now be able to run Mocha from the command line using the mocha keyword.

Java SDK: Since the Selenium server components are built upon Java, we will also install the Java Development Kit (preferably JDK 7.0 or above) on the system and configure the Java environment.

Selenium WebDriver: We require the Selenium WebDriver language bindings, and they should already be present in our npm node modules. If the package is not found, we can install the latest version of selenium-webdriver using the below command:

```shell
$ npm install selenium-webdriver
```

Browser driver: Lastly, we will install the driver for the specific browser we are going to use. This executable also needs to be placed inside the same bin folder:

```shell
$ npm install -g chromedriver
```

Writing Our First Mocha JavaScript Testing Script

We will create a project directory named mocha_test and then a subfolder named scripts with a test script named single_test.js inside it. Finally, we will initialize our project with the command npm init. This will interactively create a package.json file, which will contain all our required project configurations and will be needed to execute our test script single_test.js.
Finally, we will have a file structure that looks like the below:

```
mocha_test
  | -- scripts
       | -- single_test.js
  | -- package.json
```

```json
{
  "name": "mocha-selenium-test-sample",
  "version": "1.0.0",
  "description": "Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup",
  "scripts": {
    "test": "npm run single",
    "single": "./node_modules/.bin/mocha scripts/single_test.js"
  },
  "author": "rohit",
  "license": "",
  "homepage": "https://mochajs.org",
  "keywords": [
    "mocha",
    "bdd",
    "selenium",
    "examples",
    "test",
    "tdd",
    "tap",
    "framework"
  ],
  "dependencies": {
    "bluebird": "^3.7.2",
    "mocha": "^6.2.2",
    "selenium-webdriver": "^3.6.0"
  }
}
```

You have successfully configured your project and are ready to execute your first Mocha JavaScript testing script. You can now write your first test script in the file single_test.js that was created earlier:

```javascript
var assert = require('assert');

describe('IndexArray', function() {
  describe('#checkIndex negative()', function() {
    it('the function should return -1 when the value is not present', function() {
      assert.equal(-1, [4, 5, 6].indexOf(7));
    });
  });
});
```

Code Walkthrough of Our Mocha JavaScript Testing Script

We will now walk through the test script and understand exactly what is happening in the script we just wrote. When writing any Mocha test case in JavaScript, there are two basic function calls that do the job for us under the hood:

- describe()
- it()

We have used both of them in the test script above.

describe(): Mainly used to define test groups in Mocha in a simple way. The describe() function takes two arguments: the first is the name of the test group, and the second is a callback function. We can also nest test groups as required by the test case.
Looking at our test case, we have a test group named IndexArray whose callback function contains a nested test group named #checkIndex negative(), and inside that is another callback function that contains our actual test.

it(): This function is used for writing individual Mocha JavaScript test cases. Its description should convey, in plain language, what the test does. The it() function also takes two arguments: the first is a string explaining what the test should do, and the second is a callback function containing our actual test. In the script above, the first argument of the it() function is "the function should return -1 when the value is not present", and the second argument is a callback function that contains our test condition with the assertion.

Assertion: Assertion libraries are used to verify whether the condition given to them is true or false. Our test verifies the result with the assert.equal(actual, expected) method, which performs an equality check between the actual and expected parameters. It relies on the Node.js built-in assert module, so we are not pulling in an entire assertion library; a single require line is enough for this Mocha testing tutorial. If the expected parameter equals the actual parameter, the test passes and the assertion succeeds. If not, the test fails and the assertion throws.
It is important to check that the below section is present in our package.json file, as it contains the configuration of our Mocha JavaScript testing script:

```json
"scripts": {
  "test": "npm run single",
  "single": "./node_modules/.bin/mocha scripts/single_test.js"
},
```

Finally, we can run our test from the base directory of the project using either of the below commands:

```shell
$ npm test
```

or

```shell
$ npm run single
```

The output of the above run indicates that we have successfully passed our test: the assert condition returns the proper value based on the test input passed.

Let us extend it further and add one more test case to our test suite and execute the test. Our Mocha JavaScript testing script single_test.js will now have one more test that checks the positive scenario and gives the corresponding output:

```javascript
var assert = require('assert');

describe('IndexArray', function() {
  describe('#checkIndex negative()', function() {
    it('the function should return -1 when the value is not present', function() {
      assert.equal(-1, [4, 5, 6].indexOf(7));
    });
  });
  describe('#checkIndex positive()', function() {
    it('the function should return 0 when the value is present', function() {
      assert.equal(0, [8, 9, 10].indexOf(8));
    });
  });
});
```

Running the script again shows both tests passing. You have successfully executed your first Mocha JavaScript testing script on your local machine for Selenium and JavaScript execution.

Note: If you have a larger test suite for cross browser testing with Selenium JavaScript, execution on local infrastructure is not your best call.

Drawbacks of a Local Automated Testing Setup

As you expand your web application, you bring in new code changes, overnight hotfixes, and more. With these changes come new testing requirements: your Selenium automation test scripts are bound to grow, and you may need to test across more browsers, more browser versions, and more operating systems.
This becomes a challenge when you perform JavaScript Selenium testing through a local setup. Some of the major pain points of performing Selenium JavaScript testing on a local setup are:

- Testing can only be performed locally, i.e., on the browsers that are installed on the system. This is a limitation when you need to execute cross browser testing across all the major browsers for reliable results.
- The test team might not be aware of all the new browser versions, so compatibility with them may not be tested properly. A proper cross browser testing strategy is needed to ensure satisfactory test coverage.
- Certain scenarios require executing tests on legacy browsers or browser versions for a specific set of users and operating systems.
- It might be necessary to test the application on various combinations of browsers and operating systems, and that is not easily available with a local, in-house setup.

Now, you may be wondering how to overcome these challenges. Well, don't stress too much, because an online Selenium Grid comes to the rescue.

Executing a Mocha Script Using Remote Selenium WebDriver on the LambdaTest Selenium Grid

Since executing our test script on a cloud grid has great benefits to offer, let us get our hands dirty with it. The process of executing a script on the LambdaTest Selenium Grid is fairly straightforward. We can execute our local test script by adding the few lines of code required to connect to the LambdaTest platform:

- It lets us execute our tests on different browsers seamlessly.
- It offers all the popular operating systems, along with the flexibility to combine operating systems and browsers in various ways.
- We can pass our environment and config details from within the script itself.
- Test scripts can be executed in parallel, saving execution time.
- It provides an interactive user interface and dashboard to view and analyze test logs.
- It provides a desired capabilities generator with an interactive user interface, used to select the environment specification details from the various combinations on offer.

So, in our case, the multiCapabilities section in the single.conf.js and parallel.conf.js configuration files will look similar to the below:

```javascript
multiCapabilities: [
  {
    // Desired capabilities
    build: "Mocha Selenium Automation Parallel Test",
    name: "Mocha Selenium Test Firefox",
    platform: "Windows 10",
    browserName: "firefox",
    version: "71.0",
    visual: false,
    tunnel: false,
    network: false,
    console: false
  }
]
```

Next, the most important thing is to generate our access key token, which is basically a secret key used to connect to the platform and execute automated tests on LambdaTest. This access key is unique to every user and can be copied and regenerated from the profile section of the user account. The access key, username, and hub information can alternatively be fetched from the LambdaTest Automation dashboard.

Accelerating With Parallel Testing Using the LambdaTest Selenium Grid

In our demonstration, we will create a script that uses Selenium WebDriver to open a website and assert whether the correct website is open. If the assert returns true, the test case passed successfully and will show up in the automation logs dashboard. If the assert returns false, the test case fails, and the errors will be displayed in the automation logs. Now, since we are using LambdaTest, we would like to leverage it and execute our tests on different browsers and operating systems.
We will execute our test script as below:

- Single test: On a single environment (Windows 10) and a single browser (Chrome).
- Parallel test: On parallel environments, i.e., different operating systems (Windows 10 and macOS Catalina) and different browsers (Chrome, Mozilla Firefox, and Safari).

Here we will create a new subfolder, conf, in our project directory. This folder will contain the configurations required to connect to the LambdaTest platform. We will create single.conf.js and parallel.conf.js, where we declare the user configuration, i.e., the username and access key, along with the desired capabilities for both our single test and parallel test cases. The parallel configuration file looks like the below:

```javascript
LT_USERNAME = process.env.LT_USERNAME || "irohitgoyal"; // LambdaTest username
LT_ACCESS_KEY = process.env.LT_ACCESS_KEY || "1267367484683738"; // LambdaTest access key

// Configurations
var config = {
  commanCapabilities: {
    build: "Mocha Selenium Automation Parallel Test", // Build name displayed in the test logs
    tunnel: false // Required only if we need to test a localhost app through the tunnel
  },
  multiCapabilities: [
    {
      // Desired capabilities; this is very important to configure
      name: "Mocha Selenium Test Firefox", // Test name to distinguish among test cases
      platform: "Windows 10", // Name of the operating system
      browserName: "firefox", // Name of the browser
      version: "71.0", // Browser version to be used
      visual: false, // Whether to take step-by-step screenshots; false for now
      network: false, // Whether to capture network logs; false for now
      console: false // Whether to capture console logs; false for now
    },
    {
      name: "Mocha Selenium Test Chrome",
      platform: "Windows 10",
      browserName: "chrome",
      version: "79.0",
      visual: false,
      network: false,
      console: false
    },
    {
      name: "Mocha Selenium Test Safari",
      platform: "MacOS Catalina",
      browserName: "safari",
      version: "13.0",
      visual: false,
      network: false,
      console: false
    }
  ]
};

exports.capabilities = [];

// Merge the common capabilities into every browser-specific entry
config.multiCapabilities.forEach(function(caps) {
  var temp_caps = JSON.parse(JSON.stringify(config.commanCapabilities));
  for (var i in caps) temp_caps[i] = caps[i];
  exports.capabilities.push(temp_caps);
});
```

Our single test script, single_test.js, builds a remote WebDriver pointed at the LambdaTest hub:

```javascript
var assert = require("assert"), // declaring assert
  webdriver = require("selenium-webdriver"), // declaring Selenium WebDriver
  conf_file = process.argv[3] || "conf/single.conf.js"; // passing the configuration file

var caps = require("../" + conf_file).capabilities;

// Build the WebDriver that we will be using on LambdaTest
var buildDriver = function(caps) {
  return new webdriver.Builder()
    .usingServer(
      "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub"
    )
    .withCapabilities(caps)
    .build();
};

// Declaring the test group
describe("Search Engine Functionality for Single Test Using Mocha in Browser " + caps.browserName, function() {
  var driver;
  this.timeout(0);

  // The before hook triggers before the test execution
  beforeEach(function(done) {
    caps.name = this.currentTest.title;
    driver = buildDriver(caps);
    done();
  });

  // Defining the test case to be executed
  it("should find the required search result in the browser", function(done) {
    driver.get("https://www.mochajs.org").then(function() {
      driver.getTitle().then(function(title) {
        setTimeout(function() {
          console.log(title);
          assert(
            title.match(
              "Mocha | The fun simple flexible JavaScript test framework | JavaScript | Automated Browser Test"
            ) != null
          );
          done();
        }, 10000);
      });
    });
  });

  // The after hook checks whether the test passed or failed
  afterEach(function(done) {
    if (this.currentTest.isPassed) {
      driver.executeScript("lambda-status=passed");
    } else {
      driver.executeScript("lambda-status=failed");
    }
    driver.quit().then(function() {
      done();
    });
  });
});
```

The parallel test script, parallel_test.js, is identical except that it loops over every capability set:

```javascript
var assert = require("assert"), // declaring assert
  webdriver = require("selenium-webdriver"), // declaring Selenium WebDriver
  conf_file = process.argv[3] || "conf/parallel.conf.js"; // passing the configuration file

var capabilities = require("../" + conf_file).capabilities;

// Build the WebDriver that we will be using on LambdaTest
var buildDriver = function(caps) {
  return new webdriver.Builder()
    .usingServer(
      "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub"
    )
    .withCapabilities(caps)
    .build();
};

capabilities.forEach(function(caps) {
  // Declaring the test group
  describe("Search Engine Functionality for Parallel Test Using Mocha in Browser " + caps.browserName, function() {
    var driver;
    this.timeout(0);

    // The before hook triggers before the test execution
    beforeEach(function(done) {
      caps.name = this.currentTest.title;
      driver = buildDriver(caps);
      done();
    });

    // Defining the test case to be executed
    it("should find the required search result in the browser " + caps.browserName, function(done) {
      driver.get("https://www.mochajs.org").then(function() {
        driver.getTitle().then(function(title) {
          setTimeout(function() {
            console.log(title);
            assert(
              title.match(
                "Mocha | The fun simple flexible JavaScript test framework | JavaScript | Automated Browser Test"
              ) != null
            );
            done();
          }, 10000);
        });
      });
    });

    // The after hook checks whether the test passed or failed
    afterEach(function(done) {
      if (this.currentTest.isPassed) {
        driver.executeScript("lambda-status=passed");
      } else {
        driver.executeScript("lambda-status=failed");
      }
      driver.quit().then(function() {
        done();
      });
    });
  });
});
```

Finally, we have our package.json, which carries the additional configuration for parallel testing and the required files:

```json
{
  "name": "mocha-selenium-automation-test-sample",
  "version": "1.0.0",
  "description": "Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup",
  "scripts": {
    "test": "npm run single && npm run parallel",
    "single": "./node_modules/.bin/mocha scripts/single_test.js conf/single.conf.js",
    "parallel": "./node_modules/.bin/mocha scripts/parallel_test.js conf/parallel.conf.js --timeout=50000"
  },
  "author": "rohit",
  "license": "",
  "homepage": "https://mochajs.org",
  "keywords": [
    "mocha",
    "bdd",
    "selenium",
    "examples",
    "test",
    "tdd",
    "tap"
  ],
  "dependencies": {
    "bluebird": "^3.7.2",
    "mocha": "^6.2.2",
    "selenium-webdriver": "^3.6.0"
  }
}
```

The final thing we should do is execute our tests from the base project directory using the below command:

```shell
$ npm test
```

This command will validate the test cases and execute our test suite, i.e., the single test and the parallel test cases. Now, if we open the LambdaTest platform and check the user interface, we will see that the tests ran on the Chrome, Firefox, and Safari browsers on the environments specified, i.e., Windows 10 and macOS, and that the tests passed successfully.
The Automation dashboard shows our Mocha code running over the different browsers, i.e., Chrome, Firefox, and Safari, on the LambdaTest Selenium Grid. The results of the test script execution, along with the logs, can be accessed from there.

Alternatively, to execute only the single test, we can run:

```shell
$ npm run single
```

To execute the test cases across the different environments in parallel, run:

```shell
$ npm run parallel
```

Wrap Up!

This concludes our Mocha testing tutorial; we now have a clear idea of what Mocha is and how to set it up. It allows us to automate an entire test suite, lets us get started quickly with minimal configuration, and keeps tests readable and easy to update. We can now perform end-to-end tests using group tests and the assertion library, and the test results can be fetched directly from the command-line terminal.
The rapid growth of technology has led to an increased demand for efficient and effective software testing methods. One of the most promising advancements in this field is the integration of Natural Language Processing (NLP) techniques. NLP, a subset of artificial intelligence (AI), is focused on the interaction between computers and humans through natural language. In the context of software testing, NLP offers the potential to automate test case creation and documentation, ultimately reducing the time, effort, and costs associated with manual testing processes. This article explores the benefits and challenges of using NLP in software testing, focusing on automating test case creation and documentation. We will discuss the key NLP techniques used in this area, real-world applications, and the future of NLP in software testing.

Overview of Natural Language Processing (NLP)

NLP is an interdisciplinary field that combines computer science, linguistics, and artificial intelligence to enable computers to understand, interpret, and generate human language. This technology has been used in various applications such as chatbots, voice assistants, sentiment analysis, and machine translation. The primary goal of NLP is to enable computers to comprehend and process large amounts of natural language data, making it easier for humans to interact with machines. NLP techniques can be classified into two main categories: rule-based and statistical approaches. Rule-based approaches rely on predefined linguistic rules and patterns, while statistical approaches utilize machine learning algorithms to learn from data.

NLP in Software Testing

Traditionally, software testing has been a labor-intensive and time-consuming process that requires a deep understanding of the application's functionality and the ability to identify and report potential issues. Testers must create test cases, execute them, and document the results in a clear and concise manner.
With the increasing complexity of modern software applications, the manual approach to software testing becomes even more challenging and error-prone. NLP has the potential to revolutionize software testing by automating test case creation and documentation. By leveraging NLP techniques, testing tools can understand requirements and specifications written in natural language, automatically generating test cases and maintaining documentation.

Automating Test Case Creation

NLP can be used to automate the generation of test cases by extracting relevant information from requirement documents or user stories. The main NLP techniques involved in this process include:

- Tokenization: Breaking down a text into individual words or tokens, making it easier to analyze and process the text.
- Part-of-speech (POS) tagging: Assigning grammatical categories (such as nouns, verbs, and adjectives) to each token in a given text.
- Dependency parsing: Identifying the syntactic structure and relationships between the tokens in a text.
- Named entity recognition (NER): Detecting and categorizing entities (such as people, organizations, and locations) in a text.
- Semantic analysis: Extracting the meaning and context from the text to understand the relationships between the entities and actions described in the requirements or user stories.

By using these techniques, NLP-based tools can process natural language inputs and automatically generate test cases based on the identified entities, actions, and conditions. This not only reduces the time and effort needed for test case creation but also helps ensure that all relevant scenarios are covered, minimizing the chances of missing critical test cases.

Automating Test Documentation

One of the key aspects of software testing is maintaining accurate and up-to-date documentation that outlines test plans, test cases, and test results.
This documentation is crucial for understanding the state of the application and ensuring that all requirements have been met. However, manually maintaining test documentation can be time-consuming and error-prone. NLP can be used to automate test documentation by extracting relevant information from test cases and test results and generating human-readable reports. This process may involve the following NLP techniques:

- Text summarization: Creating a condensed version of the input text, highlighting the key points while maintaining the original meaning.
- Text classification: Categorizing the input text based on predefined labels or criteria, such as the severity of a bug or the status of a test case.
- Sentiment analysis: Analyzing the emotional tone or sentiment expressed in the text, which can be useful for understanding user feedback or bug reports.
- Document clustering: Grouping similar documents together, making it easier to organize and navigate the test documentation.

By automating the documentation process, NLP-based tools can ensure that the test documentation is consistently up-to-date and accurate, reducing the risk of miscommunication or missed issues.

Real-World Applications

Several organizations have already started incorporating NLP into their software testing processes, with promising results. Some examples of real-world applications include:

IBM's Requirements Quality Assistant (RQA)

RQA is an AI-powered tool that uses NLP techniques to analyze requirement documents and provide suggestions for improving their clarity, consistency, and completeness. By leveraging NLP, RQA can help identify potential issues early in the development process, reducing the likelihood of costly rework and missed requirements.

Testim.io

Testim is an end-to-end test automation platform that uses NLP and machine learning to generate, execute, and maintain tests for web applications.
By understanding the application's user interface (UI) elements and their relationships, Testim can automatically create test cases based on real user interactions, ensuring comprehensive test coverage.

qTest by Tricentis

qTest is an AI-driven test management tool that incorporates NLP techniques to automate the extraction of test cases from user stories or requirement documents. qTest can identify entities, actions, and conditions within the text and generate test cases accordingly, streamlining the test case creation process.

Challenges and Future Outlook

While NLP holds great promise for automating test case creation and documentation, there are still challenges to overcome. One major challenge is the ambiguity and complexity of natural language. Requirements and user stories can be written in various ways, with different levels of detail and clarity, making it difficult for NLP algorithms to consistently extract accurate and relevant information. Additionally, the accuracy and efficiency of NLP algorithms depend on the quality and quantity of the training data. As software testing is a domain-specific area, creating high-quality training data sets can be challenging and time-consuming.

Despite these challenges, the future outlook for NLP in software testing remains optimistic. As NLP algorithms continue to improve and mature, the integration of NLP into software testing tools is expected to become more widespread. Moreover, the combination of NLP with other AI techniques, such as reinforcement learning and computer vision, has the potential to further enhance the capabilities of automated testing solutions.

Summary

Natural Language Processing (NLP) offers a promising approach to automating test case creation and documentation in software testing.
By harnessing the power of NLP techniques, software testing tools can efficiently process and understand requirements written in natural language, automatically generate test cases, and maintain up-to-date documentation. This has the potential to significantly reduce the time, effort, and costs associated with traditional manual testing processes. Real-world applications, such as IBM's RQA, Testim.io, and QTest by Tricentis, have demonstrated the value of incorporating NLP into software testing workflows. However, there are still challenges to be addressed, such as the ambiguity and complexity of natural language and the need for high-quality training data.

As NLP algorithms continue to advance, the role of NLP in software testing is anticipated to expand and become more prevalent. Combining NLP with other AI techniques may further enhance the capabilities of automated testing solutions, leading to even more efficient and effective software testing processes. To summarize, the integration of NLP in software testing holds great promise for improving the efficiency and effectiveness of test case creation and documentation. As the technology continues to evolve and mature, it is expected to play an increasingly important role in the future of software testing, ultimately transforming the way we test and develop software applications.
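Two of the NLP techniques discussed above, text classification and text summarization, can be illustrated with a deliberately naive sketch. Real products such as the ones named above use trained models; this toy version uses keyword rules and leading-sentence extraction, and every name in it is invented for the example:

```python
# Toy illustration of NLP applied to test documentation. classify_severity
# is a stand-in for text classification; summarize for text summarization.

def classify_severity(report: str) -> str:
    """Text classification: map a bug report to a severity label via keywords."""
    text = report.lower()
    if any(w in text for w in ("crash", "data loss", "security")):
        return "critical"
    if any(w in text for w in ("fails", "error", "broken")):
        return "major"
    return "minor"

def summarize(report: str, max_sentences: int = 1) -> str:
    """Text summarization (crudely): keep only the leading sentence(s)."""
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

report = ("Checkout crashes when the cart is empty. "
          "Observed on build 1.4.2 using Chrome. Logs attached.")
print(classify_severity(report))  # -> critical
print(summarize(report))          # -> Checkout crashes when the cart is empty.
```

A production tool would replace both functions with learned models and proper sentence segmentation; the point here is only the shape of the pipeline: raw test artifacts in, labeled and condensed documentation out.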
In this article, I will look at Specification by Example (SBE) as explained in Gojko Adzic's book of the same name. It's a collaborative effort between developers and non-developers to arrive at textual specifications that are coupled to automatic tests. You may also have heard of it as behavior-driven development or executable specifications. These are not synonymous concepts, but they do overlap.

It's a common experience in any large, complex project. Crucial features do not behave as intended. Something was lost in the translation between intention and implementation, i.e., between business and development. Inevitably, we find that we haven't built quite the right thing. Why wasn't this caught during testing? Obviously, we're not testing enough, or we're testing the wrong things. Can we make our tests more insightful? An enthusiastic developer and SBE adept jumps to the challenge. Didn't you know you can write all your tests in plain English? Haven't you heard of the Gherkin syntax? She demonstrates the canonical Hello World of executable specifications, using Cucumber for Java:

Scenario: Items priced 100 euro or more are eligible for a 5% discount for loyal customers
  Given Jill has placed three orders in the last six months
  When she looks at an item costing 100 euros or more
  Then she is eligible for a 5% discount

Everybody is impressed. The Product Owner greenlights a proof of concept to rewrite the most salient test in Gherkin. The team will report back in a month to share their experiences. The other developers brush up their Cucumber skills but find they need to write a lot of glue code. It's repetitive and not very DRY. Like the good coders they are, they make it more flexible and reusable.
Scenario: discount calculator for loyal customers
  Given I execute a POST call to /api/customer/12345/orders?recentMonths=6
  Then I receive a list of 3 OrderInfo objects
  And a DiscountRequestV1 message for customer 12345 is put on the queue /discountcalculator
  [ you get the message ]

Reusable, yes; readable, no. They're right to conclude that the textual layer offers nothing other than more work. It has zero benefits over traditional code-based tests. Business stakeholders show no interest in these barely human-readable scripts, and the developers quickly abandon the effort.

It's About Collaboration, Not Testing

The experiment failed because it tried to fix the wrong problem. It failed because better testing can't repair a communication breakdown between intended functionality and implementation. SBE is about collaboration. It is not a testing approach. You need this collaboration to arrive at accurate and up-to-date specifications.

To be clear, you always have a spec (like you always have an architecture). It may not always be a formal one. It can be a mess that only exists in your head, which is only acceptable if you're a one-person band. In all other cases, important details will get lost or confused in the handover between disciplines. The word handover has a musty smell to it, reminiscent of old-school Waterfall: the go-to punchbag for everything we did wrong in the past, but also an antiquated approach that few developers under the age of sixty have any real experience with. Today we're Agile and multi-disciplinary. We don't have specialists who throw documents over the fence of their silos. It is more nuanced than that, now as well as in 1975. Waterfall didn't prohibit iteration. You could always go back to an earlier stage. Likewise, a modern multi-disciplinary team is not a fungible collection of Jacks and Jills of all trades. Nobody can be a Swiss army knife of IT skills and business domain knowledge.
But one enduring lesson from the past is that we can't produce flawless and complete specifications of how the software should function before writing its code. Once you start developing, specs always turn out over-complete, under-complete, and just plain wrong in places. They have bugs, just like code. You make them better with each iteration. Accept that you may start off incomplete and imprecise.

You Always Need a Spec

Once we have built the code according to spec (whatever form that takes), do we still need that spec, like an architect's drawing after the house is built? Isn't the ultimate truth already in the code? Yes, it is, but only at a granular level, and only accessible to those who can read it. It gives you detail, but not the big picture. You need to zoom out to comprehend the why.

Here's where I live. This map is the equivalent of source code: only people who have heard of the Dutch village of Heeze can relate it to the world. It's missing the context of larger towns and a country. The next map zooms out only a little, but with the context of the country's fifth-largest city, it's recognizable to all Dutch inhabitants. The next map should be universal. Even if you can't point out the Netherlands on a map, you must have heard of London.

Good documentation provides a hierarchy of such maps, from global and generally accessible to more detailed, requiring more domain-specific knowledge. At every level, there should be sufficient context about the immediately connecting parts. If there is a handover at all, it's never of the kind: "Here's my perfect spec. Good luck, and report back in a month." It's the finalization of a formal yet flexible document created iteratively with people from relevant disciplines in an ongoing dialogue throughout the development process. It should be versioned and tied to the software that it describes.
Hence, the only logical place for it is the source code repository, at least for specifications that describe a well-defined body of code, a module, or a service. Such specs can rightfully be called the ultimate source of truth about what the code does, and why. Because everybody was involved and invested, everybody understands it and can (in their own capacity) help create and maintain it. However, keeping versioned specs with your software is no automatic protection against mismatches, when changes to the code don't reflect the spec and vice versa. Therefore, we make the spec executable, by coupling it to testing code that exercises the code the spec covers and validates the results.

It sounds so obvious and attractive. Why isn't everybody doing it if there's a world of clarity to be gained? There are two reasons: it's hard, and you don't always need SBE. We routinely overestimate the importance of the automation part, which puts the onus disproportionately on the developers. It may be a deliverable of the process, but only the collaboration can make it work.

More Art Than Science

Writing good specifications is hard, and it's more art than science. If there ever was a need for clear, unambiguous, SMART writing, executable specifications fit the bill. Not everybody has a talent for it. As a developer with a penchant for writing, I flatter myself that I can write decent spec files on my own. But I shouldn't – not without at least a good edit from a business analyst. For one, I don't know when my assumptions are off the mark, and I can't always keep technocratic wording from creeping in. A process that I favor and find workable is one where a businessperson drafts acceptance criteria, which form the input to features and scenarios. Together with a developer, they are refined: adding clarity and edge cases, removing duplication and ambiguity. Only then can they be rigorous enough to be turned into executable spec files.
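What "making the spec executable" means mechanically is that each textual step is bound to code. Real frameworks (Cucumber, behave, pytest-bdd) do this with annotated step definitions; the toy registry below sketches the same mechanism in plain Python, with illustrative names rather than any real framework API:

```python
# Toy sketch of Gherkin-style glue code: a registry maps step patterns to
# functions, and a runner matches each line of a scenario against them.
import re

STEPS = []

def step(pattern):
    """Register a step definition, the way @given/@when/@then decorators do."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"Jill has placed (\d+) orders in the last six months")
def given_orders(ctx, n):
    ctx["orders"] = int(n)

@step(r"she looks at an item costing (\d+) euros or more")
def when_item(ctx, price):
    ctx["price"] = int(price)

@step(r"she is eligible for a (\d+)% discount")
def then_discount(ctx, pct):
    # The business rule under test: 3+ recent orders and a 100+ euro item.
    eligible = ctx["orders"] >= 3 and ctx["price"] >= 100
    assert eligible and int(pct) == 5

def run(scenario, ctx):
    """Match each scenario line to its step definition and execute it."""
    for line in scenario:
        for pattern, fn in STEPS:
            m = pattern.search(line)
            if m:
                fn(ctx, *m.groups())
                break

run([
    "Given Jill has placed 3 orders in the last six months",
    "When she looks at an item costing 100 euros or more",
    "Then she is eligible for a 5% discount",
], {})
print("scenario passed")
```

The point of the exercise: the spec file stays readable by the businessperson who drafted the acceptance criteria, while the step definitions pin each sentence to verifiable behavior, so a drifting spec fails the build.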
Writing executable specifications can be tremendously useful for some projects and a complete waste of time for others. It's not at all like unit testing in that regard. Some applications are huge but computationally simple. These are the enterprise behemoths with their thousands of endpoints and hundreds of database tables. Their code is full of specialist concepts specific to the world of insurance, banking, or logistics. What makes these programs complex and challenging to grasp is the sheer number of components and the specialist domain they relate to. The math in fintech often isn't that challenging: you add, subtract, multiply, and watch out for rounding errors. SBE is a good candidate to make the complexity of all these interfaces and edge cases manageable.

Then there's software with a very simple interface behind which lurks some highly complex logic. Consider a hashing algorithm, or any cryptographic code, that needs to be secure and performant. Test cases are simple. You can tweak the input string, seed, and log rounds, but that's about it. Obviously, you should test for performance and resource usage, but all that is best handled in a code-based test, not Gherkin. This category of software is the world of libraries and utilities. Their concepts stay within the realm of programming and IT, relating less directly to concepts in the real world. As a developer, you don't need a business analyst to explain the why. You can be your own. No wonder so many Open Source projects are of this kind.
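For that hashing example, the code-based test the author has in mind might look like the following sketch. It uses `hashlib.pbkdf2_hmac` as a stand-in for whatever primitive is under test, with the salt playing the role of the seed and the iteration count the role of the log rounds; the checks (determinism, parameter sensitivity, a coarse cost check) are illustrative, not a complete crypto test suite:

```python
# Code-based test for a hashing utility: tweak input, seed, and work factor.
import hashlib
import time

def hash_pw(password: str, salt: bytes, rounds: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)

# Determinism: same inputs must yield the same digest.
assert hash_pw("s3cret", b"seed", 1000) == hash_pw("s3cret", b"seed", 1000)
# Sensitivity: changing the seed (or any parameter) must change the digest.
assert hash_pw("s3cret", b"seed", 1000) != hash_pw("s3cret", b"other", 1000)
# Performance: more rounds must cost more time (a coarse sanity check only;
# real performance testing would use a proper benchmark harness).
t0 = time.perf_counter(); hash_pw("s3cret", b"seed", 1000)
t1 = time.perf_counter(); hash_pw("s3cret", b"seed", 200_000)
t2 = time.perf_counter()
assert (t2 - t1) > (t1 - t0)
print("all hashing checks passed")
```

Nothing here would gain anything from a Given/When/Then layer: there is no business reader to serve, and the assertions are already the clearest possible statement of intent.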
TestOps is an emerging approach to software testing that combines the principles of DevOps with testing practices. TestOps aims to improve the efficiency and effectiveness of testing by incorporating it earlier in the software development lifecycle and automating as much of the testing process as possible. TestOps teams typically work in parallel with development teams, focusing on ensuring that testing is integrated throughout the development process. This includes testing early and often, using automation to speed up testing, and creating a cycle of continuous testing and improvement. TestOps also works closely with operations teams to ensure that the software is deployed in a stable and secure environment. In short, TestOps emphasizes collaboration between the testing and operations teams to improve the overall efficiency and quality of the software development and delivery processes.

Place of TestOps in Software Development

The Need for TestOps

Adopting DevOps on its own comes with challenges that TestOps helps address:

Initial Investment
Adopting DevOps requires an initial investment of time, resources, and money. This can be a significant barrier to adoption for some organizations, particularly those with limited budgets or resources.

Learning Curve
DevOps requires a significant cultural shift in the way that teams work together, and it can take time to learn new processes, tools, and techniques. This can be challenging for some organizations, particularly those with entrenched processes and cultures.

Security Risks
DevOps practices can increase the risk of security vulnerabilities if security measures are not properly integrated into the development process. This can be particularly problematic in industries with strict security requirements, such as finance and healthcare.

Automation Dependencies
DevOps relies heavily on automation, which can create dependencies on tools and technologies that may be difficult to maintain or update.
This can lead to challenges in keeping up with new technologies or changing requirements.

Cultural Resistance
DevOps requires a collaborative and cross-functional culture, which may be difficult to achieve in organizations with siloed teams or where there is resistance to change.

Advantages of TestOps

Continuous Testing
TestOps enables continuous testing, which lets organizations detect defects early in the development process. This reduces the cost and effort required to fix defects and ensures that software applications can be delivered with high quality.

Improved Quality
By integrating testing processes into the DevOps pipeline, TestOps ensures that quality is built into software applications from the outset. This reduces the risk of defects and improves the overall quality of the software.

Greater Efficiency
TestOps enables the automation of testing processes, which can help organizations reduce the time and effort required to test software applications. This can also reduce the costs associated with testing.

Increased Collaboration
TestOps promotes collaboration between development and testing teams, which can help identify and resolve issues earlier in the development process. This can lead to faster feedback and better communication between teams.

Faster Time-to-Market
TestOps allows the automation of testing processes, which reduces the time required to test software applications. This enables organizations to release software applications faster, which can give them a competitive advantage in the marketplace.

Scope of TestOps in the Future

The scope of TestOps in the future is significant as software development continues to become more complex and fast-paced. Because TestOps combines software testing with DevOps practices, it is becoming increasingly important for organizations to implement it to ensure they can deliver high-quality software applications to market quickly.
Some of the trends that are likely to shape the future of TestOps include:

Increasing Adoption of Agile and DevOps Methodologies
Agile and DevOps methodologies are becoming increasingly popular among organizations that want to deliver software applications faster and more efficiently. TestOps is a natural extension of these methodologies and is likely to become an essential component of Agile and DevOps practices in the future.

Greater Focus on Automation
Automation is a critical aspect of TestOps and will likely become even more important in the future. The use of automation tools and techniques can help organizations reduce the time and effort required to test software applications while also improving the accuracy and consistency of testing.

The Growing Importance of Cloud Computing
Cloud computing is becoming increasingly popular among organizations that want to reduce their IT infrastructure costs and improve scalability. TestOps can be implemented in cloud environments, and it is likely to become even more important as more organizations move their software applications to the cloud.

Overall, the scope of TestOps in the future is vast, and it is likely to become an essential component of software development practices in the coming years.

Conclusion

Is TestOps the future of software testing? Obviously, yes. With the increasing adoption of Agile and DevOps methodologies, there is a growing need for software testing processes that can keep pace with rapid development and deployment cycles. TestOps can help organizations achieve this by integrating testing into the software development lifecycle and ensuring that testing is a continuous and automated process. Furthermore, as more and more software is deployed in cloud environments, TestOps will become even more important in ensuring that applications are secure, scalable, and reliable.
In summary, TestOps is a key trend in software testing that is likely to continue to grow in the future as organizations look for ways to improve the efficiency and quality of their software development and delivery processes.
I've heard a lot about Jira not being optimized for QA: at its core, it is a project management solution, not a test management one. But let's be honest here, Jira feels unnecessarily complicated when you get started with it, regardless of your position or goals. And given that it's the go-to standard for organizing and managing software development projects – of which QA is an integral part – you'll barely have a choice in the matter. That being said, Jira can (and should) be optimized in a way that is equally efficient for developing new features, testing, and releasing them. In this article, we talk about the following:

- Effective use of Jira in software testing.
- Optimizing Jira for the QA workflow.
- Writing and managing tests in Jira.
- Additional tools and Jira plugins that can help your QA process.

Jira for Development vs. Jira for QA

Let's start with the elephant in the room. I've seen a lot of materials stating that Jira is not built with bug tracking in mind and that it is often lacking in "this and that." This is true in the sense that Jira wasn't built for development, HR, procurement, or marketing specifically. It is a project management system designed to help with, well, managing teams. That being said, teams will use Jira differently, and some may have an arguably simpler time. Developers, for instance, might not need to spend as much time in Jira as QA engineers because they don't need to create tickets daily: they'll be working on a handful of User Stories per Sprint. You, on the other hand, will have to cover these stories with tests. The key to striking a delicate balance where everyone can do their job effectively lies in planning your processes in Jira in a way that considers the interests of everyone on the team.
Designing Processes With a Jira QA Workflow in Mind

Let's look at a couple of handy tricks and process improvements that can not only help QA engineers be more effective but improve everyone's work on a software development project.

Include Testers When You Estimate QA Tasks

QA engineers aren't typically responsible for providing estimates. However, this can be a good practice, as they have much more hands-on experience and can deliver more accurate estimates for QA tasks.

Allow Testers to Help Set Bug Priorities

As a rule of thumb, QA engineers are quite good at prioritizing bugs thanks to their user experience testing background. Obviously, a project manager can change these priorities based on business needs, but it is beneficial to allow testers to set initial bug priorities. As a tester, you need to consider the following factors when prioritizing a bug in Jira:

- How often does it occur?
- What is the severity (how much does it impact the user)?
- Is the issue blocking the main functionality or some other features?
- What devices/browsers does the bug happen in?
- Is the bug impacting users in a way where they churn or leave negative reviews (have similar bugs caused this before)?
- For how long has the bug been happening (death by a thousand papercuts is still a thing)?
- What other features are also coming in this sprint, and what are their priorities?

Design Your Board to Be More Process-Friendly

The iconic Kanban board is probably the first thing that pops into one's head when thinking about Jira. And if you've worked on several projects, you've probably seen boards arranged in a variety of ways. The general approach sorts your issues into several columns that move from "to do" to "in progress" and then to "done." That being said, you are not limited to a specific number of columns. Use this to your advantage. I'd suggest having more columns that clearly illustrate the path your issue needs to go through before it can be considered done.
I'd consider having the following columns:

- To do
- Ready for development
- In progress
- Ready for review
- Ready for testing
- In testing
- Reopened
- Ready for release
- Released

Having a separate column for every stage may seem overwhelming, but only at first glance. In reality, this array gives everyone a nice high-level view and becomes an asset for the QA team.

QA-Relevant Issue Fields

QA engineers don't typically need too many extra bells and whistles when it comes to issue fields. In fact, testers will mostly need the default ones, with a handful of minor exceptions. However, these tiny changes can make a world of difference in the long run. Here's an example of the issue fields I find useful in a QA ticket.

- Summary: A short – usually one line, max – summary of the main problem.
- Description: Home to your prerequisites (if needed), expected and actual results, steps to reproduce, device details, app version, and environment info (unless you have specific fields for this information), and generally any other detail that could help reproduce the issue without asking QA for additional help.
- Issue type: Typically assigned by the Project Manager, who decides whether an issue is a Story or something else. That being said, you need to ensure that your QAs have the option to create bug report tickets on their own.
- Assignee: Self-explanatory.
- Priority: Where either the Project Manager or the QA engineers themselves set the priority of a bug.
- Labels: People often overlook this field. However, I find it to be an exceptionally good way of grouping tickets together.
- Environment: Can add a lot of necessary details, like whether the feature is in dev or staging. You can also specify hardware and software, like using Safari on an iPad Mini.
- Affects Version and Fix Version: These fields can also add a bit more context by clearly stating the software version.
This is handy because the QA engineer can easily understand which version of the app/software should be used for testing.

- Components: Comes in handy when the product consists of multiple components and you need to specify which one is affected.
- Attachment: Use this field to add screenshots with errors, log files, or even video recordings of an issue.
- Linked Issues: Helps you link bugs to the Stories or Epics for feature implementation when you know there's a relation.
- Due date: Useful for general awareness and understanding of when the issue is planned to be fixed. Do note, however, that this field should not be filled in by QAs.

Please keep in mind that this suggested list of fields is what I consider good to have, but they are not a hard requirement. If you feel comfortable that your process can skip some fields and simply hold more info in the description – go for it. The same can be said about adding more fields. Yes, it may seem tempting to formalize everything possible, but you shouldn't go too far. The more fields there are, the more time you'll spend filling them in. This is even more true given the nature of the QA process, as you may need to create dozens of tickets at a time.

Cultivate the Right Comments Culture

Comments in tickets can be a gold mine for a QA engineer, as detailed and specific notes will help discover a lot about the feature. They can also become a rabbit hole of irrelevant fluff, as reading through dozens of them takes away precious time that could have been better spent elsewhere. Jira is not Slack, nor should it ever become a messenger. Therefore, it's best that your team uses tools that are meant for communication to discuss most details and questions, and puts the finalized requirements or needed context into tickets.

Tags

Whenever you are adding any details that should be relevant to a certain developer – tag them.
Obviously, the same is true for engineers and managers who want a QA person to look at something specific. Pro tip: you can use a checklist plugin to break down tasks into actionable items. Smart Checklist will allow you to add custom statuses, tag users, and set up deadlines for each item individually.

Sub-Tasks

Many testing teams use sub-tasks for bugs that are related to a feature ticket. This approach usually narrows down to either personal preference or the culture a team has developed. And it makes a lot of sense, at least on paper. That being said, I'd advise against it, as sub-tasks are not clearly visible on the board, making it harder for the manager to review progress. Another reason for using separate tickets for bugs is that sometimes a decision can be made to release a feature despite it having some bugs, in which case you'd really want the ability to move it to done. A workaround would be to create separate issues and link them together. This way, you'll still see the feature a bug is linked to without negatively affecting the board's visibility.

Backlog Grooming

A typical backlog consists of dozens of tickets with a "medium" or lower priority. If uncared for, it will eventually become a black hole that hoards copious amounts of tickets that never see the light of day again. But have you ever heard of the straw that broke the camel's back? People who continuously experience minor bugs and inconveniences will eventually shift to a more stable solution. Therefore, you need to ensure that at least some of the tickets from the backlog are continuously making their way into the sprint. Some of the best practices I'd recommend to make sure that minor bugs don't become a major issue are:

- Periodically having Sprints that are dedicated to lower-priority issues.
- Committing to pulling a certain number (it doesn't have to be large) of tickets from the backlog into every sprint.
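The QA-relevant fields discussed earlier can be captured as a simple bug-report template, which is handy when you create dozens of tickets at a time. This is a sketch with illustrative field names, not an exact Jira REST payload:

```python
# Sketch of a reusable QA bug-ticket template built from the fields above
# (summary, description with steps/expected/actual, priority, labels, etc.).
def bug_ticket(summary, steps, expected, actual, priority="Medium",
               labels=(), environment="", affects_version=""):
    # The description bundles steps to reproduce with expected vs. actual,
    # so the developer can reproduce the issue without asking QA for help.
    description = (
        "Steps to reproduce:\n"
        + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
        + f"\n\nExpected: {expected}\nActual: {actual}"
    )
    return {
        "issue_type": "Bug",
        "summary": summary,
        "description": description,
        "priority": priority,
        "labels": list(labels),
        "environment": environment,
        "affects_version": affects_version,
    }

ticket = bug_ticket(
    summary="Login fails with valid credentials on Safari",
    steps=["Open the login page", "Enter valid credentials", "Press Log in"],
    expected="User is redirected to the dashboard",
    actual="Error 500 page is shown",
    priority="High",
    labels=["login", "safari"],
    environment="staging, Safari on iPad Mini",
    affects_version="2.3.1",
)
print(ticket["summary"])
```

A template like this could feed ticket creation through Jira's API or simply serve as a checklist; either way, it keeps every bug report uniformly filled in.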
Sprint Lag for Automation

Covering a feature that's still in progress with automated tests can lead to a lot of unnecessary work, as things may (and probably will) change. That's why having your automation engineer lag one sprint behind, covering completed features with scripts, is actually a nice practice. The fact that the tickets were already covered with test cases is also a boon, as they add even more context. You'll have to properly label these kinds of tickets to avoid confusion.

How to Write Tests in Jira?

We need to understand what tests are before we can write and execute them properly. Luckily, the concept is quite far from rocket science. Your typical test consists of three parts:

- The scenario: The basic idea of a test, used to indicate what you are trying to validate. An example could be making sure a user can only log in using valid credentials.
- Expected result: The behavior you'd expect from a feature when it is working correctly. Think of it as the requirement for success.
- Validation: The description of the method you'll be using to test the feature against the requirements.

Following these basic principles will help you write, document, and execute excellent tests. As for using them in Jira specifically, here's what you should do. Indicate that the issue is a Test Case in the title, for example: "Test Case: Login functionality." Mention the necessary details in the description: share the test steps you will be taking, like accessing the login interface and entering valid and/or invalid credentials, and share the expected results as well.

Creating test cases in Jira can be a tedious and repetitive task, but it is an essential step in ensuring the quality and functionality of a product. A well-written test case can save time and budget in the long run. Consider following these tips to write kick-ass test cases.
- Start by clearly describing the purpose of the feature and its benefits to the user. This goes beyond just stating what the feature does and should also explain why it is important to have.
- Define the user interactions and activities required to achieve the desired outcome. This will help ensure that all necessary steps are covered in the testing process.
- Specify the framework and methodology of the test cases, including any relevant tools or techniques that will be used.
- Use detailed steps that illustrate what to do, where to navigate, which button to click, etc. This helps the team understand and follow the test case.
- Prioritize the test cases based on their importance and relevance to the project. This will help in forming the scope for smoke and regression testing.
- Always link the test cases to the requirements in Jira to ensure that the testing process is aligned with the product's specifications. This will also help in identifying any dependencies between the functionality being tested.
- Group the test cases based on the product components or user personas, and use labels to categorize them. This will make it easier to manage and search for specific test cases in the future.
- Include variations in the test cases by testing the product under different scenarios and conditions. This will help ensure that the product works as intended in all possible situations.
- Take advantage of additional tools and functionality available in the Atlassian Marketplace to enhance the testing experience within Jira.

How to Manage Tests in Jira?

Generally speaking, there are two schools of thought when it comes to managing tests in Jira. One is pretty straightforward and relies on the functionality that's available in Jira out of the box. The other advocates for using external products, like the add-ons available on the Atlassian Marketplace. Xray is probably the most well-known add-on for testing; hence, I'll be using it for my examples.
But for now, let's focus on option number one – vanilla Jira.

Using Sub-Tasks

As I mentioned above, you can use sub-tasks for tests, but this option has a series of drawbacks and limitations. Therefore, I'd suggest adding an additional issue type to your Jira instance for test cases. This approach will allow you to use sub-tasks more effectively (yes, you'll need to make custom sub-tasks for this to work), as they'll be attached to a specific test case issue, and you'll be using them for submitting the results of your tests. For this, you'll need to open the global settings of your Jira instance and choose the issues option from the drop-down menu. From there, click the create button and add your custom tasks and sub-tasks. Then go to schemes to add your newly created issue type: simply drag it into the column on the left. Connect the newly created issue type with the test case results in the Jira sub-task and use them for your QA purposes.

The benefit of this approach: it is workable, not necessarily too hard to configure, and it will not cost you any additional funds. On the downside, it does not resolve the challenges associated with reusing test cases, and there's barely any support for exploratory testing or test automation.

Using Plugins

Then there's the option of using additional add-ons from the marketplace, like the aforementioned Xray. Xray is a testing tool that supports the entire testing process, including planning, designing, execution, and reporting. To facilitate this process, Xray uses specific issue types within Jira. During each phase of testing, you can use the following issue types:

- Plan phase: Test Plan issues are used to create and manage test plans.
- Design phase: Precondition and Test issue types are used to define the testing specifications. Test Sets can be used to organize the tests.
- Execute phase: Test Execution issues are used to execute tests and track their results.
- Report phase: Test execution issues are used to generate built-in requirement coverage reports. Custom issues can also be created using Jira Software tools to create additional reports.

If you are new to Xray, it is recommended to start with a small project and use test issues to create and execute tests for your requirements in an ad hoc, unplanned manner. The pros and cons of this approach are essentially the reverse of using Jira's default functionality: you will be able to do more in less time; however, the added comfort will cost you.

## Using Checklists

As I mentioned, there are two "conventional" ways of managing tests in Jira. However, there are some that are less common, and they typically involve certain combinations of the two. Checklists are a great example of an unconventional way of managing tests. Smart Checklist for Jira, for instance, allows you to add checklists to your issues. The built-in Markdown editor simplifies your workflow, as you can type or even copy-paste your tests. You can use the formatting options to your advantage when writing tests. For example, you can use headers for test topics, checklist items for test cases, and details to store your expected results. This way, your tests will have actionable statuses, plus deadline and assignee functionality, all while being attached to the parent ticket.

## Conclusion

Jira may seem overwhelming at first glance, but from complexity comes configurability and flexibility. You are free to mold the software in a way that fits your processes, resulting in more productivity inside a unified hub every team in your company uses.
A software bug or defect exists when actual behavior deviates from expected behavior. This is when customers are unhappy and the development team's morale takes a hit. If the expected behavior is not clear, deviation from what's intuitively expected raises the question: Is this a bug or a feature? In any case, from a customer's point of view, bugs are undesirable. Customer-driven development teams make their best efforts to identify and fix bugs. There are two complementary ways of viewing software bugs. One involves the metaphor of harmful insects that can be eliminated using pesticides: software bugs are the insects, and software testing is the pesticide. The second is to view them as a learning opportunity that calls for the right learning processes.

## Faults, Errors, and Failures

But what exactly is a bug? What can make a software product deviate from expected behavior? In terms of programming code, there can be faults, errors, and failures. A fault in the code may result in an error, and an error may result in a failure. Another metaphor can help clarify these concepts. We wake up in the morning in pain or feeling dizzy; think of these symptoms as failures. We decide to go to the doctor. The doctor measures our blood pressure, which turns out to be very low. Think of this as an error, since our pressure should not be that low. Then we do a blood test, which shows a high cholesterol level. The high cholesterol level is another error, like any deviation from acceptable levels. The doctor analyzes all the failures and errors and tries to find the fault (or faults) that resulted in the errors which led to the failures. Errors can add up, resulting in bigger failures (more pain). Errors can be unrelated to each other, some causing failures while others cause none. Errors can also cancel each other out, resulting in little or no failure (little or no pain).
Failures could be due to the same fault, due to different faults, or due to the interaction between faults. Consider a simple method, numZero, that accepts an array of numbers and counts the number of zeros it contains. Suppose numZero has a fault in the initialization expression of its loop variable i. Due to this fault, numZero doesn't take the first element of the input array into account; this is an error. As a result, if the first element is zero, then numZero will fail in the calculation; if the first element is non-zero, numZero will not fail. For example, numZero([1, 9, 0, 7]) correctly evaluates to 1, while numZero([0, 1, 2, 6]) results in a failure since it incorrectly evaluates to zero. There is a tendency in the software industry to use the terms bug and defect informally to refer to any of the three: faults, errors, and failures.

## Types of Bugs

To find bugs, we need to test. Consequently, bug types correspond to the types of testing we perform to catch them. There are functional bugs, where functionality is broken. There are also non-functional bugs, like security bugs, reliability bugs, availability bugs, usability bugs, and performance bugs. And there are input validation bugs, happy-path bugs, negative testing bugs, regression bugs, smoke testing bugs, and bugs caught via manual or automated testing. The list of bug types is long, but most importantly, testing cannot prove that there are no bugs. It can help us gain confidence that we've found most of them, or the most important ones. Beware, though, of the trap of constantly seeking zero bugs after each and every testing activity: is it always worth the effort and the costs?

## Bugs Should Be Detected Early

A bug can be created during many stages of the software development life cycle (SDLC): during requirements analysis, architectural design, development, or deployment. The earlier we detect and fix a bug, the better.
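Returning to the numZero method discussed above, here is a minimal reconstruction (sketched in Java from the description, since the original listing did not survive formatting; the class name is mine):

```java
public class NumZeroExample {

    // Counts the zeros in the input array. The fault is the loop
    // initialization `i = 1`, which skips the first element, so a zero
    // in position 0 is never counted.
    public static int numZero(int[] arr) {
        int count = 0;
        for (int i = 1; i < arr.length; i++) { // fault: should be i = 0
            if (arr[i] == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // First element non-zero: the fault causes no visible failure.
        System.out.println(numZero(new int[]{1, 9, 0, 7})); // prints 1 (correct)
        // First element zero: the error propagates to a failure.
        System.out.println(numZero(new int[]{0, 1, 2, 6})); // prints 0 (should be 1)
    }
}
```

Note how the fault is always present, yet only certain inputs turn the internal error into an observable failure, which is exactly why both test cases above are needed.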
This is because the cost of fixing a bug increases considerably as we move from requirements to design, to development, and then to QA testing. As an English proverb says: a stitch in time saves nine. To detect bugs, testing at every level is key: at the unit level, the integration level, the API level, the system level, and the acceptance level. One software group used testers throughout the SDLC; bugs detected during requirements analysis were documented and solved with high priority. Another group trained business analysts and product owners to have a testing mindset. They would look for positive and negative cases and problematic edge cases, and having found such problems, they would directly notify testers to incorporate specific scenarios into their test-case design. Another group employed product owners who were good exploratory testers: people who had the knowledge to explore and the ability to constructively critique requirements, who also had business domain knowledge and a willingness to expand and focus on it. Product owners with such a background would often catch bugs early. They would also take the necessary steps to avoid complex requirements, generating requirements that were clear, unambiguous, complete, and testable.

## As Insects Get Immune to Pesticides

Following the insect-pesticide metaphor: in time, insects become immune to pesticides, and we must use new pesticides to kill them. Similarly, software bugs in our code will get "immune" to our tests, and we will need new tests to find and fix them. Tests are focused on specific areas of the code, and over time they will find fewer bugs, because the code they exercise will eventually be improved until few (or no) defects remain to be caught. If we want our tests to keep catching important bugs, we must test in new ways and at new testing levels, try different types of testing, and let our test cases evolve as our code evolves.
## Defect Tracking Systems

A defect tracking system is a tool for documenting problems and fixes. Information like defect priority and severity, reproduction steps, possible workarounds, and issuer and assignee are some of the details included. One team viewed documenting defects as rework and waste. They fixed bugs as soon as they were discovered: unit tests were written to reproduce defects, and the code was fixed so that the unit tests passed. The tests and the fix were then checked in, and work would continue as normal. Testers would collaborate with developers as soon as a problem was discovered. If a developer could provide a fix immediately, then the bug was not documented. If no developer was immediately available, or if the fix required difficult decision-making, then the bug was documented. Another team would create a list of bugs in a Google document shared between testers, developers, and product owners. These were bugs found during feature testing and before the feature release; at least all the important bugs were fixed before release. If the decision was to release candidate features with some low-priority bugs, then they would document them in a defect tracking system, and they always documented defects detected after feature releases. There was a group that viewed defect tracking systems as a database of knowledge. All bugs were reported to the system, irrespective of whether they were found before or after feature releases. They were looking for patterns in the bugs, asking questions like: Are there interrelated bugs? Why do some bugs that get fixed keep coming back? Why are some specific bugs sporadic? Root cause analysis was conducted in a quest to find answers and learn how to prevent similar issues from recurring.

## Detecting Bugs Is One Thing, Fixing Them Is Another

Although preventing, detecting, and fixing bugs early in the SDLC is clearly a best practice, fixing bugs can be challenging.
If the costs of fixing are greater than the costs of not fixing, then maybe a workaround (if available) could keep customers happy. This is where bug tracking systems can be very useful, since all known issues that need to be tackled can be clearly documented and communicated to all stakeholders. Important bugs that require cross-team collaboration, may involve difficult decisions across teams, and may take time to fix should be documented. Maybe we've found an important bug, but there are currently more important ones that need to be fixed; we should fix the most important first while keeping the less important ones filed and filtered by priority.

## Artificially Introduced Bugs as a Learning Process

Bugs may also be artificially introduced into the system to check whether the existing processes are good enough to detect them. This policy has been popular in a number of software groups. One group introduced UI bugs to check whether their overnight test automation runs would catch them. Another group held regular bug training sessions for developers: bugs were introduced by the DevOps team, and hints were given to the developers to help them find and fix them. There was a group that introduced bugs at the microservice level to check microservice behavior; they were also interested in how such carefully crafted bugs would propagate across microservices. A number of groups injected faults to ensure their systems were fault tolerant. They switched systems on and off and deliberately ran software that caused failures. They learned how tolerant their systems were, what kinds of failures can emerge, how to fix such problems, and how to respond to a number of unforeseen circumstances.

## Wrapping Up

Software bugs exist in one form or another in almost all software products. Their presence makes everyone unhappy, including customers and software development teams.
Maybe one of the prerequisites for a software engineer to flourish is to keep a balance between faults, errors, and failures in code. Since they are inextricably linked to our work, we need to be clear on how to prevent them as much as we can, how to spot them when they are introduced, and how to fix and handle them under any circumstances.
In a microservices architecture, it's common to have multiple services that need access to sensitive information, such as API keys, passwords, or certificates. Storing this sensitive information in code or configuration files is not secure, because it's easy for attackers to gain access to this information if they can access your source code or configuration files. To protect sensitive information, microservices often use a secrets management system, such as Amazon Secrets Manager, to securely store and manage this information. Secrets management systems provide a secure and centralized way to store and manage secrets, and they typically provide features such as encryption, access control, and auditing. Amazon Secrets Manager is a fully managed service that makes it easy to store and retrieve secrets, such as database credentials, API keys, and other sensitive information. It provides a secure and scalable way to store secrets and integrates with other AWS services to enable secure access to these secrets from your applications and services. Some benefits of using Amazon Secrets Manager in your microservices include:

- Centralized management: You can store all your secrets in a central location, which makes it easier to manage and rotate them.
- Fine-grained access control: You can control who has access to your secrets and use AWS Identity and Access Management (IAM) policies to grant or revoke access as needed.
- Automatic rotation: You can configure Amazon Secrets Manager to automatically rotate your secrets on a schedule, which reduces the risk of compromised secrets.
- Integration with other AWS services: You can use Amazon Secrets Manager to securely access secrets from other AWS services, such as Amazon RDS or AWS Lambda.

Overall, using a secrets management system like Amazon Secrets Manager can help improve the security of your microservices by reducing the risk of sensitive information being exposed or compromised.
In this article, we will discuss how you can define a secret in Amazon Secrets Manager and later pull it from a Spring Boot microservice.

## Creating the Secret

To create a new secret in Amazon Secrets Manager, you can follow these steps:

1. Open the Amazon Secrets Manager console by navigating to the AWS Management Console, selecting "Secrets Manager" from the list of services, and then clicking "Create secret" on the main page.
2. Choose the type of secret you want to create: You can choose between "Credentials for RDS database" or "Other type of secrets." If you select "Other type of secrets," you will need to enter a custom name for your secret.
3. Enter the secret details: The information you need to enter will depend on the type of secret you are creating. For example, if you are creating a database credential, you will need to enter the username and password for the database.
4. Configure the encryption settings: By default, Amazon Secrets Manager uses AWS KMS to encrypt your secrets. You can choose to use the default KMS key or select a custom key.
5. Define the secret permissions: You can define who can access the secret by adding one or more AWS Identity and Access Management (IAM) policies.
6. Review and create the secret: Once you have entered all the required information, review your settings and click "Create secret" to create the secret.

Alternatively, you can also create secrets programmatically using the AWS SDK or CLI. Here's an example of how you can create a new secret using the AWS CLI:

```shell
aws secretsmanager create-secret --name my-secret --secret-string '{"username": "myuser", "password": "mypassword"}'
```

This command creates a new secret called "my-secret" with a JSON-formatted secret string containing a username and password. You can replace the secret string with any other JSON-formatted data you want to store as a secret.
You can also create these secrets from your microservice itself. Add the AWS SDK for Java dependency to your project by adding the following to your pom.xml file:

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-secretsmanager</artifactId>
    <version>1.12.83</version>
</dependency>
```

Initialize the AWS Secrets Manager client by adding the following code to your Spring Boot application's configuration class:

```java
@Configuration
public class AwsConfig {

    @Value("${aws.region}")
    private String awsRegion;

    @Bean
    public AWSSecretsManager awsSecretsManager() {
        return AWSSecretsManagerClientBuilder.standard()
                .withRegion(awsRegion)
                .build();
    }
}
```

This code creates a new bean for the AWS Secrets Manager client and injects the AWS region from the application.properties file. Create a new secret by adding the following code to your Spring Boot service class:

```java
@Autowired
private AWSSecretsManager awsSecretsManager;

public void createSecret(String secretName, String secretValue) {
    CreateSecretRequest request = new CreateSecretRequest()
            .withName(secretName)
            .withSecretString(secretValue);
    CreateSecretResult result = awsSecretsManager.createSecret(request);
    String arn = result.getARN();
    System.out.println("Created secret with ARN: " + arn);
}
```

This code creates a new secret with the specified name and value. It uses the CreateSecretRequest class to specify the name and value of the secret and then calls the createSecret method of the AWS Secrets Manager client. The method returns a CreateSecretResult object, which contains the ARN (Amazon Resource Name) of the newly created secret. These are just the basic steps to create secrets in Amazon Secrets Manager; depending on your use case and requirements, additional configuration or setup may be needed.
## Pulling the Secret Using a Microservice

Here are the complete steps for pulling a secret from Amazon Secrets Manager using Spring Boot. First, add the following dependencies to your Spring Boot project:

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-secretsmanager</artifactId>
    <version>1.12.37</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-core</artifactId>
    <version>1.12.37</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws</artifactId>
    <version>2.3.2.RELEASE</version>
</dependency>
```

Next, configure the AWS credentials and region in your application.yml file:

```yaml
aws:
  accessKey: <your-access-key>
  secretKey: <your-secret-key>
  region: <your-region>
```

Create a configuration class for pulling the secret:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.aws.secretsmanager.AwsSecretsManagerPropertySource;
import org.springframework.context.annotation.Configuration;

import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;
import com.fasterxml.jackson.databind.ObjectMapper;

@Configuration
public class SecretsManagerPullConfig {

    @Autowired
    private AwsSecretsManagerPropertySource awsSecretsManagerPropertySource;

    public <T> T getSecret(String secretName, Class<T> valueType) throws Exception {
        AWSSecretsManager client = AWSSecretsManagerClientBuilder.defaultClient();
        String secretId = awsSecretsManagerPropertySource.getProperty(secretName);
        GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest()
                .withSecretId(secretId);
        GetSecretValueResult getSecretValueResult = client.getSecretValue(getSecretValueRequest);
        String secretString = getSecretValueResult.getSecretString();
        ObjectMapper objectMapper = new ObjectMapper();
        return objectMapper.readValue(secretString, valueType);
    }
}
```

In your Spring Boot service, you can inject the SecretsManagerPullConfig class and call the getSecret method to retrieve the secret:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    @Autowired
    private SecretsManagerPullConfig secretsManagerPullConfig;

    public void myMethod() throws Exception {
        MySecrets mySecrets = secretsManagerPullConfig.getSecret("mySecrets", MySecrets.class);
        System.out.println(mySecrets.getUsername());
        System.out.println(mySecrets.getPassword());
    }
}
```

In the above example, MySecrets is a Java class that represents the structure of the secret in Amazon Secrets Manager, and the getSecret method returns an instance of MySecrets that contains the values of the secret.

Note: The above code assumes the Spring Boot application is running on an EC2 instance with an IAM role that has permission to read the secret from Amazon Secrets Manager. If you are running the application locally or in a different environment, you will need to provide AWS credentials with the necessary permissions to read the secret.

## Conclusion

Amazon Secrets Manager is a secure and convenient way to store and manage secrets such as API keys, database credentials, and other sensitive information in the cloud. By using Amazon Secrets Manager, you can avoid hardcoding secrets in your Spring Boot application and, instead, retrieve them securely at runtime. This reduces the risk of exposing sensitive data in your code and makes it easier to manage secrets across different environments. Integrating Amazon Secrets Manager with Spring Boot is a straightforward process thanks to the AWS SDK for Java. With just a few lines of code, you can create and retrieve secrets from Amazon Secrets Manager in your Spring Boot application.
This allows you to build more secure and scalable applications that can be easily deployed to the cloud. Overall, Amazon Secrets Manager is a powerful tool that can help you manage your application secrets in a more secure and efficient way. By integrating it with Spring Boot, you can take advantage of its features and benefits without compromising on the performance or functionality of your application.
Making tests readable and maintainable is a challenge for any software engineer. Sometimes, a test's scope becomes even more significant when we need to create a complex object or receive information from other sources, such as a database, web service, or property file. You can gain simplicity by splitting the object creation from the test scope using JUnit's parameterized tests. In this video tutorial, we'll learn how to use parameterized tests, their source types, and how to simplify your tests with these techniques. The first question that might come to your mind is: "Why should I split the object from my test scope?" Here are some points on when this makes sense:

- Avoid complexity: Eventually, you need to create instances that may vary with the context, taking this information from a database, microservices, and so on. To make it easier, you can divide and conquer, moving this setup away from the test.
- Define scope: To focus on the test or increase its cohesion, you can split the setup out and receive the parameters by injection.

The goal here is not to push parameter injection into all tests; but once the parameters are complex, or you need to run the same scenario with different values, your tests are good candidates for it. JUnit's parameterized-test support is an extension that can help in those cases. You need to add this dependency to your project; the snippet below shows the dependency in a Maven project:

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <version>${junit.version}</version>
    <scope>test</scope>
</dependency>
```

With this dependency, we'll need to replace the conventional Test annotation with ParameterizedTest and tell the source where and how those parameters will be injected. In this post, we'll list three ways, all of them driven by annotations:

- ValueSource: This source works with literal values directly on the annotation.
- MethodSource: You can use methods on the class scope to feed the parameters.
- ArgumentsSource: If you wish, you can apply single responsibility here, with a dedicated class that provides the arguments for the test.

To put it into practice, let's explore a soccer team scenario. We want to test the team business rules, where we need to ensure the team quantity, the team's name, etc. Let's start with the most accessible source, the ValueSource, which lets us run the same test with different values. Another point is that with ParameterizedTest, you can define the test name using the parameter. Let's use it to test the team's name. The code below shows the name test, which should create a team with the given name and match the value from the parameter. The test will run twice, and you'll see two tests with different names:

```java
@ParameterizedTest(name = "Should create a team with the name {0}")
@ValueSource(strings = {"Bahia", "Santos"})
public void shouldCreateTeam(String name) {
    Team team = Team.of(name);
    org.assertj.core.api.Assertions.assertThat(team)
            .isNotNull().extracting(Team::name)
            .isEqualTo(name);
}
```

The second source is the MethodSource, where we can provide more complex objects and create them programmatically. The method's return value uses Arguments, which is an interface from JUnit. The test below checks a Team with a player: given a player, it will get into the team.

```java
@ParameterizedTest
@MethodSource("players")
public void shouldCreateTeamWithAPlayer(Player player) {
    Assertions.assertNotNull(player);
    Team bahia = Team.of("Bahia");
    bahia.add(player);
    org.assertj.core.api.Assertions.assertThat(bahia.players())
            .hasSize(1)
            .map(Player::name)
            .contains(player.name());
}

static Collection<Arguments> players() {
    return List.of(Arguments.of(Player.of("Neymar", "Santos", 11)));
}
```

The last one we'll explore today is the ArgumentsSource, where you can have a class focused on providing these arguments. The third and final test will use it: we'll test the sum of scores in a soccer team.
The first step is to create a class that implements the ArgumentsProvider interface. This interface receives a context param through which you can access helpful information such as tags and display names. You could use it, for example, to check whether the tag is "database" and then take the source from the database. In our first test, we won't use it.

```java
public class PlayerProvider implements ArgumentsProvider {

    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext extensionContext) throws Exception {
        List<Player> players = new ArrayList<>();
        players.add(Player.of("Poliana", "Salvador", 20));
        players.add(Player.of("Otavio", "Salvador", 0));
        return Stream.of(Arguments.of(players));
    }
}
```

The next step is to use it as the source, which is pretty similar to the other sources, via the annotation:

```java
@ParameterizedTest
@ArgumentsSource(PlayerProvider.class)
public void shouldCreateTotalScore(List<Player> players) {
    Team team = Team.of("Leiria");
    players.forEach(team::add);
    int score = team.score();
    int playerScore = players.stream().mapToInt(Player::score)
            .sum();
    Assertions.assertEquals(playerScore, score);
}
```

That's it! The video below explores more of the capability of injecting parameters into tests. The three source types are just the beginning. I hope this inspires you to make your code more readable with parameterized tests.
Railsware is an engineer-led company with a vast portfolio of projects built for other companies, so when talking about Jira best practices for developers, we speak from experience.

## Why Do People Love Jira?

Jira is by no means perfect. It certainly has its downsides and drawbacks. For instance, it is a behemoth of a product and, as such, is pretty slow when it comes to updates or additions of new functionality. Some developers also say that Jira goes against certain agile principles because, when in the wrong hands, it can promote fixation on due dates rather than delivery of product value. Getting lost in layers and levels of several boards can, indeed, disconnect people by overcomplicating things. Still, it is among the preferred project management tools for software development teams. Why is that?

- Permissions: Teams, especially bigger ones, work with many different experts and stakeholders besides the core team itself, so setting up the right access to information is crucial.
- Roadmaps and epics: Jira is great for organizing your project on all levels. On the highest level, you have a roadmap with a timeline. Then, you have epics that group tasks by features or feature versions. Inside each epic, you create tickets for implementation.
- Customization: This is Jira's strongest point. You can customize virtually anything: fields for your Jira tickets; the UI of your tickets, boards, roadmaps, etc.; and notifications.
- Workflows: Each project may require its own workflow and set of statuses per ticket, e.g., some projects have a staging server and QA testing on it and some don't.
- Search is unrivalled (if you know JQL, Jira's SQL-like query language): Finding something that would have been lost to history in a different project management tool is a matter of knowing JQL. The ability to add labels using keywords makes the aforementioned search and analysis even simpler.
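As a taste of JQL, a query like the one below surfaces stale, unresolved bugs that a plain text search would likely miss (the project key "SHOP" and the label "checkout" are made up for illustration):

```
project = SHOP AND issuetype = Bug AND labels = checkout
  AND resolution = EMPTY AND updated <= -2w
ORDER BY priority DESC, updated ASC
```

Here `resolution = EMPTY` keeps only unresolved issues, and `updated <= -2w` restricts the result to issues untouched for at least two weeks, sorted with the most important first. A query like this can be saved as a filter and reused on boards and dashboards.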
- Automation: The ability to automate many actions is among the greatest and most underestimated strengths of Jira. You can create custom flows where tickets get temporary assignees (like the back and forth between development and QA), make an issue fall into certain columns on the board based on its content, move issues from "to do" to "in progress" when there's a related commit, or post the list of released tickets to Slack as part of release notes.
- Integrations and third-party apps: GitHub, Bitbucket, and Slack are among the most prominent Jira integrations, and for good reason. Creating a Jira ticket from a message, for example, is quite handy at times. The Atlassian Marketplace broadens your reach even further with thousands of add-ons and applications.
- Broad application: Jira is suitable for both iterative and non-iterative development processes, for IT and non-IT teams.

## Jira Best Practices

Let's dive into the nitty-gritty of Jira best practices, whether for multiple projects or a single one.

### Define Your Goals and Users

Jira, being as flexible as it is, can be used in a wide variety of ways. For instance, you can primarily rely on status checking throughout the duration of your sprint, or you can use it as a project management tool on a higher level (a tool for business people to keep tabs on the development process). Define your team and goals. Now that you have a clear understanding of why, let's talk about the "who." Who will be the primary Jira user? And will they be using it to:

- Track the progress on certain tickets to know where and when to contribute?
- Learn more about the project, using it as a guide?
- Track time for invoicing clients, track performance for internal, data-driven decision-making, or both?
- Collaborate, share, and spread knowledge across several teams involved in the development of the product?
The answers to the above questions should help you define the team and goals in the context of using Jira.

### Integrations, Third-Party APIs, and Plugins

Jira is a behemoth of a project management platform. And, like all behemoths, it is somewhat slow and clunky when it comes to moving forward. If there's some functionality you feel is missing from the app, don't shy away from the marketplace; there's probably a solution for your pain already out there. Our team, for instance, relies on a third-party tool to create a series of internal processes and enhance fruitful collaboration. You can use ScriptRunner to create automation that's a bit more intricate than what comes out of the box. Or you can use BigGantt to visualize the progress in a friendly drag-and-drop interface. Don't shy away from combining the tools you use into a single flow. An integration between Trello and Jira, for instance, can help several teams, like marketing and development, stay on the same page.

### Use Checklists in Tickets

Having a checklist integrated into your Jira issues can help foster a culture centered around structured and organized work, as well as transparency and clarity for everyone. Our Smart Checklist for Jira offers even more benefits:

- You have a plan: Oftentimes it's hard to start a feature implementation, and without a plan, you can go in circles for a long time.
- Mental peace: Working item by item is much calmer and more productive than dealing with the unknown.
- Visibility of your work: If everyone sees the checklist progress, you are all on the same page.
- Getting help: If your progress is visible, colleagues can give you advice on the plan itself and on the items that are challenging you.
- Prioritization: Once you have the item list, you can decide with your team what goes into v1 and what can easily be done later.
You can use checklists as templates for recurring processes: Definition of Done, Acceptance Criteria, onboarding, and service desk tickets are prime candidates for automation. Moreover, you can automatically add checklists to your Jira workflow based on certain triggers, such as the content of an issue or the workflow setup. To learn more, watch our YouTube video, "How to use Smart Checklist for Jira."

Less Is More

Information is undoubtedly the key to success. That said, in the case of a Jira issue, awareness is key. What we've noticed over our time experimenting with Jira is that adding information that is unnecessary or irrelevant introduces more confusion than clarity into the process. Note: We don't mean that Jira shouldn't be used for knowledge transfer. If some information (links to documentation, your internal processes, etc.) is critical to the completion of a task, share it inside the task; just use a bit of formatting to make it more readable. However, an age-old history of changes or one individual's perspective on the requirements is not needed. Stick to what is absolutely necessary for the successful completion of a task and elaborate on that. No more, no less.

Keep the Backlog and Requirements Healthy and Updated

Every project has a backlog: a list of ideas, implementation tickets, bugs, and enhancements to be addressed. Every project that does not keep its backlog well-maintained ends up in a pickle sooner rather than later. Some of our pro tips for maintaining a healthy backlog:

Gradually add requirements to the backlog: If nothing else, you'll have a point of reference at all times; moving them there immediately, however, may cause issues, as requirements can change before you are ready for implementation.
Keep all of the development team's work in a single backlog: Spreading yourself thin across several systems that track bugs, technical debt, UX enhancements, and requirements is a big no-no.
Set up a regular backlog grooming procedure: You'll get a base plan of future activities as a result. We'd like to point out that said plan needs to remain flexible enough to accommodate changes based on feedback and/or tickets from marketing, sales, and customer support.

Have a Product Roadmap in Jira

Jira is definitely not the go-to tool for designing a product roadmap, yet having one in your instance is a major boon because it makes the entire scope of work visible and actionable. Additional benefits of having a roadmap in Jira include:

It is easier to review the scope with your team at any time.
Prioritizing new work is simpler when you can clearly see the workload.
You can easily see dependencies when several teams are working on a project.

Use Projects as Templates

Setting up a new project can be tedious even if you've done it a million times before. This can be especially troublesome in companies that continuously deliver products with a similar approach to development, such as mobile games. Luckily, with the right combination of tools and add-ons, there's no need to do the same work yet another time. A combination of DeepClone and Smart Checklist will help you clone projects, issues, stories, or workflow conditions and use them as project templates.

Add Definition of Done as a Checklist to All of Your Jira Issues

Definition of Done is a pre-release checklist of activities that determines whether a feature is "releasable." In simpler words, it determines whether something is ready to be shipped to production. The best way to make this list accessible to everyone on the team is to put it inside the issues. You can use Smart Checklist to automate this process; however, there are certain rules of thumb you'll need to follow to master the process of designing a good DoD checklist:

Your objectives must be achievable. They must clearly define what you wish to deliver.
It's best if you keep the tasks measurable. This will make the process of estimating work much simpler.
Use plain language so everyone involved can easily understand the Definition of Done.
Make sure your criteria are testable so the QA team can verify they are met.

Sync With the Team After Completing a Sprint

We have a nice habit of running Agile Retrospective meetings here at Railsware. These meetings, also known as Retros, are an excellent opportunity for the team to get recognition for a job well done. They can also help you come up with improvements for the next sprint. We've found that the best way to run these meetings is to narrow the conversation to "goods" and "improves." This way, you will be able to discuss why the things that work are working for you, and you'll be able to optimize the rest.

Conclusion

If there's a product with something for everyone, within the context of a development team, it's probably Jira. Its level of customization, adaptability, and quality-of-life features makes it an excellent choice for teams willing to invest in developing a scalable and reliable process. And if anything is missing from the app, you can easily find it on the Atlassian Marketplace.
Microservices are distributed applications deployed in different environments; they may be developed in different programming languages, use different databases, and involve many internal and external communications. A microservices architecture therefore depends on multiple interdependent applications for its end-to-end functionality. This complexity requires a systematic testing strategy to ensure end-to-end (E2E) coverage for any given use case. In this blog, we will discuss some of the most widely adopted automation testing strategies for microservices, using the testing triangle approach.

Testing Triangle

The testing triangle is a modern way of testing microservices with a bottom-up approach, and it is part of the "shift-left" testing methodology. (Shift-left testing pushes testing toward the early stages of software development; by testing early and often, you can reduce the number of bugs and increase code quality.) The goal of stacking multiple layers in the test pyramid for microservices is to identify different types of issues at the earliest possible testing level, so that in the end you have very few production issues. Each type of testing focuses on a different layer of the overall software system and verifies the expected results. For a distributed microservices app, the tests can be organized into the following layers using a bottom-up approach. The testing triangle is based on these five principles:

1. Unit Testing

Unit testing is the starting point: level 1 white-box testing in the bottom-up approach. It tests a small unit of source code functionality within a microservice, verifying the behavior of methods or functions by stubbing and mocking dependent modules and test data.
Application developers write unit test cases for small, independent units of code (functions/methods), using different test data and checking the expected output independently, without impacting other parts of the code. It's a vital part of the shift-left approach, where issues are identified at the earliest phase, at the method level of a microservice. This testing should be done thoroughly, with code coverage above ~90%; that reduces the chances of potential bugs surfacing in later phases.

2. Component Testing

Component testing is level 2 of the testing triangle and follows unit testing. It aims to test an entire microservice's functionality and APIs independently, in isolation, for each individual microservice. By writing component tests for the highly granular microservices layer, the API behavior is driven through tests from the client or consumer perspective. Component tests exercise the interaction between a microservice's APIs/services and its database, messaging queues, and external and third-party outbound services, all as one unit, while still covering only a small part of the entire system. In component testing, dependent microservices and database responses are mocked or stubbed, and all of the microservice's APIs are tested with multiple sets of test data.

3. Contract Testing

Contract testing, the level 3 approach, verifies the agreed contracts between different domain-driven microservices. These contracts are defined before the microservices are developed, in the API/interface design, specifying what the response should be for a given client request or query. If anything changes, the contract has to be revisited and revised. For example, if new feature changes are deployed, they must be exposed through a separately versioned /v2 API request, and we need to make sure that the older /v1 version still supports client requests for backward compatibility. Contract testing covers a small part of the integration, like:

The connection between a microservice and its databases.
API calls between two microservices.

4. Integration Testing

Integration testing is level 4; it verifies end-to-end functionality across related microservices and is the next level up from contract testing. According to Martin Fowler, an integration test exercises communication paths through the subsystem to check for any incorrect assumptions each module has about how to interact with its peers. It tests a bigger part of the system, mostly the microservices exposing their services via APIs. For example, login functionality typically involves interactions among multiple microservices. Integration testing covers the interactions between microservice APIs and event-driven hub components for a given functionality.

5. End-to-End (E2E) Testing

E2E testing is the final, level 5 approach in the testing triangle: end-to-end usability black-box testing. It verifies that the entire system as a whole meets business functional goals from a user's, customer's, or client's perspective. E2E testing is performed on the external front end (user interface) or on API client calls with the help of REST clients. It is performed across distributed microservices and SPA (single-page app)/MFE (micro frontend) applications, covering the UI, backend microservices, databases, and their internal and external components.

Challenges of Microservice Testing

Many organizations have already adopted digital transformation built on microservice architecture, yet IT organizations find it challenging to test microservices applications because of their distributed nature. We will discuss the most important challenges and the solutions offered by industry experts:

Multiple Agile Microservices Teams

Communication between multiple agile microservices dev and test teams is time-consuming and difficult. Sometimes teams work in silos, not sharing enough technical and non-technical details, which causes communication gaps.
Solution: The testing triangle's integration and E2E testing can help address this challenge by testing dependent microservices developed by different dev teams.

Microservice Integration Testing Challenges

Testing of all microservices does not happen in parallel. End-to-end integration testing of interdependent microservices is a nightmare; in reality, some of these microservices might not be ready for testing in a test environment. Every microservice has its own security mechanism and test data, and it's a daunting task to handle failover of other microservices when they depend on each other.

Solution: The testing triangle's integration testing helps here by testing the APIs of dependent microservices.

Business Requirement and Design Change Challenges

Frequent changes to business and technical requirements in agile development lead to increased complexity and testing effort, which increases development and testing costs.

Solution: The testing triangle provides an effective, systematic, step-by-step process that reduces complexity, operational cost, and testing effort through full test automation.

Test Database Challenges

Databases come in different types (SQL and NoSQL, such as Redis, MongoDB, Cassandra, etc.) with different structures, and these structured and unstructured data types can be combined to meet particular business needs. Every database holds a different kind of test data in distributed microservices development, and it's daunting to maintain different kinds of test data for different databases.

Solution: The testing triangle provides automated BDD (behavior-driven development), where we can pass dynamic test data, and TDM (test data management), which solves test database challenges by managing different kinds and formats of test data.

Conclusion

The testing triangle provides great techniques for solving the challenges associated with microservices.
We need to choose these systematic testing techniques with an eye toward lower complexity, faster testing, time to market, testing cost, and risk mitigation before releasing to production. This testing strategy is required for microservices to avoid real production issues. It ensures that test cases cover functional and non-functional E2E testing for the UI, backend, and databases across PROD and non-PROD staging environments for reliable product releases. We have seen that microservices introduce many testing challenges, which can be solved with the step-by-step, bottom-up approach provided by the testing triangle. It's a modern, cloud-native strategy for testing microservices in the cloud, finding and fixing the maximum number of bugs during the lower testing phases before reaching the highest level (the top of the triangle), which is E2E testing.

Tip: Many IT organizations have started following the shift-left culture and testing approach, especially in situations where identifying and fixing bugs early is important.
Justin Albano
Software Engineer,
IBM
Thomas Hansen
CEO,
Aista, Ltd
Soumyajit Basu
Senior Software QA Engineer,
Encora
Vitaly Prus
Head of software testing department,
a1qa