The final step in the SDLC, and arguably the most crucial, is the testing, deployment, and maintenance of development environments and applications. DZone's category for these SDLC stages serves as the pinnacle of application planning, design, and coding. The Zones in this category offer invaluable insights to help developers test, observe, deliver, deploy, and maintain their development and production environments.
In the SDLC, deployment is the final lever that must be pulled to make an application or system ready for use. Whether it's a bug fix or new release, the deployment phase is the culminating event to see how something works in production. This Zone covers resources on all developers’ deployment necessities, including configuration management, pull requests, version control, package managers, and more.
The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
A developer's work is never truly finished once a feature or change is deployed. There is always a need for constant maintenance to ensure that a product or application continues to run as it should and is configured to scale. This Zone focuses on all your maintenance must-haves — from ensuring that your infrastructure is set up to manage various loads and improving software and data quality to tackling incident management, quality assurance, and more.
Modern systems span numerous architectures and technologies and are becoming exponentially more modular, dynamic, and distributed in nature. These complexities also pose new challenges for developers and SRE teams that are charged with ensuring the availability, reliability, and successful performance of their systems and infrastructure. Here, you will find resources about the tools, skills, and practices to implement for a strategic, holistic approach to system-wide observability and application monitoring.
The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
Kubernetes in the Enterprise
In 2022, Kubernetes has become a central component for containerized applications. And it is nowhere near its peak. In fact, based on our research, 94 percent of survey respondents believe that Kubernetes will be a bigger part of their system design over the next two to three years. With Kubernetes becoming more entrenched in systems, what do adoption and deployment methods look like compared to previous years? DZone's Kubernetes in the Enterprise Trend Report provides insights into how developers are leveraging Kubernetes in their organizations. It focuses on the evolution of Kubernetes beyond container orchestration, advancements in Kubernetes observability, Kubernetes in AI and ML, and more. Our goal for this Trend Report is to help inspire developers to leverage Kubernetes in their own organizations.
Getting Started With OpenTelemetry
Shift Left and Shift Right are two terms commonly used in the DevOps world to describe approaches for improving software quality and delivery. Both are based on the idea of identifying defects and issues as early as possible in the development process so that teams can address them quickly and efficiently and deliver software that meets user expectations. Shift Left focuses on early testing and defect prevention, while Shift Right emphasizes testing and monitoring in production environments. In this blog, we will discuss the differences between these two approaches.

The Shift-Left Approach

Shift Left in DevOps refers to the practice of moving testing and quality assurance activities earlier in the software development lifecycle. Testing is performed as early as possible in the development process, ideally starting during the requirements-gathering phase. Shifting left allows teams to identify and fix defects earlier in the process, which reduces the cost and time required to fix them later in the development cycle. The goal of Shift Left is to ensure that software is delivered with higher quality and at a faster pace.

Here are the key aspects of the Shift-Left approach in DevOps:

Early Involvement: Testing and quality assurance teams are involved early in the development process, so testers and developers work together from the beginning rather than waiting until the end.
Automated Testing: Automation plays a key role in the Shift-Left approach. Test automation tools are used to automate the testing process and ensure that defects are detected early.
Collaboration: Developers and testers work together to ensure that quality is built into the product from the beginning.
Continuous Feedback: Defects are identified and fixed as soon as they are discovered, rather than waiting until the end of the SDLC.
Continuous Improvement: By identifying defects early, the development team can improve the quality of the software and reduce the risk of defects later in the SDLC.

Here are some examples of Shift-Left practices in DevOps:

Test-Driven Development (TDD): Writing automated tests before writing code to identify defects early in the development process.
Code Reviews: Conducting peer reviews of code changes to identify and address defects and improve code quality.
Continuous Integration (CI): Automating the build and testing of code changes to catch bugs early and ensure that the software is always in a deployable state.
Static Code Analysis: Using automated tools to analyze code for potential defects, vulnerabilities, and performance issues.

The Shift-Right Approach

Shift Right in DevOps, on the other hand, refers to the practice of monitoring and testing software in production environments. This approach uses feedback from production to improve the software development process. By monitoring the behavior of the software in production, teams can identify and resolve issues quickly and gain insights into how the software is used by end users.
The goal of Shift Right is to ensure that software is reliable, scalable, and provides a good user experience. This approach involves monitoring production systems, collecting feedback from users, and using that feedback to identify areas for improvement.

Here are the key aspects of the Shift-Right approach in DevOps:

Continuous Monitoring: Continuous monitoring of the production environment helps to identify issues in real time. This includes monitoring system performance, resource utilization, and user behavior.
Real-World Feedback: Real-world feedback from users is critical to identifying issues that may not have been detected during development and testing. This feedback can be collected through user surveys, social media, and other channels.
Root Cause Analysis: When issues are identified, root cause analysis is performed to determine the underlying cause. This involves analyzing logs, system metrics, and other data to understand what went wrong.
Continuous Improvement: Once the root cause has been identified, the DevOps team can work to improve the system. This may involve deploying patches or updates, modifying configurations, or making other changes to the system.

Here are some examples of the Shift-Right approach:

Monitoring and Alerting: Setting up monitoring tools to collect data on the performance and behavior of the software in production environments, along with alerts that notify the team when issues arise.
A/B Testing: Deploying multiple versions of the software and testing them with a subset of users to determine which version performs better in terms of user engagement or other metrics.
Production Testing: Testing the software in production environments to identify defects that may only occur in real-world conditions.
Chaos Engineering: Introducing controlled failures or disruptions into the production environment to test the resilience of the software.

Both the Shift-Left and Shift-Right approaches are important in DevOps, and they are often used together to create a continuous feedback loop that allows teams to improve software delivery. The key is to find the right balance between the two, which comes down to choosing the right DevOps platform and analyzing business needs.

Understanding the Differences Between Shift Left and Shift Right

Shift Left and Shift Right focus on different stages of the software development and deployment lifecycle. Here are some of the key differences between the two approaches:

Focus: Shift Left focuses on testing and quality assurance activities performed early in the software development lifecycle, while Shift Right focuses on monitoring and testing activities that occur in production environments.
Goals: The goal of Shift Left is to identify and fix defects early in the development process, which helps to ensure that software is delivered with higher quality and at a faster pace. The goal of Shift Right is to ensure that software is secure, reliable, scalable, and provides a good user experience.
Activities: Shift Left activities include unit testing, integration testing, and functional testing, as well as automated testing and continuous integration. Shift Right activities include monitoring, logging, incident response, and user feedback analysis.
Timing: Shift Left activities typically occur before the software is deployed, while Shift Right activities occur after deployment.
Risks: The risks associated with Shift Left relate to the possibility of missing defects that are only discovered in production environments. The risks associated with Shift Right relate to the possibility of introducing changes that cause production incidents or disrupt the user experience.

Conclusion

Both the Shift-Left and Shift-Right approaches are critical for the success of microservices. Hopefully, after reading this article, you have a clear idea of what shifting left and shifting right mean. By using Shift Left and Shift Right together, developers can ensure that their microservices are reliable, scalable, and efficient, and that they are adopted with security and compliance in mind.
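To make the Shift-Right examples listed earlier (A/B testing in particular) a little more concrete, here is a minimal, hypothetical Java sketch of the kind of deterministic bucketing logic a team might put behind a feature flag so production metrics can be compared per variant. The class and method names are illustrative, not taken from any specific library.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/**
 * Minimal A/B bucketing sketch: deterministically assigns a user to variant
 * "A" or "B" so that production metrics (conversion, latency, error rate)
 * can be compared per variant. Purely illustrative.
 */
public class AbBucketer {

    private final int percentInVariantB; // e.g., 10 means 10% of users see B

    public AbBucketer(int percentInVariantB) {
        this.percentInVariantB = percentInVariantB;
    }

    /** The same user always lands in the same bucket, so results stay stable. */
    public String variantFor(String userId) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes(StandardCharsets.UTF_8));
        long bucket = crc.getValue() % 100; // 0..99
        return bucket < percentInVariantB ? "B" : "A";
    }

    public static void main(String[] args) {
        AbBucketer bucketer = new AbBucketer(10);
        System.out.println("user-42 sees variant " + bucketer.variantFor("user-42"));
    }
}
```

Because the assignment is a pure function of the user ID, each user consistently sees the same variant, which keeps the production metrics collected for each variant comparable.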
Software development has come a long way since the early days of programming. Developing high-quality software has become increasingly important as businesses rely more heavily on technology to drive their operations. One method that has gained traction in recent years is Test-Driven Development (TDD). In this article, we'll explore what TDD is, why it's important, and how to implement it in your software development process.

What Is Test-Driven Development?

Test-Driven Development is a software development technique that emphasizes writing automated tests before writing the actual code. The process starts with writing a test case, then writing the code that satisfies that test case. Once the code has been written, the test is run to ensure that it passes. If the test fails, the code is revised until the test passes. This process is repeated until all the tests pass and the software is considered complete.

Why Is Test-Driven Development Important?

Test-Driven Development offers several benefits to software development teams. Here are a few reasons why TDD is important:

Reduced Bug Count: Writing tests before writing code can help reduce the number of bugs in your code. By writing tests first, you can identify potential problems before they occur, making them easier to fix.
Faster Development Time: Although writing tests takes some time upfront, it can ultimately reduce the time it takes to develop software. By catching errors early in the process, you save time that would otherwise be spent fixing bugs later on.
Better Collaboration: TDD encourages collaboration between developers, testers, and stakeholders. By focusing on writing tests before writing code, everyone involved in the project has a clear understanding of what the software is supposed to do and can provide feedback on the functionality.
Increased Confidence: By having a suite of automated tests that run every time code is changed, developers can have confidence that their changes did not break any existing functionality.

Implementing Test-Driven Development

Implementing Test-Driven Development can seem daunting at first, but it's worth the effort. Here are some steps you can take to get started:

Set up a testing framework: Before you can write tests, you need to set up a testing framework. There are many testing frameworks available for various programming languages, such as JUnit for Java and NUnit for .NET.
Write a failing test: Start by writing a test that you expect to fail. This test should be based on the requirements for the software. The failing test provides a clear objective for what the code should do.
Write code to pass the test: Once you have a failing test, write the code to make the test pass. Keep the code as simple as possible to start.
Refactor the code: After the test has passed, refactor the code to improve its design, readability, and maintainability.
Repeat: Repeat this process for each feature or requirement of the software.

Conclusion

Test-Driven Development is a powerful technique that can help software development teams create high-quality software. By writing tests before writing code, teams can reduce bugs, speed up development, increase collaboration, and build confidence in their software. While implementing TDD takes some time upfront, the benefits are well worth the investment.
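As a quick illustration of the red-green-refactor loop described above, here is a minimal sketch assuming JUnit 5; the ShoppingCart class and its API are hypothetical, invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 2: failing tests written first, describing the behavior we want.
class ShoppingCartTest {

    @Test
    void emptyCartHasZeroTotal() {
        assertEquals(0, new ShoppingCart().total());
    }

    @Test
    void totalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(250); // prices in cents to avoid floating-point issues
        cart.add(199);
        assertEquals(449, cart.total());
    }
}

// Step 3: the simplest code that makes the tests pass; refactor afterwards.
class ShoppingCart {
    private int total = 0;

    void add(int priceInCents) {
        total += priceInCents;
    }

    int total() {
        return total;
    }
}
```

The test class is written first and fails (red); the minimal ShoppingCart implementation then makes it pass (green), after which the code can be refactored with the tests acting as a safety net.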
If you’re working with smart contracts—or even just exploring them—you probably already know that smart contract security is important. Smart contracts are immutable once deployed and often involve significant amounts of money, so writing safe and reliable code before deployment should be top of mind. And as the adoption of blockchain accelerates, ensuring the security of smart contracts becomes even more important. One of the best additions to your smart contract audit is fuzzing, a dynamic testing technique that exposes vulnerabilities by generating and injecting random inputs into your smart contracts during testing. In this article, we’ll explore how to use fuzzing to effectively audit a smart contract. Specifically, we’ll look at ConsenSys Diligence Fuzzing—a new fuzzing as a service (FaaS) offering. We’ll delve into the technical aspects and show some code examples.

What Is Fuzzing?

Fuzzing is a dynamic testing technique where random (or semi-random) inputs called “fuzz” are generated and injected into code. Fuzzing can help reveal bugs and vulnerabilities that weren’t caught by traditional testing methods. Manual (unit) testing requires you to figure out what functionality to test, what inputs to use, and what the expected output should be. It’s time-consuming, difficult, and in the end, it’s still easy to miss scenarios. Fuzzing (or fuzz testing), on the other hand, is an automated testing process that sends random data into an application to test its security. A fuzzer can help you understand how a program responds to unpredictable inputs. Fuzzing has been around for a while; Defensics and Burp Suite are some examples in the traditional development world, and there are also several Web3/blockchain fuzzing tools available, such as Echidna and Foundry. However, Diligence Fuzzing is fuzzing as a service, which makes everything a little simpler to implement, and that, in the end, means better audits and more secure contracts. So let’s look into it in more detail.

ConsenSys Diligence Fuzzing

Diligence Fuzzing (by ConsenSys, which is also behind ecosystem standards such as MetaMask and Infura) is a fuzzer built for Web3 smart contracts. It:

- Works from a formal spec that describes the expected behavior of your smart contract
- Generates transaction sequences that might be able to violate your assertions
- Uses advanced analysis to find inputs that cover a maximum amount of your smart contract code
- Validates the business logic of the app and checks for functional correctness
- Provides you with any findings

And all as a service with minimal work from you! To use Diligence Fuzzing, follow these three steps: first, define your smart contract specs using Scribble; next, submit the code to Diligence to run your fuzzing; finally, with the audit report in hand, fix and improve your code!

Fuzzing in Action

So let’s test it out and see it in action. We will use the Fuzzing CLI and Scribble to fuzz-test a sample smart contract.

Step 1: Sign Up

First, sign up for access to Diligence Fuzzing.

Step 2: Install Dependencies

Next, install the Fuzzing CLI and Scribble. ConsenSys recommends that you have the latest versions of Node and Python. Be sure you are using at least Python 3.6 and Node 16. Then:

```shell
pip3 install diligence-fuzzing
npm i -g eth-scribble ganache truffle
```

Note: This requires Linux, macOS, or the Windows Subsystem for Linux. Windows PowerShell has some complexities the team is working on.
You can always use a GitHub Codespace (which creates a VS Code-like interface with a clean, bootstrapped build) and install the above prerequisites via the command line.

Step 3: Get an API Key

Now you need to generate an API key to use the CLI. Visit the API Keys page and click on Create new API Key.

Step 4: Set Up Fuzzing Configuration

Now we need a smart contract to fuzz! As part of their own tutorial, ConsenSys provides a sample smart contract to use. Let’s just use that one.

```shell
git clone https://github.com/ConsenSys/scribble-exercise-1.git
```

Open the .fuzz.yml file from the project and add in your API key for the “key” property at around line 25.

```yaml
# .fuzz_token.yml
fuzz:
  # Tell the CLI where to find the compiled contracts and compilation artifacts
  build_directory: build/contracts

  # The following address is going to be the main target for the fuzzing campaign
  deployed_contract_address: "0xe78A0F7E598Cc8b0Bb87894B0F60dD2a88d6a8Ab"

  # We'll do fuzzing with 2 cores
  number_of_cores: 2

  # Run the campaign for just 3 minutes.
  time_limit: 3m

  # Put the campaign in the Scribble Exercise 1 project
  project: "Scribble Exercise 1"

  # When the campaign is created it'll get a name <prefix>_<random_characters>
  campaign_name_prefix: "ERC20 campaign"

  # Point to your ganache node which holds the seed
  rpc_url: "http://localhost:8545"

  key: "INSERT YOUR API KEY HERE"

  # This is the contract that the campaign will show coverage for / map issues to, etc.
  # It's a list of all the relevant contracts (don't worry about dependencies, we'll get those automatically)
  targets:
    - "contracts/vulnerableERC20.sol"
```

Note: Be sure to stop your fuzzing campaigns or set a time limit, or they might run for an unexpectedly long time. You’ll note from the above file that we set the time limit for our campaigns to three minutes.

Step 5: Define Fuzzing Properties

Notice also that we have our smart contract: contracts/vulnerableERC20.sol. Next, we need to define the properties we want the fuzzer to check in the smart contract. We’ll use Scribble for this step. Scribble is a specification language that translates high-level specs into Solidity code. It allows you to annotate your contracts with properties and then transforms those annotations into concrete assertions that can be used by testing tools (such as Diligence Fuzzing). Pretty cool! We will add the highlighted code segments to our contract:

```solidity
pragma solidity ^0.6.0;

/// #invariant "balances are in sync" unchecked_sum(_balances) == _totalSupply;
contract VulnerableToken {
```

This annotation will ensure that our total supply and balances are in sync.

Step 6: Run

Now we fuzz! Simply run this command:

```shell
make fuzz
```

Step 7: Evaluate Results

After the fuzzer is done (it might take a minute or two to start up), we can get our results. We can either use the link the fuzzer gives us, or we can go to our dashboard. Looking at properties, we can see what is being fuzzed and any violations. And guess what? We found a bug! Click on the line location button to see the offending code. For details, click Show transaction details. We can see the fuzzer called “transfer.” Upon closer examination, we can now see what caused our bug: the transfer_to and origin arguments are the same. There must be a security vulnerability when someone sends tokens to themselves. Let’s look in the source code to see what’s wrong.
```solidity
function transfer(address _to, uint256 _value) external returns (bool) {
    address from = msg.sender;

    require(_value <= _balances[from]);

    uint256 newBalanceFrom = _balances[from] - _value;
    uint256 newBalanceTo = _balances[_to] + _value;

    _balances[from] = newBalanceFrom;
    _balances[_to] = newBalanceTo;

    emit Transfer(msg.sender, _to, _value);

    return true;
}
```

Yep! We can see that when the sender and recipient are the same, lines 30 and 31 will get a little weird—one is changing the value of the ‘from’ account, and one is changing the value of the ‘to’ account. The code assumes they are different accounts. But since they are the same account, by the time we get to line 31, the value we have is not the value we expect. It’s already been changed by the previous line. We can fix this by replacing the two-step calculation with in-place balance updates (the highlighted change in the original tutorial):

```solidity
function transfer(address _to, uint256 _value) external returns (bool) {
    address from = msg.sender;

    require(_value <= _balances[from]);

    _balances[from] -= _value;
    _balances[_to] += _value;

    emit Transfer(msg.sender, _to, _value);

    return true;
}
```

Here are several other technical details to be aware of:

The seed.js script does some setup work for you here. It deploys the contract to a test node. It can also do things like mint tokens, open pools, etc. It gives the fuzzer the right state to start.
The yml file has many config parameters that you can explore, notably the contract address to fuzz, the API key, the time_limit for the fuzzing, and some others.
The CLI ships with an auto-config generator. Run fuzz generate-config to get some helpful Q&A for generating the config.

Smart Contract Audits—Use Fuzzing!

Fuzzing, and Diligence Fuzzing as a service in particular, is a powerful tool for testing and auditing Ethereum blockchain smart contracts. Whether you are working in decentralized finance (DeFi), NFTs, or just starting in smart contract development, it can take you to the next level of identifying and fixing vulnerabilities in your smart contracts. Along with manual reviews, unit tests, manual testing, penetration testing, code reviews, and more, fuzzing should be a key part of your smart contract security audit process for a more secure and robust codebase. Have a really great day!
TestOps is an emerging approach to software testing that combines the principles of DevOps with testing practices. TestOps aims to improve the efficiency and effectiveness of testing by incorporating it earlier in the software development lifecycle and automating as much of the testing process as possible. TestOps teams typically work in parallel with development teams, focusing on ensuring that testing is integrated throughout the development process. This includes testing early and often, using automation to speed up the testing process, and creating a cycle of continuous testing and improvement. TestOps also works closely with operations teams to ensure that the software is deployed in a stable and secure environment. In short, TestOps is an approach to software testing that emphasizes collaboration between the testing and operations teams to improve the overall efficiency and quality of the software development and delivery processes.

Place of TestOps in Software Development

The Need for TestOps

Adopting DevOps on its own comes with several challenges that TestOps helps address:

Initial Investment: Adopting DevOps requires an initial investment of time, resources, and money. This can be a significant barrier to adoption for some organizations, particularly those with limited budgets or resources.
Learning Curve: DevOps requires a significant cultural shift in the way that teams work together, and it can take time to learn new processes, tools, and techniques. This can be challenging for some organizations, particularly those with entrenched processes and cultures.
Security Risks: DevOps practices can increase the risk of security vulnerabilities if security measures are not properly integrated into the development process. This can be particularly problematic in industries with strict security requirements, such as finance and healthcare.
Automation Dependencies: DevOps relies heavily on automation, which can create dependencies on tools and technologies that may be difficult to maintain or update. This can lead to challenges in keeping up with new technologies or changing requirements.
Cultural Resistance: DevOps requires a collaborative and cross-functional culture, which may be difficult to achieve in organizations with siloed teams or where there is resistance to change.

Advantages of TestOps

Continuous Testing: TestOps enables continuous testing, which allows organizations to detect defects early in the development process. This reduces the cost and effort required to fix defects and ensures that software applications can be delivered with high quality.
Improved Quality: By integrating testing processes into the DevOps pipeline, TestOps ensures that quality is built into software applications from the outset. This reduces the risk of defects and improves the overall quality of the software.
Greater Efficiency: TestOps enables the automation of testing processes, which can help organizations reduce the time and effort required to test software applications. This can also reduce the costs associated with testing.
Increased Collaboration: TestOps promotes collaboration between development and testing teams, which can help identify and resolve issues earlier in the development process. This can lead to faster feedback and better communication between teams.
Faster Time-to-Market: TestOps allows the automation of testing processes, which reduces the time required to test software applications. This enables organizations to release software applications faster, which can give them a competitive advantage in the marketplace.
Scope of TestOps in the Future

The scope of TestOps in the future is significant as software development continues to become more complex and fast-paced. TestOps combines software testing with DevOps practices, and it is becoming increasingly important for organizations to implement TestOps so they can deliver high-quality software applications to market quickly. Some of the trends that are likely to shape the future of TestOps include:

Increasing Adoption of Agile and DevOps Methodologies: Agile and DevOps methodologies are becoming increasingly popular among organizations that want to deliver software applications faster and more efficiently. TestOps is a natural extension of these methodologies and is likely to become an essential component of Agile and DevOps practices in the future.
Greater Focus on Automation: Automation is a critical aspect of TestOps and will likely become even more important in the future. The use of automation tools and techniques can help organizations reduce the time and effort required to test software applications while also improving the accuracy and consistency of testing.
The Growing Importance of Cloud Computing: Cloud computing is becoming increasingly popular among organizations that want to reduce their IT infrastructure costs and improve scalability. TestOps can be implemented in cloud environments, and it is likely to become even more important as more organizations move their software applications to the cloud.

Overall, the scope of TestOps in the future is vast, and it is likely to become an essential component of software development practices in the coming years.

Conclusion

Is TestOps the future of software testing? Obviously, yes. With the increasing adoption of Agile and DevOps methodologies, there is a growing need for software testing processes that can keep pace with rapid development and deployment cycles. TestOps can help organizations achieve this by integrating testing into the software development lifecycle and ensuring that testing is a continuous and automated process. Furthermore, as more and more software is deployed in cloud environments, TestOps will become even more important in ensuring that applications are secure, scalable, and reliable. In summary, TestOps is a key trend in software testing that is likely to continue to grow as organizations look for ways to improve the efficiency and quality of their software development and delivery processes.
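One small, concrete way teams often wire continuous testing into a pipeline is by tagging tests so that different slices of the suite run at different stages: fast checks on every commit, smoke checks right after a deployment. The sketch below assumes JUnit 5; the test bodies are placeholders for illustration only.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

/**
 * Illustrative only: tagging lets a CI/CD pipeline run different slices of
 * the suite at different stages, which is the kind of continuous, automated
 * testing that TestOps promotes.
 */
class PipelineTaggingExample {

    @Test
    @Tag("fast") // runs on every commit
    void discountIsNeverNegative() {
        assertTrue(Math.max(0, -5) >= 0);
    }

    @Test
    @Tag("smoke") // runs right after a deployment
    void deployedServiceRespondsToPing() {
        // A hypothetical post-deploy check; a real one would call the
        // service's health endpoint and assert on the response.
        boolean serviceIsUp = true; // placeholder for a real health probe
        assertTrue(serviceIsUp);
    }
}
```

Build tools can then select tests by tag (for example, JUnit 5 tags can be filtered with Maven Surefire's groups and excludedGroups settings), so the same suite serves both the pre-merge and post-deploy stages of the pipeline.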
With Microsoft Hyper-V gaining market share and coming of age, VMware administrators increasingly must administer Hyper-V alongside vSphere in their environments. There are certainly similarities in administering the various hypervisors, including VMware and Hyper-V, but there are subtle differences as well. Often, out of habit, we apply what we know to things that are new to us. While certain methodologies and best practices extend past the boundaries of VMware vSphere and apply to Hyper-V as well, there are differences in the administration and management of Hyper-V that VMware administrators will want to note and understand. These differences can also affect backup processes. Let’s take a look at some of the key differences between Hyper-V and VMware and how they can affect your backup methodologies.

VMware vCenter Server vs. System Center Virtual Machine Manager (SCVMM)

VMware administrators are familiar with the well-known VMware vCenter Server – a centralized management and administration tool for creating, configuring, and interacting with all aspects of the vSphere environment. From vCenter, administrators can configure and control ESXi hosts, datacenters, clusters, traditional storage, software-defined storage, traditional networking, software-defined networking, and all other aspects of the vSphere architecture. In fact, vCenter Server is a necessary component for unlocking most of the enterprise-level features and functionality of VMware vSphere. As a VMware administrator, you will typically connect your data protection solution to VMware vCenter Server as the central management pane to back up virtual machines residing on managed ESXi hosts. This provides a central login for managing and controlling the resources backed up by vSphere data protection solutions. Moreover, you can use the HTML5-based vSphere Web Client to manage vSphere functions from any browser.

In Microsoft Hyper-V, the equivalent solution for managing hosts and clusters is System Center Virtual Machine Manager (SCVMM). However, with Hyper-V, you can perform many of the “enterprise” level tasks, such as managing a Hyper-V cluster, setting up high availability, and performing live migration, without using SCVMM. You can use the Failover Cluster Management console to manage your cluster resources, including setting up and configuring Cluster Shared Volumes (CSVs). Understanding the management interfaces and the differences between VMware vSphere and Microsoft Hyper-V is key to understanding the point of administration used to interface with data protection solutions. Typically, in either a VMware vSphere or Microsoft Hyper-V environment, you want to back up resources at the “host” level, which means you are backing up virtual machines centrally rather than from within the guest operating system. Knowing the respective management interfaces ensures effective and efficient VMware vSphere and Hyper-V backup.

vSphere Cluster vs. Hyper-V Cluster

With vCenter Server in place, creating a VMware vSphere ESXi cluster is a very quick and simple process: you simply add the hosts to the cluster. VMware “clustering” is purely for virtualization purposes. Hyper-V clustering, by contrast, is built on top of Windows Failover Clustering, a more general technology that is applied in a number of different use cases, including file servers and SQL clusters, as well as Hyper-V.
This more general underlying clustering technology makes configuring a Hyper-V virtualization cluster more complex. However, the task can be accomplished relatively quickly if you use either PowerShell or the cluster creation wizard in Failover Cluster Manager. There are many data protection solutions available today that can easily interact with vCenter Server and the clusters managed therein. However, fewer data protection solutions are able to integrate just as seamlessly with a Hyper-V cluster configuration.

Understanding VMware VMFS and Hyper-V Cluster Shared Volumes

VMware vSphere utilizes the Virtual Machine File System (VMFS) – VMware’s clustered file system that was purpose-built from the ground up as a virtualization file system. With each release of vSphere, VMFS has been tweaked, and its functionality and capabilities have been extended. With vSphere 6.5, VMware introduced VMFS 6.0, featuring support for 4K Native Devices in 512e mode and automatic “unmapping” functionality to reclaim unused blocks. Administrators need to understand the capabilities of each type of virtualization file system. Not all data protection solutions support Microsoft Hyper-V Cluster Shared Volumes, so it is important to understand the requirements of today’s Hyper-V environments and the compatibility requirements of CSVs.

VMware Uses Snapshots, and Hyper-V Uses Checkpoints

Both hypervisors have mechanisms that enable them to quickly save the state and data of a virtual machine at a given point in time. The term “snapshot” is by far the more popularized word for this functionality and was coined by VMware. A snapshot operation in VMware creates the following files for the saved state and data:

.vmdk – The flat .vmdk file contains the raw data of the base disk.
-delta.vmdk – The delta disk is represented in the format .00000x.vmdk. This is the differencing disk; it contains the difference between the current data of the virtual machine disk and the data at the time of the previous snapshot.
.vmsd – This database file contains all the pertinent snapshot information.
.vmsn – This file contains the memory information of the virtual machine and its current state at the point in time of the snapshot.

Hyper-V uses “checkpoints” as its terminology for saving a point-in-time state of a virtual machine. Let’s look at the architecture of a checkpoint. A Snapshots folder is created that may contain the following files:

VMCX – This is the new binary format for the configuration file introduced in Windows Server 2016. It replaces the XML file found in 2012 R2 and earlier.
VMRS – This is the state file, which contains information about the state of the virtual machine.
AVHDX – This is the differencing disk that is created. It records the delta changes made after the checkpoint is created.

As a VMware administrator, you should be aware that Microsoft introduced “production” checkpoints with Windows Server 2016. These interact with VSS (the Volume Shadow Copy Service) to perform checkpoints that the guest operating system is aware of. These types of checkpoints function much like backup operations performed by data protection solutions. Importantly, Microsoft allows these “production” checkpoints to be run in production environments. This is significant because, before Windows Server 2016, this was not supported, and it is still not supported with VMware snapshots.
VMware Changed Block Tracking vs. Hyper-V Resilient Change Tracking

With the release of ESX 4.0 back in 2009, VMware introduced a feature called Changed Block Tracking (CBT) that dramatically increases backup efficiency. Using this technology, data protection solutions are able to copy only the blocks that have changed since the last backup iteration. This method works for every backup iteration following an initial full backup of the virtual machine. You can efficiently back up only the changes, at the block level, instead of taking full backups of a virtual machine every time, which is what generally happens with traditional legacy backup solutions. If you are a VMware administrator shifting to administering Microsoft Hyper-V, you should know that Microsoft’s equivalent offering, called Resilient Change Tracking (RCT), was only introduced with Windows Server 2016. When you back up with Hyper-V’s Resilient Change Tracking, the following files will be created:

The Resilient Change Tracking (.RCT) file – a detailed representation of changed blocks on the disk (though less detailed than the mapping kept in memory). It is written in write-back or cached mode, which means it is used during normal virtual machine operations such as migrations, startups, shutdowns, etc.
The Modified Region Table (.MRT) file – a less detailed file than the .RCT file; however, it records all the changes on the disk. In the event of an unexpected power-off, crash, or other failure, the MRT file is used to reconstruct the changed blocks.

Make sure your chosen data protection solution can take advantage of the latest advancements in Hyper-V’s implementation of change tracking technology, known as Resilient Change Tracking. This will ensure the quickest and most efficient Hyper-V backup iterations.

VMware Uses VMware Tools vs. Hyper-V Uses Integration Services

Both VMware and Hyper-V make use of components installed in the guest operating system to enable tighter integration between the hypervisor and the guest operating system. In VMware vSphere, this is handled with VMware Tools, a suite of utilities that can be installed for better virtual machine performance, including driver-supported 3D graphics and mouse and keyboard enhancements, as well as time synchronization, scripting, and other automation features. Importantly, it also enables you to perform “application-aware” backups, which ensure that database applications are backed up in a transactionally consistent state. In Hyper-V, the equivalent role is played by Integration Services.

Concluding Thoughts

In today’s world of hybrid infrastructures and multi-hypervisor environments, at some point you will most likely be asked to act as an administrator of both VMware vSphere and Microsoft Hyper-V environments for production workloads. Understanding the differences in management, administration, and underlying architecture is important for the successful administration of both VMware vSphere and Microsoft Hyper-V. All of these differences affect data protection solutions and their interaction with the hypervisors.
In the world of software development, ensuring code quality is critical to the success of any project. Code that is buggy, unreliable, or inefficient can lead to costly errors and negative user experiences. One way to ensure code quality is through the use of testing methodologies such as unit testing and integration testing. These two testing approaches can be especially effective when integrated into a Continuous Integration and Continuous Deployment (CI/CD) workflow. In this article, we'll explore the importance of unit testing and integration testing in CI/CD pipelines and discuss how these two approaches help to ensure code quality and reliability throughout the software development process. We'll also explain both testing approaches in detail and look at the differences between them. Let's take a look!

What Is Unit Testing?

Unit testing is a software testing methodology that involves testing individual units or components of a software application separately, in isolation from the rest of the system. Each unit or module is tested independently to ensure that it functions as expected and meets its requirements. The purpose of unit testing is to identify and fix bugs and defects in the code at an early stage of development. This helps to reduce the overall cost of development and ensures that the software application is more reliable and scalable. Unit testing typically involves writing automated tests that can be executed repeatedly and quickly. The tests are usually written by developers and are integrated into the development process as part of the CI/CD pipeline, which makes unit testing an important CI/CD tool. Unit testing is an essential part of the software development process, as it helps to ensure that each component of the application works correctly and can be integrated into the larger system with confidence.

What Is Integration Testing?

Integration testing is a software testing technique that involves testing the interaction between different components or modules of a software system. It is performed after unit testing and before system testing, and it verifies that the individual units or modules function correctly when they are integrated. The primary goal of integration testing is to identify defects and issues that arise from the interaction between different modules. This includes testing how data flows between modules, how modules communicate with each other, and how well the modules work together to achieve the overall goals of the system. Integration testing can be performed in different ways, depending on the complexity of the system being tested. For example, integration testing can be performed top-down or bottom-up. In top-down integration testing, the high-level modules are tested first, and then the lower-level modules are gradually integrated and tested. In bottom-up integration testing, the low-level modules are tested first, and then the higher-level modules are gradually integrated and tested. Integration testing is an essential part of the software testing process, as it helps to ensure that the different components of a software system work correctly together.

Unit Testing vs. Integration Testing

Unit testing and integration testing are both software testing techniques, but they have different goals and are performed at different stages of the software development process.
Using these two testing approaches along with DevOps principles can help in setting up a secure, reliable, and efficient DevOps workflow. Here are some of the main differences between them:

Scope: Unit testing focuses on testing individual units or components of a software system in isolation, whereas integration testing focuses on testing the interaction between different components or modules of a system.
Purpose: Unit testing is mainly used to verify the correctness of the code at the individual component level and to identify defects early in the development cycle. Integration testing is used to verify that the components or modules work correctly together and to identify defects that arise from their interaction.
Timeframe: Unit testing is performed by developers during the development phase. Integration testing, on the other hand, is typically performed after unit testing and before system testing, during the integration phase.
Test Approach: Unit testing is usually automated and involves running small, focused tests that can be executed quickly. Integration testing is often a mix of automated and manual testing, and it mainly involves running larger, more complex tests that cover the interactions between multiple components or modules.
Test Environment: Unit testing can be performed in a developer's local environment, using mock objects to simulate dependencies. Integration testing requires a more complex test environment that includes the actual components or modules and their dependencies.

To sum up, unit testing helps to identify issues early in the development cycle and ensures the correctness of individual components, while integration testing helps to ensure the correctness of the system as a whole by testing how components work together. Now, let's examine the significance of these two testing approaches in CI/CD pipelines, how each serves as an important CI/CD tool, and how they assure code quality and dependability across every stage of software development.

The Role of Unit Testing in CI/CD

Unit testing is a critical component of any software development process, particularly in the Continuous Integration and Continuous Deployment process. CI/CD involves continuously testing and deploying code changes to production environments in small increments to minimize the risk of breaking the codebase. Unit tests are automated tests that focus on small, individual parts of the code, such as functions or methods. They are designed to test the behavior of these units in isolation, without dependencies on other parts of the system. By writing and running unit tests as part of the CI/CD pipeline, developers can identify issues early in the development process, before they become more complex and harder to fix. Using unit testing together with DevOps principles can help in establishing a secure, reliable, and efficient DevOps cycle. Unit testing in CI/CD pipelines has several benefits:

Early Detection of Bugs: Unit tests help catch bugs early in the development cycle, when they are easier and less expensive to fix.
Improved Code Quality: Unit tests help ensure that code is of high quality and behaves as expected.
Faster Feedback: Automated unit tests provide quick feedback to developers, helping them quickly identify issues and fix them before they are merged into the main codebase.
Reduced Risk: Unit testing helps minimize the risk of introducing errors into the codebase and helps identify regressions in existing code.
More Reliable Releases: By continuously testing code changes as part of the CI/CD pipeline, developers can be more confident in the reliability of their releases.

Unit testing plays an essential role in CI/CD pipelines. By identifying issues early in the development process, unit testing can help save time and money, reduce risk, and improve the overall quality of code.

Importance of Integration Testing in CI/CD

Integration testing is an essential part of the continuous integration and continuous delivery/deployment (CI/CD) process. It involves testing the interactions between different components of a software system to ensure they work together correctly. This type of testing is especially important in the CI/CD process, as it helps to detect and fix defects early in the development cycle. Using integration testing together with DevOps principles can help in making the DevOps cycle reliable, efficient, and secure. Here are some reasons why integration testing is important in the CI/CD process:

Early Detection of Defects: Integration testing helps to identify defects early in the development cycle, before they become more expensive to fix. This allows teams to quickly address issues and prevent them from moving into production.
Verification of Code Changes: As code changes are made and new features are added, integration testing verifies that the changes do not have any unexpected effects on other parts of the system.
Faster Time-to-Market: Integration testing catches defects early and reduces the time spent on manual testing. Therefore, it helps to speed up the development process and get products to market faster.
Improved Quality: Integration testing helps to improve the quality of the software by ensuring that all components work together seamlessly. It also ensures that the system as a whole meets the desired level of functionality and performance.

Integration testing helps to ensure that software is thoroughly tested and meets the required standards of quality and functionality before it is released into production.

Bringing It All Together

Unit testing and integration testing are essential components of a successful CI/CD workflow that ensure code quality and reliability throughout the software development process. By incorporating both testing approaches, developers can identify and address issues early on, reducing the risk of errors and improving the overall quality of the codebase. Additionally, automating these tests can lead to significant time and cost savings while increasing the efficiency of the development cycle. Integrating these two testing techniques along with essential DevOps principles can help enterprises create a streamlined, high-quality, and efficient DevOps workflow. To implement unit testing and integration testing effectively in your CI/CD workflow, it is important to understand their differences, choose the appropriate testing tools, and regularly review and refine your testing approach.
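To ground the distinction in code, here is a minimal Java sketch (assuming JUnit 5) contrasting a unit test, which stubs out an external dependency, with an integration test, which would exercise the real dependency and is tagged so the pipeline can run it at a later stage. PriceConverter, ExchangeRateClient, and the commented-out HttpExchangeRateClient are hypothetical names invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical production code: converts a price using an external rate source.
interface ExchangeRateClient {
    double rateFor(String currency); // in real code this would call a remote API
}

class PriceConverter {
    private final ExchangeRateClient rates;

    PriceConverter(ExchangeRateClient rates) {
        this.rates = rates;
    }

    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

class PriceConverterTest {

    // Unit test: the dependency is replaced with an in-memory stub, so the
    // test is fast, isolated, and can run on every commit.
    @Test
    void convertsUsingTheProvidedRate() {
        PriceConverter converter = new PriceConverter(currency -> 0.5);
        assertEquals(5.0, converter.convert(10.0, "EUR"), 0.0001);
    }

    // Integration test: would exercise the real client against a real (test)
    // endpoint, so it runs later in the pipeline and is tagged accordingly.
    @Test
    @Tag("integration")
    void convertsUsingTheRealRateService() {
        // ExchangeRateClient realClient = new HttpExchangeRateClient(testEndpointUrl);
        // assertTrue(new PriceConverter(realClient).convert(10.0, "EUR") > 0);
    }
}
```

The unit test runs in milliseconds because the rate source is an in-memory lambda; the integration test is only an outline here, since it would need a real or containerized endpoint to call.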
I've been writing developer tests for a very long time. Lately, I've been reflecting on the types of tests I write and why some are easier than others. When teaching and coaching others how to write tests, I almost always explain what I mean by "Unit Tests" and "Integration Tests": Unit Tests don't touch hardware, don't do I/O, etc. (this should sound familiar to some of you, as it's the same idea as the set of unit testing rules written by Michael Feathers in 2005), and they test against a single object or group of objects (I use the terms Sociable and Solitary, as defined by Jay Fields, to differentiate between the different kinds of "unit" tests). I'd then demonstrate what I meant like this:

```java
// Assumes JUnit and AssertJ on the classpath; the original snippet shows only the two test methods.
import static org.assertj.core.api.Assertions.assertThat;
import org.junit.jupiter.api.Test;

class DeckTest {

    @Test
    public void fullDeckHas52Cards() {
        Deck deck = new Deck();

        assertThat(deck.size())
                .isEqualTo(52);
    }

    @Test
    public void drawCardFromDeckReducesDeckSizeByOne() {
        Deck deck = new Deck();

        deck.draw();

        assertThat(deck.size())
                .isEqualTo(51);
    }
}
```

These are both "unit" tests (neither touches hardware nor does I/O) and are Solitary (the tests only reference the Deck class). Later on, we'd need to write tests against code that may do I/O, often against a database or an external service (usually over the network). I'd explain that those were "Integration Tests" because we were integrating our code with someone else's code (the database code or the other service's code) through I/O. However, calling someone else's code that's supplied as a library (e.g., a JAR file) that doesn't do I/O (such as Caffeine, a caching library) can also be considered integration. Even using Java's Collection classes (e.g., ArrayList) is using someone else's code, though we don't usually think of that as integration.

Hard To Redefine Terms With Lots of Baggage

Over the past few years, I've become more frustrated with the terms "unit" and "integration" because:

I have to explain what Unit and Integration mean before I can use them
Everyone has their own internal definition of what they mean, which is often different from mine and other folks in the room
Differences in definitions often led to long discussions that aren't useful
Folks don't remember how I define the terms and fall back to their own definitions

I don't find the debate over what "unit" means to be a useful exercise. I finally decided to do something about it and come up with different names. But first, I had to answer the question: why does it matter? What about Unit (doesn't do I/O) and Integration (may do I/O) is important for the way I approach development?

No I/O = Predictable and Fast

If you create an object and call a method that only accesses internal properties (fields) and any parameters to the method, it must be predictable. Any logic or calculations the code is doing are deterministic. What makes code not predictable? I/O. Accessing a file is unpredictable because the file could've changed without you knowing, the drive could fail intermittently, it could be out of space, etc. Accessing a remote service involves not only the network (unreliable) but also the remote service itself (unpredictable). I include access to anything outside of memory, such as random number generation and the current date and time, as I/O, because they are also unpredictable. By eliminating all I/O, you make the code under test, and therefore the test, deterministic.
Not accessing I/O also means your tests will run extremely fast (yes, there's code that performs lengthy calculations entirely in memory, but I'm not talking about that), with most of the time spent getting the tests ready to run: compiling, starting, etc. Everything else, from instantiating objects to running the code, is almost instantaneous. There's no reason you can't have many thousands of tests run in a few seconds. By the way, this speed is critical for doing test-driven development, which is why I focus on these kinds of tests. Once set up, running tests without I/O is almost instantaneous.

Now when it comes to code that does interact with the outside world, such as getting the current date and time or fetching information from a database or external service, I still want tests that don't do I/O. This is where Test Doubles come into play. In Hexagonal Architecture, that might mean a Stub or a Fake in place of a concrete Adapter, or I might use Nullable Infrastructure Wrappers, which are Stub-like implementations embedded in your production code. The idea is that we're still not touching I/O, so everything is still Predictable and Fast. These are a form of Sociable tests, where we're testing a larger set of collaborating classes but explicitly not testing the I/O itself. I want 80-90% of my tests to be these kinds of tests. Of course, it'll vary widely depending on how much your system is doing things vs. integrating with other things.

I/O = Unpredictable and Slower

At some point, though, you want to have some sort of test that touches I/O. Things like:

Check that the database schema is valid
Ensure the ORM can read and write from the database and create well-formed objects
Call an external service API and ensure you get a valid response

These are hard to write because, as I said, they're unpredictable. Databases are often managed ("owned/defined") by the application, so those tests can use tools like Docker and Testcontainers to run against real databases. They're slower, but you won't be running them nearly as often as the other tests. When it comes to calling external services, if it's a service that you can run inside a container (maybe it's Kafka, or it's a custom service created by another team that provides an executable), then you can do the same thing as with the database above. But if it's a public service (like GitHub or Google), or any service that you can't run locally or in a container, you're not going to be able to run automated tests that are predictable. Yes, you could run against a "sandbox" environment, but for me, that falls under "unpredictable" based on my experiences (maybe yours work better?). So, if I write these tests, I run them manually or have other ways of checking that my code works (like good monitoring and observability in production).

Naming Attempts

Naming is hard, as we all know. Trying to find a name that is descriptive and memorable but with little or no baggage is quite a task.

First Attempt

During a discussion of the problem with Willem Larsen, he proposed using words from a different language (e.g., Japanese) for the same terms (or at least what they meant to me). However, not being able to speak Japanese, I had to rely on less-than-perfect translation systems. Not only did I want to find a word or phrase that had the intent I wanted, but it also had to be relatively short. For months, I played with different translations but wasn't happy with the results, so I shelved the idea.
Pure and Impure

In Functional Programming, the terms "Pure" and "Impure" have somewhat similar meanings to my usage of "Unit" and "Integration." In FP, a pure function is a function that doesn't cause any side effects and always produces the same output for a given input. I tried using this terminology for a while, but it had two problems:

I still had to define "pure" as not accessing I/O
I got objections from FP folks because of the misuse of the term "pure" in the context of testing stateful object-oriented code (methods accessing internal state could never be "pure")

Honestly, #1 was the more annoying aspect for me, though it did have the benefit of not coming with much baggage (except for folks familiar with FP). I do use the term "purify" when I'm talking about the process of separating the I/O code from the logic, e.g., "Let’s purify this code, making it I/O-Free."

I/O-Free and I/O-Dependent

So, here we are. The two main kinds of tests I write. The majority are I/O-Free tests, and when I hit the I/O boundary and need to test against some real external service, I use I/O-Dependent tests (or I/O-Based). I further split I/O-Free tests into Domain, Application, and other buckets, depending on the application architecture (e.g., Domain tests never have Test Doubles), but that's an article for another day.
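As a small illustration of the I/O-Free style described above, here is a sketch (assuming JUnit 5) where the current time, one of the I/O sources mentioned earlier, is injected through java.time.Clock, so a fixed Clock acts as the test double and the test stays deterministic. TrialPeriod is a hypothetical class invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;

// Hypothetical domain code: the current time is injected instead of read
// directly, so the logic stays I/O-Free and therefore deterministic.
class TrialPeriod {
    private final Clock clock;
    private final Instant expiresAt;

    TrialPeriod(Clock clock, Instant expiresAt) {
        this.clock = clock;
        this.expiresAt = expiresAt;
    }

    boolean isExpired() {
        return clock.instant().isAfter(expiresAt);
    }
}

class TrialPeriodTest {

    // I/O-Free: a fixed Clock is a built-in test double, so the result is
    // predictable and the test runs in microseconds.
    @Test
    void trialIsExpiredOnceTheDeadlineHasPassed() {
        Instant deadline = Instant.parse("2023-01-01T00:00:00Z");
        Clock fixedNow = Clock.fixed(deadline.plusSeconds(60), ZoneOffset.UTC);

        assertTrue(new TrialPeriod(fixedNow, deadline).isExpired());
    }
}
```

An I/O-Dependent counterpart would read the real system clock (or hit a real database or service) and would therefore run later, and less often, in the pipeline.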
"DevOps is dead." Well, not exactly. But the DevOps methodology of "you build it, you run it" has been failing development teams for years. On this week's episode of Dev Interrupted, we sit down with Kaspar von Grünberg, founder and CEO of Humanitec. Listen as Kaspar explains the significant cognitive load placed on developers as a result of DevOps practices, how that has caused software engineering to be the only industry since Medieval times not to drive towards specialization, and why platform engineers provide a solution to the outdated DevOps model. Episode Highlights: (2:24) What is platform engineering? (7:05) Should VPEs have a platform team right now? (11:29) Difference between SREs and platform teams (17:14) DevOps is dead (19:11) How scale affects team size (26:12) Standardization of the space (28:08) Kaspar's work at Humanitec (32:30) The future of platform engineering Episode Excerpt: Dan: If I'm a VP of engineering right now, listening to this pod, should I have a platform engineering team? Kaspar: Yes. I mean, no, you can always argue a customer has an interest of you having a plethora of engineering team, but I am looking at the return on investment of these teams. And there, it's so large, you can gain so much from this. There's so much inefficiency in these workflows that, yes, you definitely should have one. And having a platform engineering team doesn't like sounds, you know, more costly if you want that. It is- take a product manager, halftime if you want. But structure this correctly, structure this as a product, find a couple of people that are responsible for this, take them from different groups, you don't need me to rehire, apply these principles, you know, take this on structure. And you'll see a very, very fast return with fairly low costs. And so definitely, definitely yes. And I want to get back to one of the absolutely correct things you said. We have these fundamentalists shouting at us. You know, everybody has to do everything in context. Otherwise, you're abstracting people away. And those are these. It's this type of thing you can always say. Everybody could always say that they say never restrict developers, never take away from this. Of course not. But that's not the idea. Like platform engineering is not about taking context away, the contrary holds true. It's about providing context. If you're looking at 700 different script formats, that's not context. That's cognitive overload. You don't win anything. And so that is really like, our industry is the only industry that is not actually driving towards specialization. Since the medieval ages to now. Every industry has always specialized. We're the only fucking industry in the world that is actually working against specialization. I already have a problem with these fundamentalist approaches or viewpoints that many of them just have never really worked at scale. And scale for me means, like, production-grade two fifty-three, four or five hundred engineers over a longer period of time. And to believe that in these situations, you can just shift everything to everybody is so insanely naive. And then the next argument I always hear is like, Hey, you build it, you run it, Werner Vogels said so, I mean, let's pause and look at the situation where Werner Vogels said that he said in a blog post, 2006. That guy was in the CTO position for a UX director of research for like 12 months before he was a researcher. That guy had never worked at scale. 
And Amazon's teams, at that point, were a couple of dozen developers. The sentence that this guy said in 2006 says nothing about the reality of a bank on the US East Coast with two and a half thousand developers that are drowning in policy. It's just naive to say that. It doesn't make sense.
If you're tired of managing your infrastructure manually, ArgoCD is the perfect tool to streamline your processes and ensure your services are always in sync with your source code. With ArgoCD, any changes made to your version control system are automatically synced to your organization's dedicated environments, making centralization a breeze. Say goodbye to the headaches of manual infrastructure management and hello to a more efficient and scalable approach with ArgoCD!

This post will teach you how to easily install and manage infrastructure services like Prometheus and Grafana with ArgoCD. Our step-by-step guide makes it simple to automate your deployment processes and keep your infrastructure up to date. We will explore the following:

Installation of ArgoCD via Helm
Install Prometheus via ArgoCD
Install Grafana via ArgoCD
Import a Grafana dashboard
Import ArgoCD metrics
Fire up an alert

Prerequisites:

Helm
Kubectl
Docker for Mac with Kubernetes

Installation of ArgoCD via Helm

To install ArgoCD via Helm on a Kubernetes cluster, you need to:

1. Add the ArgoCD Helm chart repository.
2. Update the Helm chart repository.
3. Install the ArgoCD Helm chart using the Helm CLI.
4. Verify that ArgoCD is running by checking the status of its pods.

# Create a namespace
kubectl create namespace argocd

# Add the ArgoCD Helm chart repository
helm repo add argo https://argoproj.github.io/argo-helm

# Install ArgoCD
helm upgrade -i argocd --namespace argocd \
  --set redis.exporter.enabled=true \
  --set redis.metrics.enabled=true \
  --set server.metrics.enabled=true \
  --set controller.metrics.enabled=true \
  argo/argo-cd

# Check the status of the pods
kubectl get pods -n argocd

When installing ArgoCD, we enabled two flags that expose two sets of ArgoCD metrics:

Application metrics: controller.metrics.enabled=true
API server metrics: server.metrics.enabled=true

To access the installed ArgoCD, you will need to obtain its credentials:

Username: admin
Password:

kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

Link a Primary Repository to ArgoCD

ArgoCD uses the Application CRD to manage and deploy applications. When you create an Application, you specify:

A source reference to the desired state in Git.
A destination reference to the target cluster and namespace.

ArgoCD uses this information to continuously monitor the Git repository for changes and deploy them to the target environment. Let's put it into action by applying the changes:

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workshop
  namespace: argocd
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    path: argoCD/
    repoURL: https://github.com/naturalett/continuous-delivery
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF

Let's access the server UI by using kubectl port forwarding:

kubectl port-forward service/argocd-server -n argocd 8080:443

Connect to ArgoCD.

Install Prometheus via ArgoCD

By installing Prometheus, you will be able to leverage the full stack and take advantage of its features. When you install the full stack, you get access to the following: Prometheus, Grafana, dashboards, and more.
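Since the Prometheus stack in this demo is delivered through the workshop Application linked above, it can help to confirm that the Application was accepted and is syncing before looking for any Prometheus pods. This quick check isn't part of the original walkthrough, but it only uses standard kubectl calls against the Application CRD that ArgoCD installs:

# List the ArgoCD Applications and their sync/health status
kubectl get applications.argoproj.io -n argocd

# Inspect the workshop Application in detail (sync errors show up in the status and events)
kubectl describe application workshop -n argocd

If the Application reports Synced and Healthy, the manifests under argoCD/ in the linked repository have been applied, and the rest of the stack should start appearing in the argocd namespace.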
In our demo, we will apply the following services from the Kube Prometheus Stack:

Prometheus
Grafana
AlertManager

The node-exporter will be added separately with its own Helm chart, while the pre-installed default that comes with the Kube Prometheus Stack is deactivated.

There are two ways to deploy Prometheus:

Option 1: By applying the CRD.
Option 2: By using automatic deployment based on the kustomization.

In this blog, the installation of Prometheus happens automatically, which means that Option 2 is applied.

Option 1: Apply the CRD

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: argocd
spec:
  destination:
    name: in-cluster
    namespace: argocd
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: 45.6.0
    chart: kube-prometheus-stack
EOF

Option 2: Define the Installation Declaratively

This option has already been applied through the CRD we deployed earlier, in the step of linking a primary repository to ArgoCD. That CRD is responsible for syncing our application.yaml files with the configuration specified in the kustomization. Once Prometheus is deployed, it exposes its metrics at /metrics. To display these metrics in Grafana, we need to define a Prometheus data source. In addition, we have other metrics that we want to display in Grafana, so we'll need to scrape them in Prometheus.

Access the Prometheus Server UI

Let's access Prometheus by using kubectl port forwarding:

kubectl port-forward service/kube-prometheus-stack-prometheus -n argocd 9090:9090

Connect to Prometheus.

Prometheus Node Exporter

For the Node Exporter, we used the same declarative approach as in Option 2: its Application configuration is defined declaratively, and the installation happens automatically once the primary repository is linked to ArgoCD.

Prometheus Operator CRDs

Due to an issue with the Prometheus Operator Custom Resource Definitions, we decided to deploy the CRDs separately. The installation is automatic, similar to Option 2, and relies on linking a primary repository to ArgoCD in an earlier step.

Install Grafana via ArgoCD

We used the same declarative approach as in Option 2 to define the installation of Grafana. The installation takes place automatically, just as in Option 2, following the earlier step of linking a primary repository to ArgoCD. Since Grafana is part of the Prometheus stack, it was installed automatically when the stack was installed.

To access the installed Grafana, you will need to obtain its credentials:

Username: admin
Password:

kubectl get secret -n argocd kube-prometheus-stack-grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Let's access Grafana by using kubectl port forwarding:

kubectl port-forward service/kube-prometheus-stack-grafana -n argocd 9092:80

Connect to Grafana.

Importing the ArgoCD Metrics Dashboard Into Grafana

We generated a ConfigMap for the ArgoCD dashboard and deployed it through the kustomization. During the deployment of Grafana, we linked the ConfigMap to create the dashboard and then leveraged Prometheus to extract the ArgoCD metrics data, giving us valuable insights into ArgoCD's performance.
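How those ArgoCD metrics reach Prometheus depends on what the repository's kustomization defines, but with the Kube Prometheus Stack the usual mechanism is a ServiceMonitor per metrics Service. The following is only a rough sketch: the release label, selector labels, and port name are assumptions, so verify them against your cluster (kubectl get svc argocd-server-metrics -n argocd --show-labels), and note that recent argo-cd chart versions can create these for you via values such as server.metrics.serviceMonitor.enabled.

cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-server-metrics
  namespace: argocd
  labels:
    release: kube-prometheus-stack   # assumed; must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server-metrics   # assumed; verify against the Service's labels
  endpoints:
    - port: http-metrics   # assumed name of the metrics port on the Service
EOF

A second ServiceMonitor pointing at argocd-application-controller-metrics covers the application (controller) metrics in the same way.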
The ArgoCD dashboard's metrics were made available by the flags we set in an earlier section of this post:

--set server.metrics.enabled=true \
--set controller.metrics.enabled=true

This enables us to view and monitor the metrics easily through the dashboard. Confirm the ArgoCD metrics:

# Verify that the metrics services exist
kubectl get service -n argocd argocd-application-controller-metrics
kubectl get service -n argocd argocd-server-metrics

# Configure port forwarding to monitor the application metrics
kubectl port-forward service/argocd-application-controller-metrics -n argocd 8082:8082

# Check the application metrics
http://localhost:8082/metrics

# Configure port forwarding to monitor the API server metrics
kubectl port-forward service/argocd-server-metrics -n argocd 8083:8083

# Check the API server metrics
http://localhost:8083/metrics

Fire up an Alert

Execute the following script to trigger an alert:

curl -LO https://raw.githubusercontent.com/naturalett/continuous-delivery/main/trigger_alert.sh
chmod +x trigger_alert.sh
./trigger_alert.sh

Let's access the AlertManager:

kubectl port-forward service/alertmanager-operated -n argocd 9093:9093

Connect to AlertManager and confirm that the workshop alert has been triggered.

Clean up the Environment

Deleting the workshop ApplicationSet removes all the dependencies that were installed as per the defined kustomization. Then delete the ArgoCD installation and any associated dependencies:

kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
kubectl delete crd applications.argoproj.io
kubectl delete crd applicationsets.argoproj.io
kubectl delete crd appprojects.argoproj.io
helm del -n argocd argocd

Summary

Through this walkthrough, we have learned how to automate infrastructure management and keep our environment in sync with changes to our source code. Specifically, we deployed ArgoCD and used its ApplicationSet to deploy a Prometheus stack. We also demonstrated how to extract service metrics into Prometheus and visualize them in Grafana, as well as how to trigger alerts in our monitoring system. For continued learning and access to valuable resources, we encourage you to explore our tutorial examples on GitHub.
Git is one of the most popular version control systems used by developers worldwide. As a software developer, you must be well-versed in Git and its commands to manage code efficiently, collaborate with other team members, and keep track of changes. While there are many Git commands available, not all are equally important. In this article, I'll cover the top Git commands that every senior-level developer should know.

"A Git pull a day keeps the conflicts away"

Git Init

The git init command initializes a new Git repository. This command is used to start tracking changes in your project. As soon as you run it, Git creates a new .git directory, which contains all the files needed to start using Git for version control. Once initialized, Git tracks all changes made to your code and creates a history of your commits.

Shell
$ git init
Initialized empty Git repository in /path/to/repository/.git/

Git Clone

The git clone command creates a copy of a remote Git repository on your local machine. This is a great way to start working on a new project or to collaborate with others on an existing one. When you run this command, Git downloads the entire repository, including all branches and history, to your local machine.

Shell
$ git clone https://github.com/username/repository.git

Git Add

The git add command adds new or modified files to the staging area, which prepares them to be committed. The staging area is a temporary storage area where you can prepare your changes before committing them. You can specify individual files or directories with this command. Git tracks changes in three stages: modified, staged, and committed. The add command moves changes from the modified stage to the staged stage. To stage a single file, run:

Shell
$ git add file.txt

To add all changes in the current directory to the staging area, run:

Shell
$ git add .

Git Commit

The git commit command records changes to the repository by creating a new commit with the changes you've added to the staging area. A commit is a snapshot of your repository at a specific point in time, and each commit has a unique identifier. A commit includes a commit message that describes the changes made. To commit changes to the repository, run:

Shell
$ git commit -m "Added new feature"

Git Status

The git status command shows you the current state of your repository, including any changes that have been made and which files are currently staged for commit. It tells you which files are modified, which files are staged, and which files are untracked.

Shell
$ git status
On branch master
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
        modified: file.txt

Git Log

The git log command shows you the history of all the commits that have been made to your repository, along with their author, date, and commit message. You can use this command to see who made changes to the repository and when those changes were made.

Shell
$ git log
commit 5d5b5e5dce7d1e09da978c8706fb3566796e2f22
Author: John Doe <john.doe@example.com>
Date: Thu Mar 23 14:39:51 2023 -0400

    Added new feature

Useful variations:

git log --graph: Displays the commit history in a graph format.
git log --oneline: Shows the commit history in a condensed format.
git log --follow: Follows the history of a file beyond renames.

Git Diff

The git diff command shows you the differences between the current version of a file and the previous version. This command is useful when you want to see what changes were made to a file.
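For instance (illustrative invocations using a hypothetical file.txt, not from the original article), you can limit the diff to a single file or compare two specific commits:

Shell
$ git diff file.txt
$ git diff HEAD~1 HEAD -- file.txt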
More generally, Git can show you the changes between two commits or between a commit and your current working directory. The following shows the differences between the current branch and the feature_branch branch:

Shell
$ git diff feature_branch

Git Branch

The git branch command is used to create, list, or delete branches. It shows you a list of all the branches in the Git repository, so you can see which branch you are currently on, and it can create new branches.

Shell
$ git branch feature-1

Git Checkout

The git checkout command is used to switch between branches. You can use this command to switch to a different branch or to create a new branch and switch to it.

Shell
$ git checkout feature-1

Git Merge

The git merge command is used to merge changes from one branch into another. This command is useful when you want to combine changes from different branches. When you run this command, Git combines the changes from the two branches and creates a new merge commit.

Shell
$ git merge feature-1

Git Pull

The git pull command is used to update your local repository with changes from a remote repository. It downloads changes from the remote repository and merges them into your current branch, creating a new commit if necessary.

Shell
$ git pull origin master

Git Push

The git push command is used to push your changes to a remote repository. This command is useful when you want to share your changes with others. When you run this command, Git sends your commits to the remote repository and updates the remote branch.

Shell
$ git push origin master

Git Remote

This command is used to manage the remote repositories that your Git repository is connected to. It allows you to add, rename, or remove remote repositories.

git remote rm: Removes a remote repository.
git remote show: Shows information about a specific remote repository.
git remote rename: Renames a remote repository.

Git Fetch

This command is used to download changes from a remote repository to your local repository. When you run this command, Git updates your local repository with the latest changes from the remote repository but does not merge them into your current branch.

Shell
$ git fetch origin

Git Reset

This command is used to unstage changes in the staging area or undo commits. When you run this command, Git removes the changes from the staging area or rewinds the repository to a previous commit.

Shell
$ git reset file.txt

Git Stash

This command is used to temporarily save changes that are not yet ready to be committed. When you run this command, Git saves your changes in a temporary storage area and reverts your working directory to its previous clean state.

Shell
$ git stash save "Work in progress"

"Why did the Git user become a magician? Because they liked to git stash and make their code disappear"

Git Cherry-Pick

This command is used to apply a specific commit to a different branch. When you run this command, Git copies the changes from the specified commit and applies them to your current branch.

Shell
$ git cherry-pick 5d5b5e5dce7d1e09da978c8706fb3566796e2f22

Git Rebase

This command is used to combine changes from two branches into a single, linear history. When you run this command, Git replays the commits from one branch on top of another branch, creating new commits.

Shell
$ git rebase feature-1

Git Tag

This command is used to create a tag for a specific commit.
A tag is a label that marks a specific point in your Git history.

Shell
$ git tag v1.0.0

Git Blame

This command is used to view the commit history for a specific file or line of code. When you run this command, Git shows you the author and date for each line of code in the file.

Shell
$ git blame file.txt

"Git: making it easier to blame others since 2005"

Git Show

This command is used to view the changes made in a specific commit. When you run this command, Git shows you the files that were changed and the differences between the old and new versions.

Shell
$ git show 5d5b5e5dce7d1e09da978c8706fb3566796e2f22

Git Bisect

This command is used to find the commit that introduced a bug in your code. When you run this command, Git helps you narrow down the range of commits to search through to find the culprit.

Shell
$ git bisect start
$ git bisect bad HEAD
$ git bisect good 5d5b5e5dce7d1e09da978c8706fb3566796e2f22

Git Submodule

This command is used to manage submodules in your Git repository. A submodule is a separate Git repository included as a subdirectory of your main Git repository.

Shell
$ git submodule add https://github.com/example/submodule.git

Git Archive

This command is used to create a tar or zip archive of a Git repository. When you run this command, Git creates an archive of the repository that you can save or send to others.

Shell
$ git archive master --format=zip --output=archive.zip

Git Clean

This command is used to remove untracked files from your working directory. When you run this command, Git removes all files that are not tracked by Git (add the -d flag to also remove untracked directories).

Shell
$ git clean -f

Git Reflog

This command is used to view the history of all Git references in your repository, including branches, tags, and HEAD. When you run this command, Git shows you a list of all the actions that have been performed in your repository.

Git Config

This command is used to configure various aspects of the Git system. It sets or gets configuration variables that control various Git behaviors. Here are some examples of how to use git config:

Set user information:

Shell
git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"

The above commands set your name and email address globally so that they are used in all your Git commits.

Show configuration variables:

Shell
git config --list

The above command displays all the configuration variables and their values that are currently set for the Git system.

Set the default branch name:

Shell
git config --global init.defaultBranch main

The above command sets the default branch name to "main". This is the branch name that Git will use when you create a new repository.

Git Grep

This command searches the contents of a Git repository for a specific text string or regular expression. It works similarly to the Unix grep command but is optimized for searching through Git repositories. Here's an example of how to use git grep.

Let's say you want to search for the word "example" in all the files in your Git repository. You can use the following command:

Shell
git grep example

This will search for the word "example" in all the files in the current directory and its subdirectories. If the word "example" is found, git grep will print the name of the file, the line number, and the line containing the matched text. Here's an example output:

Shell
README.md:5:This is an example file.
index.html:10:<h1>Example Website</h1>

In this example, the word "example" was found in two files: README.md and index.html. Each line of the output shows the file name, followed by the line number and the line containing the matched text.

You can also use regular expressions with git grep. For example, if you want to search for all instances of the word "example" that occur at the beginning of a line, you can use the following command:

Shell
git grep '^example'

This will search for all instances of the word "example" that occur at the beginning of a line in all files in the repository. Note that the regular expression ^ represents the beginning of a line.

Git Revert

This command is used to undo a previous commit. Unlike git reset, which removes the commit from the repository, git revert creates a new commit that undoes the changes made by the previous commit. Here's an example of how to use git revert.

Let's say you have a Git repository with three commits:

Plain Text
commit 1: Add new feature
commit 2: Update documentation
commit 3: Fix bug introduced in commit 1

You realize that the new feature you added in commit 1 is causing issues, and you want to undo that commit. You can use the following command to revert commit 1:

Shell
git revert <commit-1>

This will create a new commit that undoes the changes made by commit 1. If you run git log after running git revert, you'll see that the repository now has four commits:

Plain Text
commit 1: Add new feature
commit 2: Update documentation
commit 3: Fix bug introduced in commit 1
commit 4: Revert "Add new feature"

The fourth commit is the one created by git revert. It contains the changes necessary to undo the changes made by commit 1.

Git RM

This command is used to remove files from a Git repository. It can be used to delete files that were added to the repository, as well as to remove files that were previously tracked by Git. Here are some examples of how to use git rm:

Remove a file that was added to the repository but not yet committed:

Shell
git rm filename.txt

This will remove filename.txt from the repository and stage the deletion for the next commit.

Remove a file that was previously committed:

Shell
git rm filename.txt
git commit -m "Remove filename.txt"

The first command removes filename.txt from the repository and stages the deletion for the next commit. The second command commits the deletion.

Remove a file from the repository but keep it in the working directory:

Shell
git rm --cached filename.txt

This will remove filename.txt from the repository but keep it in the working directory. The file will no longer be tracked by Git, but it will still exist on your local machine.

Remove a directory and its contents from the repository:

Shell
git rm -r directoryname

This will remove directoryname and its contents from the repository and stage the deletion for the next commit.

In conclusion, these frequently used Git commands are essential for every software professional who works with Git repositories regularly. Knowing how to use them effectively can help you streamline your workflow, collaborate with your team more effectively, and troubleshoot issues that may arise in your Git repository.