Frontend frameworks are an essential tool for web development because they provide a structured and organized approach to building the user interface of a web application. They help developers create consistent and efficient layouts, styles, and interactions, making a web application easier to develop and maintain over time. They also allow your web apps to interact with APIs in the same way as any other program (on a desktop OS, a mobile device, or elsewhere). Moreover, front-end frameworks can improve the user experience: they keep the common parts of the page intact (like navigation) and load only the data the user requests instead of flashing and reloading with every click. In this article, you'll learn about the most popular frontend frameworks and determine which is best suited for your website or web app. Choosing the right front-end framework will also help you save time and money.

7 Most Popular Front-End Frameworks

The most popular frontend frameworks include React, Angular, and Vue.js, with React being the most widely used at 42.62%.

1) React

Jordan Walke developed ReactJS as an open-source frontend JavaScript library for creating structured user interfaces in web apps and websites. React allows developers to design extensive web applications that can update data without a page reload. The primary objective of React is to provide a fast, scalable, and straightforward solution. Since its release in May 2013, React has dominated the front-end development arena. According to reports, 75.4% of web development companies and agencies specialize in React, making it one of the most widely adopted frameworks. React's popularity stems from its simplicity and versatility, and many firms consider it to be the future of front-end development. It is used by around 10,499,785 active websites. Many major market players are adopting React development to grow their user base. Examples of React websites include GitHub, Facebook, Airbnb, Instagram, Salesforce, BBC, and Reddit, among others. These leading organizations utilizing React are a clear indication of its growing popularity and its position as a preferred frontend framework.

Benefits of React

Performance and Speed — React enhances an application's performance with its virtual DOM (Document Object Model). It improves the speed of web applications by removing the heavy framework code found in bootstrapping libraries like jQuery, and it lets developers design faster, more up-to-date applications. This is one of the topmost advantages of using React for the front end.

SEO-Friendly — Another advantage of React is its ability to deal with common search engine problems, mainly the inability of search engines to read JavaScript-heavy pages. React can run on the server, render the virtual DOM, and return it to the browser as a regular webpage. Effective SEO improves the website and app's ranking in Google search, and getting more users through organic traffic is essential for any business looking for growth opportunities. React allows for simple SEO integration, which is a huge plus for any company, making it one of the most useful tools for building SEO-friendly web applications.

Flexible — One of the most-liked benefits of React is flexibility. React code is very simple, which helps maintain the flexibility of web or mobile applications, and that flexibility saves a lot of time and money on app development.
The major goal of this library is to make the app development process as simple as possible. As a result, React development has delivered notable results in web development. Strong Community Support — One of the most compelling reasons to use React for front-end development is its strong community support. React is continuously improving as an open-source library, thanks to a large community of developers who are helping people all over the world master the technology in various ways. Reusable Components — One of the key advantages of React is the ability to reuse components. Developers save time because they don’t have to write many codes for the same functionalities. Besides, any modifications made to one section of the application will have no effect on other portions of the application. Limitations of Using React Initial learning can be difficult for developers to understand the concepts of JSX. React develops the UI part of the application. To get complete development tools, other technologies integration is required. The high speed of components upgrade makes it difficult to maintain proper documentation. 2) Angular Angular, a front-end JavaScript framework is an open-source tool that enables developers to create interactive website components. Its main objective is to aid in the development of single-page applications, prioritizing code quality and testability. It is highly regarded as the "superhero" among JavaScript frameworks due to its exceptional performance, with over 1,016,104 live websites utilizing Angular. Tech-giant Google is responsible for Angular's development, and major websites such as Paypal, Udemy, Snapchat, Amazon, Netflix, and Gmail rely on it. Angular provides abundant opportunities to create new and innovative projects regularly. Angular is an excellent option for enterprise development, with a strong emphasis on code quality and testability, both of which are critical components of web development. Benefits of Angular Two-Way Binding — This framework enables modeling and view synchronization in real time. As a result, it is simple for developers to make modifications during the development process. Any changes to the data get reflected in the view. Two-way binding makes programming easier and eliminates the need for a testability controller. Angular developers can easily and quickly develop a variety of applications. POJO Model — The Plain Old JavaScript Objects (POJO) Model is used by Angular to make the code structure scalable and independent. This way, we avoid having to introduce complex functions or methods to the program. It also eliminates the need for third-party frameworks or plugins. Apps built with Angular load quickly and give outstanding user accessibility; the model allows us to keep our codes clean, making the framework goal-oriented. Security — RESTFUL APIs are utilized to interact with servers in Angular for web app development. This would safeguard your web application from malicious attacks. As a result, Angular development will provide you with complete peace of mind. Single Page Applications (SPA) — The concept of SPA is used in almost all modern applications. As SPA loads a single HTML page and updates only a part of the page with each mouse click. During the procedure, the page does not reload or transfer control to another page. This guarantees good performance and faster page loading. Great MVC — MVC architecture also assists in the development of apps that are simple to use, adaptable, and dynamic in nature. 
Additionally, Angular gets closer to the MVVM architecture (Model-View-View-Model), building a stable platform for application development. Limitations of Using Angular A wide variety of intricate built-in features make Angular complex to learn and implement. The possible lag in dynamic applications can hinder the satisfactory performance of the framework. New developers for small-scale web application development can find it difficult to implement. 3) Vue.JS Vue.JS is a JavaScript framework designed for building modern application user interfaces with minimal resource requirements. Currently, there are 1,940,742 active websites utilizing Vue.JS, including major sites like Louis Vuitton, Adobe, BMW, Upwork, Alibaba, Gitlab, and Xiaomi. Vue.JS is primarily focused on the view layer, enabling easy integration into existing projects, and is ideal for building single-page applications. Its popularity stems from several factors, including its ability to re-render without user input, its support for creating reusable and robust components, and its composable nature, allowing for the addition of components as needed. With Vue.JS, developers have access to a comprehensive set of tools that simplify the development process. The framework is also highly adaptable, lightweight, and modular, featuring smart state management and routing options. Additionally, Vue.JS enables rapid development with the help of numerous plugins that offer cost-effective solutions to common application problems. Overall, Vue.JS is a versatile and powerful framework that can facilitate the fast and efficient development of high-quality applications. Benefits of Vue.JS Component-Based Architecture — Vue.JS is a component-based framework; the entire code for the frontend application can be divided into separate components. The components consisting of templates, logic, and styles are bound together to form a web app. Vue.JS components are lightweight, reusable, and simple to test. Lightweight — Vue.JS is only 18-21kb in size. It is up to 4x less than a minified jQuery. It reduces load time and helps in optimizing search engines and performance. Easy Integration — Vue.JS supports many third-party libraries and components. This makes it easy for developers to integrate Vue.JS into their existing projects. This saves a significant amount of time for companies who are attempting to stay on top of industry developments. Progressive — Vue.JS framework is precisely progressive. It is gradually adopted in small steps and adds more markup to the HTML code. Consequently, it changes as per the needs of developers and does not require rewriting an existing application. Vue.JS is a basic script tag that you can insert into your HTML code. It gradually expands as per your needs until it can manage the entire layer. Detailed Documentation — Vue.JS has well-defined documentation that allows you to easily comprehend the required method and develop your application. Additionally, it provides one of the best API references in the business. Vue.JS documentation is regularly updated. Good documentation is essential when using a new framework. Detailed documentation makes the technology easy to use, and you’ll be able to fix bugs effectively. Limitations of Using Vue.JS Vue is still in its growing stage to have a large community. It does not provide a comparative variety of features as compared to frameworks like Angular or React. Built-in documentation in the Chinese language creates a significant language barrier for people out of China. 
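To make the component-based architecture described above for Vue.JS a little more concrete, here is a minimal, illustrative sketch (the component name, its prop, and the #app mount point are hypothetical, and the in-browser template string assumes a Vue 3 build that includes the runtime template compiler):

import { createApp } from 'vue';

// A small, reusable component: template, logic, and state bound together
const GreetingCard = {
  props: ['name'],                 // data passed in from the parent
  data() {
    return { clicks: 0 };          // local component state
  },
  methods: {
    sayHi() { this.clicks++; }     // updates only this component's slice of the DOM
  },
  template: `
    <button @click="sayHi">
      Hi {{ name }}! Clicked {{ clicks }} times.
    </button>`
};

// Register the component and mount the app on a hypothetical <div id="app"> element
createApp({
  components: { GreetingCard },
  template: `<greeting-card name="Ada" />`
}).mount('#app');

Because each component owns its own markup, logic, and state, it can be reused and tested in isolation, which is exactly the benefit the framework is praised for.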
4) Svelte

Svelte is a newer technology for building web applications: a set of components, tools, and guidelines for constructing your website's architecture using JavaScript. One of its distinguishing features is that it does not rely on a virtual DOM, setting it apart from other front-end frameworks. This characteristic enhances coding efficiency right from the start, resulting in a lighter, more user-friendly website or application. Rich Harris created Svelte in 2016, and it has since gained a devoted following, with 71.4% of users appreciating its remarkable front-end development capabilities.

Benefits of Svelte

Easy-to-understand components — Svelte is a clear and neat framework with no unnecessary additions. Its components make the coding and designing process much easier.

Out-of-the-box global state management — Svelte doesn't need complicated third-party state management tools; you simply define a variable as a writable/readable store and use it in any .svelte file by prefixing it with a $ sign.

Default style setting — Svelte styles are scoped by default: a generated svelte- class name is attached to your styles so that they do not leak into and influence the styles of other components. This significantly speeds up the entire design process.

Built-in effects and animations — Animation usually requires an external dependency to handle it. However, Svelte provides powerful pre-packaged effect modules for motion, transition, animation, and easing.

Built-in Accessibility — Svelte displays an "A11y: element should have an alt attribute" reminder whenever you forget to put the alt attribute on an img tag.

Limitations of Using Svelte

Using special syntax instead of directly using onClick, like in React, can be frustrating.
No support for reference updates and array mutations.
Svelte doesn't support the wide range of plugins and integrations required for heavy production apps.

5) Next.JS

In 2016, Vercel CEO Guillermo Rauch created Next.JS, an open-source JavaScript framework that allows for the development of highly user-friendly and fast websites. This frontend framework is known for its clarity and can be used to create hybrid apps through Automatic Static Optimization, which combines dynamic and static attributes. Next.JS renders both the server side and the client side through Universal Apps, making it a great tool for building single-page applications. Next.JS is also highly regarded for its improved search engine optimization (SEO) benefits, which is a boon for marketers. Popular websites such as Hashnode, AT&T, TikTok, and Twitch use Next.JS for their front-end development.

Benefits of Next.JS

Better SEO — Opting for SSR instead of client-rendered JavaScript makes your website considerably more visible to search engines, which gives your business a better competitive edge.

Enhanced performance — Users value a website that loads fast without wasting their time. Next.JS improves loading speed and helps keep your website visitors engaged with better UX.

Easy upgrades — Upgrades take less time and involve no complex procedures, which makes it easy to keep improving the development experience.

Automated code splitting — Next.JS splits code per page. This is a great advantage because even as the application grows with more and more pages, the bundle size doesn't increase.

Image optimization — The powerful native API automatically optimizes images with its built-in components.
This not only improves developers' convenience but also refines the user experience of your website.

Limitations of Using Next.JS

A limited set of adaptable plugins makes it challenging to manage the application.
It requires integrating state management tools like MobX or Redux.
Next.JS lacks built-in frontend pages and thus requires building the entire front end diligently.

6) jQuery

Introduced in 2006, jQuery remains a popular frontend framework for website development despite its early establishment. Its longevity has given rise to a sizable community for obtaining solutions. Considered one of the best front-end JavaScript frameworks, jQuery is small, simple, feature-rich, and easy to use. It simplifies complicated tasks such as AJAX and DOM manipulation by wrapping them in a single line of code. The jQuery library is packed with brilliant features, including HTML/DOM manipulation, CSS manipulation, HTML event methods, effects and animations, AJAX, and other utilities. Furthermore, there are plugins available for nearly any task. Well-known companies such as Google, Microsoft, IBM, and Netflix, along with 41 million other websites, utilize jQuery.

Benefits of jQuery

Popularity — It is incredibly famous, with a large community of users and considerable contributors participating as developers and campaigners.

Cross-browser support — jQuery is compatible with popular web browsers and also supports CSS3 selectors and XML Path Language (XPath) syntax.

Animation Function — CSS properties allow animating elements and even let you adjust the animation duration with transition mode.

HTML modifications — It makes it easier to select DOM elements to traverse and modify their content for generating custom settings.

Lightweight — The minified library is only about 19 KB.

Limitations of Using jQuery

Relatively slow working speed.
Obsolete document object model APIs.
Not convenient for large-scale production use.

7) Backbone.JS

In 2010, Jeremy Ashkenas created Backbone.JS, an open-source JavaScript frontend framework. Its main purpose is to provide structure to web applications through key-value binding and custom events for models. Additionally, Backbone.JS includes a Router that is useful in developing single-page applications. Backbone.JS is particularly suited for creating client-rich applications that consume REST APIs. It simplifies the process by retrieving all the necessary code (HTML, CSS, and JavaScript) with a single page load, resulting in a smooth and enjoyable user experience. GitHub hosts Backbone.JS, and it offers an online suite, tutorials, and a list of real-world projects that employ it. Some of the companies that use Backbone.JS include Uber, Pinterest, Reddit, and Walmart.

Benefits of Backbone.JS

Event-driven communication — Backbone.JS reduces code clutter that would otherwise be difficult to read. Its events are built on top of regular DOM events, which makes the mechanism quite extensible and versatile.

Maintainability via Conventions — Backbone.JS keeps code clean even when multiple people collaborate on it. Conventions introduce a common coding style, so there is no need for an extensive set of coding standards.

Attuned to the back end — Backbone.JS provides great support for REST APIs. Correctly designed APIs are configured by the framework for direct access to read, write, and delete operations via GET, POST, and DELETE.
Simplicity and Flexibility — Backbone.JS requires only a few minutes to get started, even when using high-level additional libraries like Chaplin.

Vast Ecosystem — With large community support, Backbone.JS provides a lot of plugins and extensions. It also has cogent documentation with solutions for most problems.

Limitations of Using Backbone.JS

Even small DOM updates become expensive for large data structures, which can lead to poor UX.
Unit testing with Backbone.JS is quite complicated.
It lacks a controller building block.

Conclusion

Ultimately, the best front-end framework for a project will depend on the specific needs and goals of the project, as well as the preferences and experience of the development team. It is important to carefully evaluate the available options and choose a framework that will best support the project's needs. When selecting a front-end framework for your web development project, it is important to consider a variety of characteristics. These include ensuring that the framework is secure, focused on delivering user-driven outcomes, capable of improving performance, optimized for navigation, capable of retaining visitors, and able to communicate the business intent effectively. Additionally, there may be other important characteristics to keep in mind as well.
Customers have consistently posed the question of how to extract maximum value from the data residing in their SAP and other enterprise applications. They aim to optimize their business processes across different areas, make informed decisions, and leverage the wealth of data they have accumulated over the years. After all, data is the new oil.

Let's Dig Deeper

To conduct their core operations, businesses rely on critical functions such as finance, procurement, and supply chain. Making data-driven decisions is crucial, and business line leaders are continuously seeking innovative ways to obtain predictive insights from data stored in their SAP and other enterprise systems. While hosting large SAP systems in the cloud has become easier through technological innovation, many enterprises still struggle to derive value from the data in their SAP and other applications. At Google Cloud, we welcome customer and partner feedback on how we can address these challenges and believe that feedback is a gift, regardless of its form.

About Cortex Framework

Google Cloud launched the Cortex Framework in October of 2021, which enables customers to leverage their SAP and other application data and innovate on analytics and business processes. Here are the specifics of what the Cortex Framework entails and how it can provide value to customers. The Google Cloud Cortex Framework is a collection of services that helps you build, deploy, and manage advanced analytics solutions using SAP and other application data on the Google Cloud Platform. The primary objective is to assist customers in expediting the framework's implementation, allowing them to experiment with it quickly without having to spend weeks establishing data connections and setting up data pipelines. By following the straightforward instructions provided in the git repository readme, customers can deploy the data foundation with the required Google Cloud services, which usually takes a few minutes to complete. Customers currently using Looker can also set up pre-constructed Looker dashboards for frequently used analytical scenarios.

Value for Every Enterprise

The Google Cortex Framework simplifies the integration process through pre-built connectors for common applications and systems.

Real-time Insights: The Google Cortex Framework enables organizations to gain real-time insights into their data. By connecting their systems, businesses can create a unified view of their data, allowing them to make more informed decisions and identify trends and patterns that may not have been visible before. Begin your data cloud journey with less complexity, risk, and cost, and expedite business insights and outcomes with the aid of reference architecture patterns, pre-configured deployment content, and integration services.

Digital Transformation: The Google Cortex Framework can support digital transformation initiatives by enabling organizations to create a unified view of their data. This can help businesses identify new opportunities, streamline processes, and improve customer experiences.

Connect data with less risk, complexity, and cost: Reduce the time, effort, and cost of implementations and achieve desired business outcomes rapidly with proven deployment templates and reference patterns.

Accelerate business insights and outcomes: Kickstart insights and business value with easy-to-leverage, scenario-driven packaged analytics solution content, including data models and sample dashboards.
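To give a rough idea of what working with a deployed data foundation can look like, here is a minimal sketch using the Node.js client for BigQuery. The project ID, dataset, view, and column names (my-gcp-project, CORTEX_REPORTING.SalesOrders, NetValue) are purely hypothetical placeholders for illustration, not names defined by the Cortex Framework itself:

// Minimal sketch: querying a (hypothetical) Cortex reporting view in BigQuery
import { BigQuery } from '@google-cloud/bigquery';

async function topOrdersByValue() {
  const bigquery = new BigQuery(); // uses application default credentials

  // Dataset, view, and column names below are illustrative only
  const query = `
    SELECT SalesOrder, SoldToParty, NetValue
    FROM \`my-gcp-project.CORTEX_REPORTING.SalesOrders\`
    ORDER BY NetValue DESC
    LIMIT 10`;

  const [rows] = await bigquery.query({ query, location: 'US' });
  rows.forEach((row) => console.log(row.SalesOrder, row.NetValue));
}

topOrdersByValue().catch(console.error);

The same deployed views can, of course, also be explored through the pre-built Looker dashboards mentioned above.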
Use Cases for Google Cortex Framework

Google Cloud customers have the opportunity to leverage the available analytics accelerators to jumpstart their Cortex journey. The list of available accelerators includes:

Salesforce
- Leads Capture and Conversion
- Opportunity Trends and Pipeline
- Sales Activities and Engagement
- Case Overview and Trends
- Case Management and Resolution
- Accounts with Cases

Accounts Payable
- Accounts Payable
- Vendor Spend Analysis
- Vendor Performance

Accounts Receivable
- Doubtful receivables
- Days sales outstanding
- Overdue receivables
- Accounts receivable

Order to Cash
- Order fulfillment
- Order status snapshot
- Order details
- Sales performance
- Billing and pricing

Industry Solutions (Cortex Demand Sensing for SAP CPG)
- Alert potential impacts on-demand plan
- Forecast outside statistics range
- Unexpected weather alert
- Non-seasonal trend alert
- Promotional differences alert

Partner Solutions, Cross Industry
- Inventory Optimization
- Supply Network Risk AI
- Demand Forecasting
- Assortment Recommendation
- Product Fulfillment Optimization
- Many more…

Cortex Framework Technical Construct

Throughout the Cortex Data Foundation process, several Google Cloud services are utilized. A brief explanation of each service can help you understand its individual function.

Google BigQuery — Google BigQuery is a cloud-based data warehousing and business intelligence solution provided by Google Cloud. It allows users to analyze massive datasets using SQL-like queries without the need for managing any infrastructure or servers.

Cloud Build — Cloud Build is a service that executes your builds on Google Cloud. Cloud Build can import source code from a variety of repositories or cloud storage spaces, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.

Looker — Looker is a business intelligence and data analytics platform that enables organizations to gather, analyze, and share insights from data. It allows users to create customized data visualizations, explore data sets, and share findings with team members. Looker also provides advanced data modeling capabilities, allowing users to create data models and define relationships between different data sources.

Cloud Storage — Google Cloud Storage is a cloud-based object storage service offered by Google Cloud. It allows users to store and retrieve unstructured data objects such as photos, videos, and documents in a secure and scalable manner.

Cloud Composer — Google Cloud Composer is a fully-managed workflow orchestration service on Google Cloud Platform (GCP) that enables users to author, schedule, and monitor workflows that span across clouds and on-premises data centers.

Conclusion

The Cortex Framework for SAP is a robust framework that helps businesses streamline data integrations and facilitate advanced analytics-driven decision-making. It offers a unified view of data from both SAP and non-SAP systems, which enables organizations to streamline operations, automate business processes, and obtain real-time insights into their SAP and other applications data. The platform is highly adaptable, allowing businesses to tailor it to their unique requirements.

Call-to-Action

Are you familiar with the Cortex Framework, or have you already deployed it? If not, here is an automated deployment guide you can follow. I would appreciate any feedback or opinions you have on what you liked or didn't like. Feel free to connect me if you have any further questions.
If you find this write-up helpful, please consider sharing it with others.

DISCLAIMER: The opinions and viewpoints expressed in this article are solely mine and do not reflect my employer's official position or views. This article should not be considered an official endorsement or statement from my employer.
Application developers, as their name implies, like to develop applications––they ultimately care very little about frontend vs. backend and just want to deliver value to users. Being an application developer myself, and very much like other application developers, one of the things that constantly drive my decision-making when selecting tools and frameworks is the fact that I’m also quite lazy. My main objective is to be able to ship applications with as little effort as possible, and my pet peeve is silly repetitive, mechanical tasks that make me die a little inside every time I need to perform them. For example, I don’t like to remember to align things that don’t automatically align themselves, as one common example of a repetitive task that is completely superfluous. I guess my trademark is that when I encounter challenges, I will always look for ways to automate a solution. (Like the app I once built in the 90s to send a romantic text message to my girlfriend once a day––to overcome my romantic shortcomings). I’ve been building tools and frameworks to increase developer productivity for most of my professional career. This all started in 2006 when I identified a need for modernizing applications created using a low-code generator to C#.NET. I decided to not only create a fully automated migration solution but, more importantly, create a C# class library that would enable an equivalent speed of development that was previously made possible by using the low-code tool. This was a very high bar, as developers who are used to working in low code are also often the type that don’t like the details or the bits and bytes of the internals. A developer happy to use low-code tooling is essentially only looking to write code that gets a specific task done. Being able to replicate this in a coding framework needed to provide as seamless and simple of an experience as they had until now––but with code. Hundreds of organizations have used Firefly’s migration solution, from fortune 500s and governments to small software houses. The C# library provided is still in use today, enabling “low-code level,” highly productive application development combined with the flexibility of code. The ability to efficiently ship full applications with a simple library, to me, felt like something that had to be portable and replicated to the modern web. The wheels began to turn. What Brought Us Here Like many other web developers, when Node.js came out, I was excited about the promise of the backend to the frontend that came with it, that finally, we don’t have to learn two languages to build a full stack application. Just the sheer mental switch between languages––for me, it was C# on the backend and then Javascript on the frontend, and this always came with its toll of friction and context switch that could even be considered debilitating to my dev workflow. Even with the possibility Node.js presented of enabling Javascript on both ends, still, not many developers chose to do so. All of the best practices and learning resources continued to recommend using two separate approaches for the backend and front and a complete separation of tools, libraries, and frameworks with very little sharing of code and patterns for the frontend and backend. 
This modus operandi of having two separate languages, each with its own syntax and logic, creates the need for a lot of repetitive boilerplates, such as code to pull the data from the database (for every single application entity), code to expose entity CRUD operations as an API, with four, five or even six different routes per entity, methods for these routes, and these all would again get duplicated and reused hundreds of times, and each time further complicated. You get the idea. And this is just on the backend. Now, on the frontend, you have reverse code for this; you have code that takes these JSON responses and rebuilds objects out of these for you to be able to use them on the frontend. Just trying to get data from the database to the users, but, in the meantime, you need code to read the database, serialize the JSON, and send it over a route, only to have to deserialize and query it on the frontend, just to get it in front of the user. This is all mechanical code that practically does nothing. Where eventually, all of this is repeatable. Wait, there’s more. Every API route needs to be able to fetch data, provide some server-side paging, sorting, and filtering, delete, insert, and update; all of these very generic actions are repeated over and over by all developers building full-stack applications all the time in millions of lines of code. Now let’s talk about concerns that cross over from the frontend to the backend that get duplicated. Up until a couple of years ago, the frontend and backend were, at best, sharing types, but there’s so much more to types than strings or integers. Commons questions like: How do you serialize these from JSON and then to JSON? How do you validate them? Today, validations on the frontend and backend are operations that are completely separate. Which begs the question: WHY? Why should I have to remember (as a lazy developer, mind you) to have to perform two separate validations on the frontend and the backend? Duplicate Boilerplate Code There’s also a cultural aspect with this dichotomy between frontend and backend code, where there needs to be such impeccable communication and alignment between the developers that is almost an impossible feat. At the end of the day, all those places are places of friction where things can go wrong, with two completely different developers maintaining the code. Enter Remult Remember when I said that when I encounter a challenge, my first course of action is to try and automate it away? I couldn’t fathom how, in 2018, it still is not viable to be able to get the same code to run on the frontend and the backend. I started toying with this idea to see if I could truly make this possible and improve my productivity (and hopefully for other developers, too)––from validations to typing, through authentication and authorization, all of the typical code that’s constantly duplicated. Remult Backstory Remult started as a side project without a name and with a completely different (albeit noble) purpose. My wife volunteered at a food bank, and as a result, I, too, was volunteered to participate in distributing food parcels to the needy. One day, as I was driving to distribute the parcels, holding a paper with a list of addresses, I found myself getting lost in places you don’t want to get lost, and I knew I had to build an app to help volunteers navigate efficiently. 
I knew I could solve a lot of friction in the process of delivering food parcels to the needy through code––which is what I do best, and I wanted to focus on building the actual application and its capabilities, and not the pieces that hold it together. So I built an application for inventory and distribution of our local food bank in Even Yehuda, an application they could use to generate distribution lists for volunteer couriers and for the couriers to navigate to and report back on delivery. I wrote the app and, at the same time, the framework as well, the very framework I wanted to have when building web applications. One that would focus on the data flow from the backend database to the frontend framework (the framework of your choice––whether Angular, React, or Vue––it shouldn’t matter). Instead of having to go through the entire process described above of serializing objects for every HTTP call on the frontend and then reversing the entire process back into JSON from the backend to the frontend––this framework made it possible to query on the frontend, using objects, and then automated the entire process from the frontend to the backend and back again. I finally had the framework I dreamed of that eliminates the need to write all of this boilerplate, repetitive, duct tape code over and over again. With its growth and use, a colleague and I worked on the framework, invested in its ability to scale and its stability, improved its API that underwent several iterations, and added many features. The application built upon this framework was quickly adopted by other food banks around Israel, which often encountered similar challenges with parcel delivery. Our application, after its first year, managed to help distribute 17,000 parcels from food banks around Israel. We were quite proud of this achievement––we started feeling like our framework could withstand the test of scale, but we had no idea what was to come next. What COVID Taught Us About Scale and Security Then COVID hit––and lockdowns cut people in need off from the entire world. Suddenly, the need to distribute food to the elderly and disabled skyrocketed. The demand grew from 17,000 parcels annually to 17,000 parcels a day. The app was then contributed for free to municipalities, NGOs, and even the IDF’s Home Front to enable better inventory, allocation, and distribution of parcels around Israel. Once the application was adopted by the IDF, it also underwent a battery of security testing––cyber and penetration testing, which leveled up its security significantly. The backend-to-frontend framework, and the application built upon, which was supposed to be just an experiment, withstood the scale of half a million parcel distributions in 2020 alone and since then has maintained a similar number and is only growing. During COVID, it was adopted by multiple countries around the globe––from Australia to the EU, USA, and South Africa––to respond to similar needs during the pandemic. This is the backbone upon which Remult was built and battle-tested, running on a $16-a-month Heroku server. Once the pandemic was behind us, my co-creator and I realized we learned a lot. We understood the framework was robust and could scale, was aligned with security best practices, and delivered the promise of democratizing the ability to build full-stack applications without all the known friction. We wanted to share this with the world. 
So we open-sourced the framework to enable other lazy developers to be able to invest their energy in writing excellent applications that deliver value to users and not on repeatable, mechanical code that actually can be automated and shared by the backend and frontend. Conclusion In this article, you were introduced to Remult, an open-source frontend-to-backend framework, its backstory, and what we learned along the way. Let us know what you’d like to see next, and feel free to contribute to the project. In our next article, we’ll do a roundup of tools in the ecosystem looking to solve similar challenges through different approaches and unpack where Remult fits in and what it’s optimized for.
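Before wrapping up, and to make the idea of one model shared by the frontend and the backend more tangible, here is a rough TypeScript sketch in the spirit of Remult's documented entity API. The Task entity, its fields and validation, and the Express wiring are illustrative; exact decorator names and options may differ between Remult versions:

// shared/Task.ts - one class describes the data, its CRUD API, and its validation
import { Entity, Fields } from 'remult';

@Entity('tasks', { allowApiCrud: true }) // REST routes for the entity are exposed automatically
export class Task {
  @Fields.uuid()
  id = '';

  // The validation below runs in the browser and is enforced again on the server
  @Fields.string({
    validate: (task: Task) => {
      if (task.title.length < 3) throw Error('Title is too short');
    }
  })
  title = '';

  @Fields.boolean()
  completed = false;
}

// server.ts - the same class powers the API
import express from 'express';
import { remultExpress } from 'remult/remult-express';

const app = express();
app.use(remultExpress({ entities: [Task] }));
app.listen(3000);

// frontend - query with typed objects; routes, JSON, and (de)serialization are handled for you
import { remult } from 'remult';

async function loadOpenTasks() {
  return remult.repo(Task).find({ where: { completed: false } });
}

The point of the sketch is the shape of the solution, not the exact API: the entity class, its types, and its validation rules live in one place and are consumed by both sides, which is precisely the duplicated boilerplate described earlier.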
In today's fast-paced world of software development, new technologies and languages appear very frequently, and with them come several testing and automation tools and frameworks. Choosing the right set of tools and frameworks becomes an absolute necessity because it impacts both accuracy and TTM (Time to Market). JavaScript is one of the most widely used programming languages for web automation testing, and it supports a number of Selenium test automation frameworks for web UI testing. Of all the available ones, the Jasmine framework turns out to be the best suited, as it provides a stable and functional architecture. It is easy to get started with the Jasmine framework and implement test scenarios with it. In this tutorial on Selenium automation testing with Jasmine, we look into the nitty-gritty of Jasmine from an automation testing standpoint. We will also learn how to set it up, followed by writing and executing sample code. Let's get started!

Introduction to Jasmine Framework

Jasmine is an open-source JavaScript testing framework. It is a behavior-driven development (BDD)-inspired framework that is independent of any other framework and is used for unit testing of synchronous and asynchronous JavaScript test scenarios. In addition to its outstanding support for JavaScript, it also has ports for languages such as Python and Ruby. Furthermore, it is available in different distributions, such as standalone and Node.js. An additional benefit of using Jasmine is that it is an independent framework with minimal (to no) dependency on language, browser, and platform. The Jasmine framework does not require a DOM and is very easy to set up. It also provides an immaculate and easy-to-read syntax, like the example below:

describe("A suite is just a function", function() {
  var a;

  it("and so is a spec", function() {
    a = true;
    expect(a).toBe(true);
  });
});

Why Use Jasmine Framework as a Testing Framework for JavaScript Tests?

Having understood what the Jasmine framework is, let us look at the key advantages of using it for JavaScript and Selenium web UI testing:

- Easy to set up and easy to write tests.
- Very fast, as it has almost zero overhead and no external dependencies for the Jasmine core.
- Comes with out-of-the-box support to fulfill all the test needs.
- Can run browser and Node.js tests with the same framework.
- An extensive and active community and regularly updated documentation for support and development.
- Support for spies for implementing test doubles in the framework.
- It even supports testing of frontend code using the Jasmine-jQuery extension.
- It also supports test-driven development in addition to behavior-driven development.
- Unlike other JavaScript testing frameworks, the Jasmine framework has built-in assertions.
- It comes with an inbuilt test runner, which can be used to run browser tests.
- Provides a rich set of built-in matchers that can be used to match expectations and add asserts to the test cases. Some examples are toEqual, toBe, and toBeTruthy.

Advantages of Using Jasmine Framework With Selenium

Jasmine and Selenium are popularly used for JavaScript automation testing and web UI automation testing, respectively. When working on a JavaScript-based web UI project, it is better to combine forces and take advantage of both: the Jasmine framework and Selenium automation testing complement each other in being open source, easy to implement, and easy to scale.
Their capacity to work with almost all browsers and platforms is another added advantage. By using a Selenium Grid, Jasmine framework tests can be executed more quickly through parallel execution. You can refer to our earlier article on the "Importance of Parallel Testing in Selenium" to get a detailed understanding of the advantages offered by parallel test execution.

Getting Started With the Jasmine Framework

Having gathered some knowledge around the what and why of the Jasmine framework in JavaScript, let us now dig deeper and get our hands dirty with the implementation. In this section of the Selenium Jasmine framework tutorial, we will learn about the workflow of Jasmine and understand the basics of writing test scenarios. Let us assume we need to test a file test.js using the Jasmine framework. SpecRunner.html would be the output file that runs all test cases from spec.js, taking Lib as an input, and then shows the results in the browser.

- Lib: Consists of built-in JavaScript files that help test varied functions and other JS files within the project.
- SpecRunner.html: A usual HTML file that renders the output of the test run in the browser.
- test.js: This file comprises the actual functionality/code under test, which is to be tested with the help of the spec.js and lib files.
- spec.js: Also referred to as the test case file, this contains all the test cases in a prescribed format for the file to be tested.

Here are the basic building blocks of Jasmine framework tests:

- Suite block: A suite forms the basic building block of the Jasmine framework. One suite is composed of test cases or specs written to test a particular file and is made up of two blocks: the describe() block and the it() block.
- describe(): This is used to group related test cases written under it(). There is only one describe() at the top level unless the test suite is a nested one. It takes a string parameter that names the collection of test cases in that particular describe() block.
- it() — contains specs/test cases: This is used to define the specs or test cases inside the describe() block. Like describe(), it takes similar parameters: one string for the name and one function that is the test case we want to execute.

A spec without any assertions is of no use. Each spec in the Jasmine framework consists of at least one assertion, which is referred to as an expectation here. If all expectations pass in a spec, it is called a passing spec. On the other hand, if one or more expectations fail in a spec, it is called a failing spec.

Note: Since it() and describe() blocks are JavaScript functions, all the basic variable and scope rules of JS code apply to them, and they can contain any valid executable code. This means variables declared at the describe() level are accessible to all the it() blocks within that test suite.

Expectations or Matchers

Expectations or matchers are the way to implement assertions in the Jasmine framework. This is done with the help of the expect function, which takes the actual value produced by the code under test. It is then chained with a matcher function that takes the expected result for that test case and evaluates it to give a boolean result: the returned value is true if the expectation matches, else it is false. The Jasmine framework also provides a utility to check for negative assertions by adding not before the matcher on a given expect call. All the expect functions come under the it() block, and each it() block can have one or more expect() calls.
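Jasmine also supports spies for implementing test doubles, as mentioned earlier. Here is a small, self-contained sketch (the mailer object and its send method are hypothetical, used only to illustrate the mechanism):

describe("A spec using spies", function() {
  // Hypothetical collaborator we want to observe instead of calling for real
  var mailer = {
    send: function(address, body) { /* would talk to a mail server */ }
  };

  beforeEach(function() {
    // Replace mailer.send with a spy that records calls and returns a canned value
    spyOn(mailer, "send").and.returnValue(true);
  });

  it("records how the dependency was called", function() {
    var result = mailer.send("user@example.com", "Hello");

    expect(result).toBe(true);                        // the canned return value
    expect(mailer.send).toHaveBeenCalled();           // the spy was invoked
    expect(mailer.send).toHaveBeenCalledWith("user@example.com", "Hello");
    expect(mailer.send).toHaveBeenCalledTimes(1);
  });
});

Spies keep unit tests fast and independent of external systems, which fits the zero-dependency philosophy described above.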
The Jasmine framework provides a wide range of in-built matchers and lets you extend them via custom matchers:

describe("This is a describe block for expectations", function() {
  it("this is a positive matcher", function() {
    expect(true).toBe(true);
  });

  it("this is a negative matcher", function() {
    expect(false).not.toBe(true);
  });
});

Here is a small example to understand the usage and implementation of the describe(), it(), and expect() blocks. We will be testing a file named Addition.js, which has a corresponding spec file with test cases named AdditionSpec.js:

var Addition = {
  initialValue: 0,

  add: function (num) {
    this.initialValue += num;
    return this.initialValue;
  },

  addAny: function () {
    var sum = this.initialValue;
    for (var i = 0; i < arguments.length; i++) {
      sum += arguments[i];
    }
    this.initialValue = sum;
    return this.initialValue;
  }
};

describe("to verify Addition.js file", function() {
  beforeEach(function () {
    // reset the shared state so each spec starts from a known value
    Addition.initialValue = 0;
  });

  //test case: 1
  it("Should have initial value", function () {
    expect(Addition.initialValue).toEqual(0);
  });

  //test case: 2
  it("should add numbers", function() {
    expect(Addition.add(5)).toEqual(5);
    expect(Addition.add(5)).toEqual(10);
  });

  //test case: 3
  it("Should add any number of numbers", function () {
    expect(Addition.addAny(1, 2, 3)).toEqual(6);
  });
});

The Jasmine framework also provides support for nested suites utilizing nested describe() blocks. Here is an example of a spec file with nesting for the same Addition.js:

describe("to verify Addition.js file using nested suites", function() {
  beforeEach(function () {
    // reset the shared state so each spec starts from a known value
    Addition.initialValue = 0;
  });

  // Starting of first suite block
  describe("Retaining values", function () {
    //test case: 1
    it("Should have initial value", function () {
      expect(Addition.initialValue).toEqual(0);
    });
  }); //end of first suite block

  //second suite block
  describe("Adding single number", function () {
    //test case: 2
    it("should add numbers", function() {
      expect(Addition.add(5)).toEqual(5);
      expect(Addition.add(5)).toEqual(10);
    });
  }); //end of second suite block

  //third suite block
  describe("Adding Different Numbers", function () {
    //test case: 3
    it("Should add any number of numbers", function() {
      expect(Addition.addAny(1, 2, 3)).toEqual(6);
    });
  }); //end of third suite block
});

Using Standalone Jasmine Framework Distribution for Selenium Automation Testing

To get started with the Jasmine framework, follow the steps below to complete the system setup.

Step 1: Download the latest version of Jasmine from the official website.
Step 2: Download the standalone zip for the selected version from this page.
Step 3: Create a new directory in your system and then add a sub-directory to it.
Step 4: Move the downloaded standalone zip inside this sub-directory and unzip it there. Once unzipped, your directory structure should look something like this.
Step 5: To verify the setup, load the SpecRunner.html in your web browser. If you see an output like below, it means you have completed the setup for the Jasmine framework on your system.

Let's see how to modify this to run our test case for Addition.js using AdditionSpec.js and Nested_AdditionSpec.js. First, we will remove all the existing files from the src and spec folders and add the files for our example, which we have already walked through. After doing this, the folder structure would look something like this. Having updated the files, we need one more step to execute our specs, which is to update the references to the files under the spec and src folders in SpecRunner.html as per our changes.
For this, open the SpecRunner.html and it will look like this:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Jasmine Spec Runner v3.7.1</title>

  <link rel="shortcut icon" type="image/png" href="lib/jasmine-3.7.1/jasmine_favicon.png">
  <link rel="stylesheet" href="lib/jasmine-3.7.1/jasmine.css">

  <script src="lib/jasmine-3.7.1/jasmine.js"></script>
  <script src="lib/jasmine-3.7.1/jasmine-html.js"></script>
  <script src="lib/jasmine-3.7.1/boot.js"></script>

  <!-- include source files here... -->
  <script src="src/Player.js"></script>
  <script src="src/Song.js"></script>

  <!-- include spec files here... -->
  <script src="spec/SpecHelper.js"></script>
  <script src="spec/PlayerSpec.js"></script>
</head>
<body>
</body>
</html>

Update the file names in the src and spec sections and save it. Now, you are ready to load the SpecRunner.html again to see the results.

Setting Up the Jasmine Framework Environment Using npm

To get started with Selenium automation testing using the Jasmine framework in JavaScript, we need some prerequisite setup on our system:

Step 1: Make sure the latest Node.js and npm versions are installed on the system, and upgrade them if required:

brew install node
npm install npm@latest -g

Step 2: Navigate to the directory where you want to create and execute your test case, and install the Jasmine framework by triggering the npm command:

npm install -g jasmine

Step 3: Once this is done, we will install ChromeDriver and Selenium WebDriver in the same directory to execute the Jasmine framework Selenium test cases on the local Selenium Grid:

npm install --save chromedriver
npm install --save selenium-webdriver

Step 4: Once all this is done, we are good to initialize our Jasmine framework project using the init command:

jasmine init

You should be able to see a spec folder, which will be further used for adding test case files. For this Selenium Jasmine framework JavaScript tutorial, we will be using the following .js file:

// Require modules used in the logic below
const {Builder, By, Key, until} = require('selenium-webdriver');
// You can use a remote Selenium Hub, but we are not doing that here
require('chromedriver');

const driver = new Builder()
  .forBrowser('chrome')
  .build();

// Setting variables for our testcase
const baseUrl = 'https://accounts.lambdatest.com/login'

// function to check for login elements and do login
var loginToLamdbatest = async function() {
  let loginButton = By.xpath('//button');
  // navigate to the login page
  await driver.get(baseUrl);
  // wait for login page to be loaded
  await driver.wait(until.elementLocated(loginButton), 10 * 1000);
  console.log('Login screen loaded.')
}

//to set jasmine default timeout
jasmine.DEFAULT_TIMEOUT_INTERVAL = 20 * 1000;

// Start to write the first test case
describe("Selenium test case for login page", function() {
  it("verify page elements", async function() {
    console.log('<----- Starting to execute test case ----->');
    //to do login
    await loginToLamdbatest();
    var welcomeMessage = By.xpath('//*[@class="form_title"]');
    //verify welcome message on login page
    expect(await driver.findElement(welcomeMessage).getText()).toBe('Welcome Back !');
    //to quit the web driver at end of test case execution
    await driver.quit();
    console.log('<----- Test case execution completed ----->');
  });
});

In this example file, we have automated a scenario to navigate to the LambdaTest login page and then verify the welcome message on the page.
We have used a local Selenium WebDriver and are running the tests on the Chrome browser. To execute the test case, use the following command if you are at the same directory level as the file:

jasmine exampleSeleniumSpec.js

Once triggered, you will see a Chrome browser tab open up on your system and get redirected to the given page, and after successful verification, the browser is closed and the terminal shows logs like below:

⇒ jasmine example-spec.js
Randomized with seed 07075
Started
<----- Starting to execute test case ----->
Login screen loaded.
<----- Test case execution completed ----->
.

1 spec, 0 failures
Finished in 7.882 seconds
Randomized with seed 07075 (jasmine --random=true --seed=07075)

Using Jasmine to Run Selenium Tests on Selenium Grid

Running tests on a local Selenium Grid is not a scalable or reliable approach. You would need to invest significantly in building the test infrastructure if the tests have to be run across a number of browser, platform, and device combinations. This is where cloud testing can be useful, as it offers the much-needed benefits of scalability, reliability, and parallel test execution. We will run the Jasmine framework tests on the Selenium Grid cloud on LambdaTest. You will need to have your LambdaTest username and access token handy to continue with the execution; the details are available in the LambdaTest profile section. Set the following environment variables on your machine:

For Mac/Linux
export LT_USERNAME="YOUR_USERNAME"
export LT_ACCESS_KEY="YOUR ACCESS KEY"

For Windows
set LT_USERNAME="YOUR_USERNAME"
set LT_ACCESS_KEY="YOUR ACCESS KEY"

Once this is done, we modify the spec file to have a remote WebDriver configuration for the LambdaTest Hub to execute test cases on the grid. For this, we will be adding the required browser capabilities for execution on the LambdaTest Grid and using a remote WebDriver pointing at their grid. The modified exampleSeleniumSpec.js as per our requirements would look like this:

// Require modules used in the logic below
selenium = require('selenium-webdriver');
const {Builder, By, Key, until} = require('selenium-webdriver');

// Setting variables for our testcase
const baseUrl = 'https://accounts.lambdatest.com/login'
const username = process.env.LT_USERNAME || "<Your_lambdatest_username>"
const accessKey = process.env.LT_ACCESS_KEY || "<Your_lambdatest_accessKey>"

var remoteHub = 'https://' + username + ':' + accessKey + '@hub.lambdatest.com/wd/hub';

const caps = {
  'build': 'Jasmine-selenium-javascript',
  'browserName': 'chrome',
  'version': '73.0',
  'platform': 'Windows 10',
  'video': true,
  'network': true,
  'console': true,
  'visual': true
};

const driver = new selenium.Builder().
  usingServer(remoteHub).
  withCapabilities(caps).
  build();

// function to check for login elements and do login
var loginToLamdbatest = async function() {
  let loginButton = By.xpath('//button');
  // navigate to the login page
  await driver.get(baseUrl);
  // wait for login page to be loaded
  await driver.wait(until.elementLocated(loginButton), 10 * 1000);
  console.log('Login screen loaded.')
}

//to set jasmine default timeout
jasmine.DEFAULT_TIMEOUT_INTERVAL = 20 * 1000;
jasmine.getEnv().defaultTimeoutInterval = 60000;

// Start to write the first test case
describe("Selenium test case for login page", function() {
  it("verify page elements", async function() {
    console.log('<----- Starting to execute test case ----->');
    //to do login
    await loginToLamdbatest();
    var welcomeMessage = By.xpath('//*[@class="form_title"]');
    //verify welcome message on login page
    expect(await driver.findElement(welcomeMessage).getText()).toBe('Welcome Back !');
    //to quit the web driver at end of test case execution
    await driver.quit();
    console.log('<----- Test case execution completed ----->');
  });
});

This is executed with a similar command and gives the same output as follows on the terminal:

⇒ jasmine lambdatest.js
Randomized with seed 11843
Started
<----- Starting to execute test case ----->
Login screen loaded.
<----- Test case execution completed ----->
.

1 spec, 0 failures
Finished in 15.777 seconds
Randomized with seed 11843 (jasmine --random=true --seed=11843)

You can now navigate to the LambdaTest dashboard for your user account and view the execution results and logs on the various tabs. Under "Recent Tests" on the left side of the Dashboard tab, you can see the latest execution. To analyze the complete timeline of your different test case runs, navigate to the "Automation" tab. In the "Automation" tab, go to the "Automation Logs" section to view the entire execution logs and the video of our test case. This helps in debugging issues by analyzing the run. You can also navigate to the other tabs as required to add or view data for the execution. With this, we have completed our Jasmine framework JavaScript Selenium tutorial covering setup and execution with the Jasmine framework on the local grid and the LambdaTest cloud grid.

It's a Wrap

So, in this Jasmine framework tutorial with Selenium, we learned about the whats and whys of Jasmine and Selenium, how they make a good combination for automating web UI code using JavaScript, and the advantages this combination provides. Having done the setup and understood the test case basics, we have successfully executed our test case locally and on a cloud Selenium Grid with the help of LambdaTest. So, get started and write your first Selenium automation testing code with Jasmine and JavaScript. Happy Testing!
Last year, I wrote two articles about JPA Criteria and Querydsl (see Introduction and Metamodel articles). Since the end of last year, there's been a new major release of Spring Boot 3. This release is based on Spring Framework 6 with several significant changes and issues that we should consider when upgrading. The goal of this article is to highlight these changes when upgrading the sat-jpa project (SAT project). The technologies used here are Spring Boot 3.0.2, Hibernate 6.1.6.Final, Spring Data JPA 3.0.1, and Querydsl 5.0.0. Spring Framework 6.0.4 Spring Framework 6 has many changes (see What's New in Spring Framework 6.x); the key changes are: Switch Java baseline to Java 17 (still the latest Java LTS at the time of writing this article) — i.e., it's the minimum Java version we have to use. Switch Jakarta baseline to Jakarta EE 9+. Besides that, there's just one minor change in Spring MVC used in the SAT project. Spring MVC 6.0.4 All methods in the ResponseEntityExceptionHandler class changed their signatures. The status argument was changed from the HttpStatus to the HttpStatusCode type (see the change in the CityExceptionHandler class). HttpStatus in Spring MVC 5 Java @Override protected ResponseEntity<Object> handleMethodArgumentNotValid(MethodArgumentNotValidException exception, HttpHeaders headers, HttpStatus status, WebRequest request) { return buildResponse(BAD_REQUEST, exception); } HttpStatusCode in Spring MVC 6 Java @Override protected ResponseEntity<Object> handleMethodArgumentNotValid(MethodArgumentNotValidException exception, HttpHeaders headers, HttpStatusCode status, WebRequest request) { return buildResponse(BAD_REQUEST, exception); } Spring Boot 3.0.2 Spring Boot 3 has many changes (see Spring Boot 3.0 Release Notes); the most important are these: Java 17 (defined by the Spring Framework). Bump up to Spring Framework 6 (see above). Bump up to Hibernate 6 (see below). Migration from Java EE to Jakarta EE dependencies — it's not important in our case as we don't rely on enterprise Java here, but it can affect many dependencies (e.g., servlet API, validation, persistence, JMS, etc.). Minor change in Spring MVC 6 in the ResponseEntityExceptionHandler class (see above). Let's start with the simple Jakarta EE changes so that we can focus on persistence after that. Jakarta EE Dependencies The significant change is the migration from Java EE to Jakarta EE dependencies. Spring Framework 6 set the baseline as Jakarta EE 9 (see What's New in Spring Framework 6.x), but Spring Boot 3 already uses Jakarta EE 10 (see Spring Boot 3.0 Release Notes) for many APIs (e.g., Servlet or JPA — to name some technologies). As a consequence, all classes using a class from the javax package have to switch to the same class from the jakarta package instead (see, e.g., the PostConstruct or Entity annotations). Javax Imports Java import javax.annotation.PostConstruct; import javax.persistence.Entity; Jakarta Imports Java import jakarta.annotation.PostConstruct; import jakarta.persistence.Entity; Hibernate 6.1.6 Another major change in Spring Boot 3 is the upgrade from Hibernate 5.6 to Hibernate 6.1. The details about Hibernate 6.1 can be found here or inside the release notes. Honestly, I didn't pay attention to this change until I had to fix one failing test due to a different result size (see the fix in the CountryRepositoryCustomTests class). The implementation with the new Hibernate 6 returns fewer entities than before. It's worth mentioning two changes here based on this investigation.
Let's start with logging first. Logging The Hibernate loggers were changed from org.hibernate.type to org.hibernate.orm.jdbc (see this reference and this reference). Note: all available configuration items can be found in the org.hibernate.cfg.AvailableSettings class. Hibernate 5 There was a single logger for the bound and extracted values. Properties files logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE Hibernate 6 Hibernate 6 changed the main package and split the logger into two different loggers in order to distinguish the operation, i.e., whether a value is being bound or extracted. Properties files logging.level.org.hibernate.orm.jdbc.bind=trace logging.level.org.hibernate.orm.jdbc.extract=trace Semantic Query Model Once the logging was fixed, it was confirmed that Hibernate really extracts two records from the DB: Plain Text 2023-01-25T08:40:18.819+01:00 INFO 6192 --- [ main] c.g.a.s.j.c.CountryRepositoryCustomTests : Started CountryRepositoryCustomTests in 4.678 seconds (process running for 5.745) Hibernate: select c2_0.id,c2_0.name from city c1_0 join country c2_0 on c2_0.id=c1_0.country_id where c1_0.name like ? escape '!' and c1_0.state like ? escape '!' 2023-01-25T08:40:19.221+01:00 TRACE 6192 --- [ main] org.hibernate.orm.jdbc.extract : extracted value ([1] : [BIGINT]) - [3] 2023-01-25T08:40:19.222+01:00 TRACE 6192 --- [ main] org.hibernate.orm.jdbc.extract : extracted value ([2] : [VARCHAR]) - [USA] 2023-01-25T08:40:19.240+01:00 TRACE 6192 --- [ main] org.hibernate.orm.jdbc.extract : extracted value ([1] : [BIGINT]) - [3] 2023-01-25T08:40:19.241+01:00 TRACE 6192 --- [ main] org.hibernate.orm.jdbc.extract : extracted value ([2] : [VARCHAR]) - [USA] However, the test receives only a single entity from Hibernate, as you can see when debugging the test. The changed behavior is caused by the new Semantic Query Model with automatic deduplication (see the Semantic Query Model part) introduced with Hibernate 6 (see line 178 in the org.hibernate.sql.results.spi.ListResultsConsumer class). Hibernate 6 now returns the deduplicated result. Hibernate JPA Generator The Hibernate Jpamodelgen Maven dependency managed by Spring Boot Dependencies was moved from the org.hibernate group to the org.hibernate.orm group. See: Spring Boot 2.7.5 XML <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-jpamodelgen</artifactId> </dependency> Spring Boot 3.0.2 XML <dependency> <groupId>org.hibernate.orm</groupId> <artifactId>hibernate-jpamodelgen</artifactId> </dependency> Liquibase To finish with the core persistence dependencies, Liquibase is upgraded from version 4.9.1 to 4.17.2. There's no significant difference except the JAX-B dependency usage. The dependency should use JAX-B from Jakarta instead of Javax (see the following reference).
Spring Boot 2.7.5 XML <dependency> <groupId>org.liquibase</groupId> <artifactId>liquibase-core</artifactId> </dependency> Spring Boot 3.0.2 XML <dependency> <groupId>org.liquibase</groupId> <artifactId>liquibase-core</artifactId> <exclusions> <exclusion> <!-- due to SB 3.0 switch to Jakarta --> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>jakarta.xml.bind</groupId> <artifactId>jakarta.xml.bind-api</artifactId> </dependency> Spring Data JPA 3.0.1 Spring Boot 3.0.2 depends on Spring Data Release Train 2022.0.1 (see Spring Data 2022.0 - Turing Release Notes), where Spring Data JPA 3.0.1 is used with these key changes (see the release notes): switch the Java baseline to Java 17, switch the Jakarta baseline to Jakarta EE 10, and bump up to Hibernate 6. Note: the previously used version (in our case) was the 2.7.5 version. Querydsl 5.0.0 The Querydsl version was not changed, but it was impacted in a similar way as Liquibase. The dependencies have to be used from Jakarta instead of Javax. Therefore, the Querydsl dependency has to use the jakarta classifier instead of the old jpa classifier (see this reference). Spring Boot 2.7.5 XML <dependency> <groupId>com.querydsl</groupId> <artifactId>querydsl-jpa</artifactId> </dependency> <dependency> <groupId>com.querydsl</groupId> <artifactId>querydsl-apt</artifactId> <version>${querydsl.version}</version> <classifier>jpa</classifier> <scope>provided</scope> </dependency> Spring Boot 3.0.2 XML <dependency> <groupId>com.querydsl</groupId> <artifactId>querydsl-jpa</artifactId> <version>${querydsl.version}</version> <classifier>jakarta</classifier> </dependency> <dependency> <groupId>com.querydsl</groupId> <artifactId>querydsl-apt</artifactId> <scope>provided</scope> <version>${querydsl.version}</version> <classifier>jakarta</classifier> </dependency> Conclusion This article has covered the upgrade to the latest Spring Boot 3.0.2 for JPA and Querydsl (at the time of writing this article). The article started with Spring Framework and Spring Boot changes. Next, all changes to Hibernate and related technologies were covered. In the end, we mentioned the minor changes related to Spring Data JPA 3.0.1 and Querydsl 5.0.0. The complete source code demonstrated above is available in my GitHub repository. Note: there is also a preceding article, Upgrade Guide To Spring Data Elasticsearch 5.0, dedicated to a similar upgrade of Elasticsearch with Spring Boot 3.
Angular is a free, open-source, TypeScript-based JavaScript framework from Google. The primary purpose of the Angular stack is the development of Single Page Applications (SPAs). It is also valuable for creating large applications that are easy to maintain. Developers love the standard structure of Angular, and Google has been releasing updates to its original framework. The latest release is Angular 14, which has many new features and updates that have changed how Angular components are written. It was released on 2nd June 2022. The latest version enables any software development company to build lighter and faster applications. This article digs deeper into the new features and updates that greatly help in delivering application development services. Updated Features in Angular 14 Many developers were unhappy that, up to Angular 13, page titles had to be set manually for each route. Angular 14 eliminates this need, as the new Route.title property lets the router set the title of the respective pages directly. The new API also simplifies service creation with injectable services. Standalone Components Standalone components are introduced to put an end to NgModules. However, modules are not obsolete yet; they remain in place to ensure compatibility. Now, Angular developers can streamline development and testing, and standalone components are available in developer preview. After publishing, developers can use them throughout the development and research phase. Standalone components are customizable, and they can work with various app features. Before Angular 14, every component had to be associated with a module. Also, the entire application would fail if any parent module’s declarations were not appropriately linked with every component. To create a standalone component, use the following code: import {Component} from '@angular/core'; @Component({ selector: 'my-app', standalone: true, template: `<h1>Hello World!</h1>`, }) export class AppComponent {} Developers can also create multiple @Components and add them to the same file. Strictly Typed Forms The Strictly Typed Forms feature makes creating front-end applications a little bit easier. This new feature allows developers to enforce strict types for form controls using TypeScript. Each field will be strictly checked against its type before submitting and validating (a short sketch appears at the end of this Angular section). The further improvements are: Responsive look and feel Easy to create forms for users Form fields are validated before submitting Form validation errors are displayed on the screen without pop-ups To incorporate this new feature, developers can use the auto migration option for existing applications. The new update will not affect template-based forms. Angular CLI Auto-Completion This feature facilitates auto-completion for Angular Command Line Interface (CLI) commands. It uses the IntelliJ IDEA plugin and TypeScript definition files. Developers can choose TypeScript or ECMAScript based on their preference. It dramatically improves productivity by cutting down the time required to enter every command. The autocomplete feature also helps new developers using Angular for the first time. Automatic tab completion is enabled by default in the editor window. First, developers must execute the ng completion command. Then, they can start using auto-complete by typing ng and pressing the tab key to see all the options. Then, press enter to choose one of the options. Enhanced Developer Template Diagnostics The limited template diagnostics familiar from earlier versions have been replaced.
Angular 14 has a new diagnostic method known as ng-template-error. It prints the offending code along with the error message that the app throws at runtime. Include the following module declaration for components to use this new diagnostic: import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { MyApp } from './app'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule {} The new diagnostic features help developers catch errors during compilation. The team is already working on creating concise and helpful error messages to support template debugging for developers. Streamlined Access to Page Title Angular 14 streamlines customization of a web page's title tag without requiring additional imports. HTML can still be used to create additional context, and there is no need to assume that an element's title property is accessible, so there will be fewer exceptions. With Angular 14, a simple API is used to access the page title. Latest Primitives in the Angular CDK Angular CDK has basic primitives for building services, components, and applications. Angular 14 has the following new features: Angular Elements – New HTML elements can be used inside components or as standalone elements. They are based on ShadowDOM APIs in V8. New FormBuilder – The latest form builder allows the creation of forms using simple declarative properties and expressions without a separate controller. The new update also has the following new primitives: @Output() decorator @Injectable() decorator @Link() decorator Angular DevTools Firefox Add-On The Angular DevTools debugging extension is now available as an add-on for Mozilla Firefox users. It enables offline mode usage. Optional Injectors Optional injectors are useful in developing an embedded view. They can be included using TemplateRef.createEmbeddedView and ViewContainerRef.createEmbeddedView. In-Built Enhancements The Angular 14 update gives additional control over reusable components through a public API surface. The built-in enhancements let templates bind to protected component members directly. It also allows developers to use the CLI to ship smaller code without reducing value. Tree-Shaking Error Messages Robust error codes are introduced in Angular 14 to identify and rectify failures, and the long error messages can now be tree-shaken out of production bundles. Developers can refer to the manual while debugging in real time. It is possible to use this new style in future versions. Nullish Coalescing Angular template diagnostics can now raise errors for nullish coalescing operators. This can happen when the operator is applied to an input that is not nullable, making it redundant. The extended diagnostics provide warnings during the ng build and ng serve processes. This warning behavior can be customized by changing the error, suppression, or notice settings in tsconfig.json. How To Install The New Angular 14 To install Angular 14, use npm along with the next flag. Then proceed with opening a new CLI window. To install the Angular update, use the command npm install --global @angular/[email protected]. This command installs the Angular CLI globally on the development machine. If you still face difficulties in the process, you can always get help from your friendly neighborhood developer.
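To make the strictly typed forms feature described above more concrete, here is a minimal sketch of a typed reactive form. It assumes Angular 14's standard @angular/forms API; the loginForm name and its fields are illustrative only.

import { FormControl, FormGroup, Validators } from '@angular/forms';

// Each control declares its value type, so the form's value is fully typed.
const loginForm = new FormGroup({
  email: new FormControl<string>('', { nonNullable: true, validators: [Validators.required, Validators.email] }),
  password: new FormControl<string>('', { nonNullable: true, validators: [Validators.required] }),
});

// loginForm.value is typed as { email?: string; password?: string },
// so a misspelled field such as loginForm.value.emial fails at compile time.
const email: string | undefined = loginForm.value.email;

// getRawValue() also includes disabled controls and is typed as { email: string; password: string }.
const raw = loginForm.getRawValue();
console.log(raw.email, raw.password);

Existing untyped forms keep compiling because the automatic migration moves them to the Untyped* variants of these classes, so teams can adopt the typed API gradually.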
GraphQL is a popular query language that allows developers to efficiently query data by specifying the data they need and how it should be structured. The language is independent of any specific database or storage mechanism and can be used with a variety of frameworks to create robust, scalable APIs. Unlike REST APIs, GraphQL provides a single endpoint for all requests, making it easier to develop and maintain APIs. There are several frameworks available to implement GraphQL, each with its own pros and cons. In this article, we’ll explore some of the most popular frameworks for implementing GraphQL and discuss the pros and cons of each. We’ll also provide a basic CRUD (Create, Read, Update, Delete) example to help you get started. Four Popular GraphQL Frameworks 1. Apollo Server Apollo Server is a popular open-source GraphQL server for Node.js (JavaScript and TypeScript). It supports a wide range of features, such as subscriptions, caching, and error handling. It’s built on top of Express, which makes it easy to integrate with existing applications. Pros Supports a variety of features, including subscriptions, caching, and error handling. Provides a user-friendly UI to explore your schema and execute queries. Good documentation and community support. Cons Performance may be affected in large-scale applications. Example: javascript const { ApolloServer, gql } = require('apollo-server'); // Define your schema const typeDefs = gql` type Book { title: String author: String } type Query { books: [Book] } `; // Define your data const books = [ { title: 'The Great Gatsby', author: 'F. Scott Fitzgerald' }, { title: 'To Kill a Mockingbird', author: 'Harper Lee' }, ]; // Define your resolvers const resolvers = { Query: { books: () => books, }, }; // Create an instance of ApolloServer const server = new ApolloServer({ typeDefs, resolvers }); // Start the server server.listen().then(({ url }) => { console.log(`Server running at ${url}`); }); (A sketch of the remaining create, update, and delete operations for this example appears at the end of this article.) 2. GraphQL Yoga GraphQL Yoga is another popular GraphQL server that’s built on top of Express and provides a variety of features, such as file uploads, subscriptions, and custom middleware. It’s designed to be easy to use and provides a simple API that makes it easy to get started. Pros Provides a simple API and is easy to use. Supports a variety of features, such as file uploads, subscriptions, and custom middleware. Good documentation and community support. Cons May not be as performant as some other options. Example: javascript const { GraphQLServer } = require('graphql-yoga'); // Define your schema const typeDefs = ` type Query { hello: String! } `; // Define your resolvers const resolvers = { Query: { hello: () => 'Hello World!', }, }; // Create an instance of GraphQLServer const server = new GraphQLServer({ typeDefs, resolvers }); // Start the server server.start(() => console.log('Server running on http://localhost:4000')); 3. Hasura Hasura is a popular open-source GraphQL engine that can be used with several databases, including PostgreSQL, MySQL, and SQL Server. It provides real-time data synchronization and automatically generates GraphQL APIs based on your database schema. Hasura is designed to be scalable and provides a powerful set of features that can be used to build complex applications. Pros Provides real-time data synchronization and automatically generates GraphQL APIs based on your database schema. Designed to be scalable and provides a powerful set of features.
Good documentation and community support. Cons May be more complex to set up compared to other options. Example: javascript const { createClient } = require('@hasura/graphql-client'); const gql = require('graphql-tag'); // Create a Hasura client const client = createClient({ url: 'https://my.hasura.app/v1/graphql', headers: { 'x-hasura-admin-secret': 'MY_SECRET_KEY', }, }); // Define your query const query = gql`query { books { id title author } }`; // Execute your query client.query({ query }) .then((result) => console.log(result.data.books)) .catch((error) => console.error(error)); 4. Prisma Prisma is a modern database toolkit that provides an ORM and a type-safe client for building scalable and performant GraphQL APIs. It supports several databases, including PostgreSQL, MySQL, and SQLite. Prisma provides a set of powerful features such as data modeling, migrations, and database seeding. Pros Provides an ORM and a type-safe client for building scalable and performant GraphQL APIs. Supports several databases, including PostgreSQL, MySQL, and SQLite. Provides a set of powerful features such as data modeling, migrations, and database seeding. Cons May be more complex to set up compared to other options. Example: javascript const { PrismaClient } = require('@prisma/client'); // Create a Prisma client const prisma = new PrismaClient(); // Define your query const query = prisma.book.findMany({ select: { id: true, title: true, author: true, }, }); // Execute your query query .then((result) => console.log(result)) .catch((error) => console.error(error)) .finally(() => prisma.$disconnect()); Conclusion There are several popular frameworks and tools that can be used to implement GraphQL APIs. Each has its own set of pros and cons, and the choice ultimately depends on your specific use case and requirements. We hope the examples provided here will help you get started with implementing GraphQL in your own applications.
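Since the article promises a basic CRUD example but the snippets above focus on reads, here is a hedged sketch extending the Apollo Server example with create, update, and delete mutations. The mutation names (addBook, updateBook, deleteBook) and the use of the title as an identifier are illustrative assumptions, not part of the earlier example.

const { ApolloServer, gql } = require('apollo-server');

// Schema: the Book type and books query from the earlier example, plus mutations.
const typeDefs = gql`
  type Book {
    title: String
    author: String
  }
  type Query {
    books: [Book]
  }
  type Mutation {
    addBook(title: String!, author: String!): Book
    updateBook(title: String!, author: String!): Book
    deleteBook(title: String!): Boolean
  }
`;

// In-memory data store, sufficient for the sketch.
let books = [{ title: 'The Great Gatsby', author: 'F. Scott Fitzgerald' }];

const resolvers = {
  Query: {
    books: () => books,
  },
  Mutation: {
    // Create: append a new book and return it.
    addBook: (_parent, { title, author }) => {
      const book = { title, author };
      books.push(book);
      return book;
    },
    // Update: change the author of the first book with a matching title.
    updateBook: (_parent, { title, author }) => {
      const book = books.find((b) => b.title === title);
      if (book) book.author = author;
      return book || null;
    },
    // Delete: remove matching books and report whether anything was removed.
    deleteBook: (_parent, { title }) => {
      const before = books.length;
      books = books.filter((b) => b.title !== title);
      return books.length < before;
    },
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => console.log(`CRUD server running at ${url}`));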
As with many engineering problems, there are many ways to build RESTful APIs. Most of the time, when building RESTful APIs, engineers prefer to use frameworks. API frameworks provide an excellent platform for building APIs with most of the components necessary straight out of the box. In this post, we will explore the 10 most popular REST API frameworks for building web APIs. These frameworks span multiple languages and varying levels of complexity and customization. First, let’s dig into some key factors in deciding which framework to begin building with. How To Pick an API Framework The first factor in choosing an API framework is usually deciding which language you want to work in. For many projects, depending on the organization you work with or your experience, choices may be limited. The usual recommendation is to go with a language you are already familiar with since learning a new language and a new framework can lead to less-than-optimal implementations. If you’re already familiar with the language, your main focus can be on understanding the framework and building efficiently. Once the language is decided, you may have multiple choices of frameworks that support your language of choice. At this point, you will need to decide based on what types of functionality you require from your APIs. Some frameworks will have plugins and dependencies that allow for easy integration with other platforms, some may support your use case more precisely, and others may be limited in functionality that you require, automatically disqualifying them. Making sure that your use case and functionalities are supported by the framework is key. Last but not least, you should also consider the learning curve and educational materials and docs available. As a developer, the availability of good documentation and examples are massive factors in how quickly you and your team can scale up your APIs. Before deciding on a framework, browse the documentation, and do a quick search to ensure that you can find examples that can guide you and inform you on how much effort is needed to build APIs in the framework of your choosing. Now that we have a few factors to consider let’s take a look at some popular framework options. Spring Boot Spring Boot is an open-source framework that helps developers build web and mobile apps. Developed by Pivotal Software, Spring Boot is a framework that’s intended to make the original Spring framework more user-friendly. You can easily start using Spring Boot out of the box without spending time configuring any of its libraries. Programming Language: Java Pros: - Quick to load due to enhanced memory allocation - Can be easily configured with XML configurations and annotations - Easy to run since it includes a built-in server Cons: - Not backward compatible with previous Spring projects and no tools to assist with migration - Binary size can be bloated from default dependencies To learn more about Spring Boot framework, you can check out the docs here Ruby on Rails Ruby on Rails was originally developed as an MVC framework, which gives it the name “the startup technology” among developers. The main purpose of the framework is to deliver apps with high performance. The high-performance standards of Ruby on Rails excited developers using Python and PHP, and many of its concepts are replicated in popular Python and PHP frameworks. 
Programming Language: Ruby Pros: Great framework for rapid development with minimal bugs Open-source with many tools and libraries available Modular design with efficient package management system Cons: Can be difficult to scale compared to other frameworks like Django and Express Limited multi-threading support for some libraries Documentation can be somewhat sparse, especially for 3rd party libraries To learn more about Ruby on Rails framework, you can check out the docs here Flask Flask is a Python framework developed by Armin Ronacher. Flask is more explicit than Django and is also easier to learn. Flask is based on the Web Server Gateway Interface (WSGI) toolkit and the Jinja2 template engine. Programming Language: Python Pros: Built-in development server and fast debugger Integrated support for unit testing RESTful request dispatching WSGI 1.0 compliant Unicode base Cons: Included tools and extensions are lacking, and custom code is often required Security risks Larger implementations more complex to maintain To learn more about Flask framework, you can check out the docs here Django REST Django REST framework is a customizable toolkit that makes it easy to build APIs. It’s based on Django’s class-based views, so it can be an excellent choice if you’re familiar with Django. Programming Language: Python Pros: The web browsable API is a huge win for web developers Developers can authenticate people on their web app with OAuth2. Provides both ORM and non-ORM serialization. Extensive Documentation Easy Deploy Cons: Learning Curve Does not cover Async Serializers are slow and impractical for JSON validation To learn more about Django REST framework, you can check out the docs here Express.Js Express.Js is an open-source framework for Node.js that simplifies the process of development by offering a set of useful tools, features, and plugins. A minimal route sketch appears at the end of this article. Programming Language: Javascript Pros: Well Documented Scale application quickly Widely used and good community support Cons: Lack of Security Issues in the callbacks Request problems encountered with the middleware system To learn more about Express.Js framework, you can check out the docs here Fastify First created in 2016, Fastify is a web framework that is highly dedicated to providing the best developer experience possible. A powerful plugin architecture and minimal overhead also help make this framework a great choice for developers. Programming Language: Javascript Pros: Easy development Performant and highly scalable The low-overhead web framework that grounds this system minimizes operation costs for the entire application. Cons: Lack of Documentation and community support Not readily used in the industry To learn more about Fastify framework, you can check out the docs here Play Framework Play is a web application framework for creating modern, robust applications using Scala and Java. Based on Dynamic Types, Play integrates the components and APIs required for modern web application development. Programming Language: Java, Scala Pros: Intuitive User Interface Testing the Application simplified Faster development on multiple projects Cons: Steep Learning Curve Too many plug-ins, some of which are unstable It may not offer features for backward compatibility. To learn more about Play framework, you can check out the docs here Gin Gin is a fast framework for building web applications and microservices in the programming language Go. It provides a Martini-like API and enables users to build versatile and powerful applications with Go.
It contains common functionalities used in web development frameworks such as routing, middleware support, rendering, etc. Programming Language: Golang Pros: Performance Easy to track HTTP method status code Easy JSON validation Crash-free Cons: Lack of documentation Syntax not concise To learn more about Gin framework, you can check out the docs here Phoenix Phoenix is written in Elixir and works to implement the MVC pattern. It will seem similar to frameworks like Ruby on Rails and Django. One interesting thing about Phoenix is that it has channels for real-time features and pre-compiled templates. These templates work quickly, making the site smooth and easy to scroll through. Programming Language: Elixir Pros: Filters data that is safe and efficient Elixir runs on the Erlang VM for improved web app performance. Concurrency Cons: Expensive Processing Speed Prior Erlang knowledge required To learn more about Phoenix framework, you can check out the docs here Fast API Fast API is a web framework for developing RESTful APIs in Python. It fully supports asynchronous programming, so it can run with production servers such as Uvicorn and Hypercorn. Fast API is also supported in popular IDEs, such as JetBrains PyCharm. Programming Language: Python Pros: High Performance Easy to Code with few bugs Short Development time Supports asynchronous programming Cons: Poor Request Validation Does not support singleton instances Main File is crowded To learn more about Fast API framework, you can check out the docs here Adding in API Analytics and Monetization Building an API is only the start. Once your API is built, you’ll want to make sure that you are monitoring and analyzing incoming traffic. By doing this, you can identify potential issues and security flaws and determine how your API is being used. These can all be crucial aspects in growing and supporting your APIs. As your API platform grows, you may be focused on API products. This means making the shift from simply building APIs into the domain of using the API as a business tool. Much like a more formal product, an API product needs to be managed and likely will be monetized. Building revenue from your APIs can be a great way to expand your business’s bottom line. Wrapping Up In this article, we covered 10 of the most popular frameworks for developing RESTful APIs. We looked at a high-level overview and listed some points for consideration. We also discussed some key factors in how to decide on which API framework to use.
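To ground the comparison above in code, here is a minimal sketch of a REST resource in Express.js, one of the frameworks listed. It assumes Node.js with the express package installed; the /users resource and its in-memory store are illustrative.

const express = require('express');

const app = express();
app.use(express.json()); // parse JSON request bodies

const users = []; // in-memory store, sufficient for the sketch

// Create a user
app.post('/users', (req, res) => {
  const user = { id: users.length + 1, name: req.body.name };
  users.push(user);
  res.status(201).json(user);
});

// Read a user by id
app.get('/users/:id', (req, res) => {
  const user = users.find((u) => u.id === Number(req.params.id));
  if (!user) return res.status(404).end();
  res.json(user);
});

app.listen(3000, () => console.log('API listening on http://localhost:3000'));

Most of the other frameworks in the list express the same idea — a route, a handler, and a serialized response — in their own idioms, which is why the choice usually comes down to language familiarity and ecosystem rather than raw capability.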
QAs are always searching for the best automation testing frameworks that provide rich features with simple syntax, better compatibility, and faster execution. If you choose to use Ruby in conjunction with Selenium for web testing, it may be necessary to search for Ruby-based testing frameworks for web application testing. Ruby testing frameworks offer a wide range of features, such as support for behavior-driven development, mocking and stubbing, and test suite organization, making it easier for developers to write effective tests for their Ruby-based applications. Over the past decade, it has become clear that technology will keep making huge strides. Since Ruby has maintained its popularity and usability for over two decades, it makes sense to throw some light on the best Ruby-based frameworks. Since every business needs to consider long-term benefits, picking the right Ruby automation testing framework is a big decision. The options out there can be overwhelming. In this article, let’s look at the twenty-one best Ruby testing frameworks for 2023. We will also check out microframeworks that handle some primary concerns in case you don’t need a full-fledged framework. So, are you ready to scale your business by leveraging the unmatched power of Ruby? Perfect! Let’s dive right in. Why Ruby for Test Automation? When it comes to automation testing, one can choose any of the top programming languages. Each language has advantages and limitations, depending on the project you’re working on and which one is the best fit. However, the simple answer is that Ruby is easy to learn and use. It has great support libraries for testing frameworks, databases, and other utilities, making it easy to build a full project quickly and efficiently. It also has a great community that is helpful and friendly with their advice and knowledge. Ruby’s syntax is easy to read, which makes it easier to understand what you’re doing when you need to troubleshoot or fix issues in your code. It also makes it simpler to explain the function of your code outside of the code itself because you can simply state, “this code does this,” and continue with the explanation without describing how specific methods operate internally. Advantages of Ruby For its users, Ruby has several benefits. Here are some of Ruby’s primary advantages: Secure Numerous plugins Time-efficient Packed with third-party libraries Easy to learn Business logic operations Open-source However, Ruby comes with a few limitations or restrictions as well, which are listed below: Although having a solid community, it does not have the same level of popularity as other languages like Java, C#, etc. Longer processing time. Difficult to debug a script, i.e., have a flaw that causes errors during runtime, which can be pretty frustrating for development teams. It’s challenging to adapt as it has fewer customizable features. Now, let’s dive into some of the best Ruby testing frameworks for 2023. Best Ruby Testing Frameworks for 2023 There are a variety of testing frameworks available for Ruby that make it easier to write, run, and manage tests. These frameworks range from simple testing libraries to complex, full-featured test suites. In this article, we’ll introduce twenty-one of the best Ruby testing frameworks for 2023. 1. RSpec Source: RSpec RSpec is one of the best Ruby testing frameworks and a successful testing solution for its code. 
With its core focus on empowering test-driven development, this framework features small libraries suitable for independent use with other frameworks. RSpec tests frontend behavior by testing individual components and application behavior using the Capybara gem. This Ruby testing framework also carries out the testing of server-side behavior. When performing Selenium automation testing with the RSpec framework, you can group fixtures and allow tests to be organized in groups. The MIT license governs its use as well as redistribution. 2. Cucumber Source: Cucumber Cucumber is a reliable automation tool and one of the best Ruby testing frameworks based on BDD. All stakeholders can easily understand its specifications since it’s all plain text. It integrates well with Selenium and facilitates hassle-free front-end testing. On the other hand, you can test API and other components with the help of databases and REST and SOAP clients using client libraries. Creating fixtures couldn’t be easier! Making a fixtures directory and creating the fixture files are the only things left to do. You can also make the grouping of fixtures possible inside these directories while performing Selenium automation testing with the Cucumber framework. 3. Test::Unit Source: Test::Unit Primarily used for unit testing, Test::Unit belongs to the xUnit family of Ruby unit testing frameworks. It makes fixture methods available via the ClassMethods module and offers support for group fixture methods. Test::Unit is included in the standard library of Ruby and requires no third-party libraries. It supports only a subset of the features available in other major testing frameworks, such as JUnit and NUnit. However, it provides enough functionality to help programmers test their applications at a unit level. 4. Capybara Source: Capybara Capybara is an automation testing framework written in Ruby. It can easily simulate scenarios for different user stories and automate web testing. In other words, it mimics user actions such as parsing HTML, receiving pages, and submitting forms. It supports WebDrivers like RackTest, Selenium, and Capybara-WebKit. It comes with Rack::Test support and facilitates test execution via a simple and clean interface. Its powerful and sophisticated synchronization features enable users to handle the asynchronous web easily. Capybara locates relevant elements in the DOM (Document Object Model), followed by the execution of actions such as link and button clicks. You can easily use Cucumber, Minitest, and RSpec with Capybara. 5. Minitest Source: Minitest Minitest boasts high readability and understandability compared to many of the other best Ruby testing frameworks. It offers an all-in-one suite of testing facilities such as benchmarking, mocking, BDD, and TDD. Even though it’s comparatively smaller, the speed of this unit testing framework is incredible. If you are looking forward to asserting your algorithm performance repeatedly, Minitest is the way to go. Its assertion functions are in the xUnit/TDD style. It also offers support for test fixture functions and group fixtures. Users can easily test different components at the backend. 6. Spinach Source: Spinach Spinach is a high-level framework that supports behavior-driven development and uses the Gherkin language. It helps define an application’s executable specification or the acceptance criteria of libraries. It makes testing server-side behavior easier, but the same is not true for the client side.
The inbuilt generator method generates input data before running each test. However, it doesn’t define specific data states for a group of tests. In other words, Spinach doesn’t support fixtures and group fixtures. 7. Shoulda Source: Shoulda Shoulda comprises two components, Shoulda Context and Shoulda Matchers. The former facilitates enhanced test naming and grouping, whereas Shoulda Matchers offers methods to write assertions that are far more concise. The framework allows the organization of tests into groups. Shoulda Matchers is compatible with Minitest and RSpec. Shoulda Context holds the same relationship with Test::Unit and Minitest. 8. Spork Source: Spork Spork is one of the best Ruby testing frameworks that forks a server copy every time testers run tests. As a result, it ensures a clean state of testing. The most significant benefit is that the runs don’t get corrupted as time passes and are more solid. Thanks to the proper handling of modules, it can also efficiently work with any other Ruby framework of your choice. Some testing frameworks it supports include RSpec, Cucumber, and Test::Unit. You don’t need an application framework for Spork to work. At the initial level, you might not notice the automatic loading of some files since they load during the startup. Sometimes, changes to a project can call for a restart. 9. Aruba Source: Aruba Aruba is a Ruby testing framework that allows testing of command line applications with Minitest, RSpec, or Cucumber-Ruby. Detailed documentation is available to help users get started with the framework. Although Aruba doesn’t fully support Windows, it has proven successful on macOS and Linux in CI. Only RSpec tests can run flawlessly on Windows. It supports versions 4 and above until 8. The supported Ruby versions include CRuby 2.5, 2.6, 2.7, 3.0, and 3.1 and JRuby 9.2. 10. Phony Source: Phony Phony aims to be able to split, format, or normalize every phone number on the planet. In other words, this gem is responsible for normalizing, formatting, and splitting E164 numbers, including a country code. It only works with international numbers like 61 412 345 678. The framework has been widely used at Zendesk, Socialcam, and Airbnb. It makes use of approximately 1 MB for each Ruby process. Normalization is responsible for removing a number’s non-numeric characters. On the other hand, formatting is responsible for formatting a normalized number depending on the predominant formatting of a country. 11. Bacon Source: Bacon Bacon is a small, feature-rich clone of RSpec that weighs in at about 350 LoC. It offers support for Knock, Autotest, and TAP. The first public release came out on January 7th, 2008, followed by a second one on July 6th. The third public release was on November 3rd, 2008, and the fourth came out on December 21st, 2012. Before your context’s first specification, you must define before and after. It’s easy to define shared contexts, but you can’t execute them. However, you can use them with recurring specifications and include them with other contexts, such as behaves_like. 12. RR Source: RR Originally developed by Brian Takita, RR is one of the leading test double Ruby testing frameworks, offering a comprehensive choice of double techniques and terse syntax. If you already use a test framework, RR will hook itself onto your existing framework once you have loaded it. Available through the MIT license, this framework works with Ruby 2.4, 2.5, 2.6, 2.7, 3.0, and JRuby 1.7.4.
The frameworks it supports include Test::Unit through test-unit-rr, Minitest 4 and 5, and RSpec 2. When using RR, you can run multiple test suites via rake tasks. 13. Howitzer Source: Howitzer Howitzer is a Ruby-based acceptance testing framework focused solely on web applications. The core aim of this framework is to speed up test development and offer the necessary assistance to users. It provides support for the following: Operating Systems: macOS Linux Windows Real Browsers: Internet Explorer Firefox Google Chrome Safari Edge Mail Services: Gmail Mailgun Mailtrap CI Tools: Jenkins TeamCity Bamboo CircleCI Travis GitHub Actions The most significant benefits of using this framework include quick installation, the fast configuration of test infrastructure, intuitiveness, and the choice of BDD. 14. Pundit Matchers Source: Pundit Matchers If you want to test Pundit authorization policies, this set of RSpec matchers is the way to go. Available under the MIT license, Pundit Matchers offers an easy setup and a hassle-free configuration. Installation of the Pundit gem and rspec-rails is the primary requirement for using the framework. For the test strategy, this framework makes assumptions regarding your policy spec file structure after declaring a subject. You can also test multiple actions at once. 15. Emoji-RSpec Source: Emoji-RSpec Emoji-RSpec is a set of custom emoji formatters meant for use with the test output. Emoji-RSpec 1.x offers complete support for 2.0 and backward support for versions 1.9.2 and 3.0.x, which calls for users to maintain support for 1.8.7. It allows pull requests but prevents the addition of new formats. 16. Cutest Source: Cutest Cutest is a Ruby testing framework focused primarily on isolated tests. Testers run every test file in a way that avoids shared state. After finding a failure, it offers a detailed report of what went down and how you can pinpoint the error. Using the scope command guarantees that instance variables are not shared between tests. The prepare command facilitates the execution of blocks before every test. The setup command executes the setup block before every test and passes the outcome to the test block as a parameter. 17. RSpec Clone Source: RSpec Clone RSpec Clone is a minimalistic Ruby testing framework that has all the necessary components. Available under the MIT license, this framework helps lower code complexity and avoid false positives and negatives. Thanks to its alternative syntax, it helps in preventing interface overload. With the RSpec clone, users have a structure for writing executable examples of code behavior. You can also write these examples in a style similar to plain English, using a DSL. Whatever your project settings, you can run rake spec for a project spec. 18. Riot Source: Riot Riot is one of the best Ruby testing frameworks for unit testing that is contextual, expressive, and fast-paced. Since it doesn’t run setup and teardown sequences before and after every test, the speed of test execution is higher. In general, you should always avoid mutating objects. But when you are using Riot, that’s precisely what you must do. You can also call setup multiple times; it doesn’t matter how many times you use it. 19. Turnip Source: Turnip Turnip is a Ruby testing framework for integration and acceptance testing.
It’s a Gherkin extension for RSpec that helps solve problems while using Cucumber to write specifications. In other words, it is an open-source gem that performs end-to-end testing of frontend functionality and components. You can also use Turnip for testing server-side components and behavior. When you integrate with RSpec, this framework can access the RSpec-mocks gem. You can also declare example contexts and groups by directly integrating Turnip into the RSpec test suite. 20. TMF Source: TMF TMF joins the list of the many minimalistic Ruby test frameworks. It belongs in the unit testing category and is a small testing tool. All you need to do is copy its small codebase into your project to get going. This framework just uses two methods for testing. They are: Stub Assert The best part about TMF is that, even though it’s a minimal testing tool, testers can efficiently perform testing of various backend components. It’s perfect for tests that don’t require a hefty feature set. 21. Rufo Source: Rufo Rufo is a Ruby formatter with the primary intention of being used through the command line for auto-formatting files on demand or on save. It enforces a single Ruby code format, and testers have to ensure their code adheres to it. It supports Ruby versions 2.4.5 and higher. You can even use Rufo to develop your own plugins. The default configuration of this framework preserves existing formatting decisions. This enables team members to use their text editor of choice without the whole team having to switch to it. However, the framework offers only limited configuration options. Executing Selenium Ruby Automation Testing on the Cloud You can execute Selenium Ruby automation testing in the cloud by using a cloud-based Selenium Grid, such as LambdaTest. This allows you to run your tests on various browser and operating system combinations without maintaining a large infrastructure. LambdaTest is a cross-browser testing platform that supports all the best Ruby testing frameworks like RSpec, Capybara, Test::Unit, etc. It lets you perform Selenium Ruby automation testing across 3000+ real browsers and operating systems on an online Selenium Grid. Here are the steps: Step 1 Sign up for free and log in to the LambdaTest platform: Step 2 Click on the “Automation” tab present in the left navigation, which provides you with the following options: Builds Test archive Analytics Choose the language or testing framework that’s present on the UI: Step 3 You can choose any framework of your choice under Ruby and configure the test: If you are a developer or tester looking to improve your Ruby skills, the Selenium Ruby 101 certification from LambdaTest may be a valuable resource. Summing It Up! Ruby has transformed the web world and will continue to do so. But to fully use its potential, choosing the best Ruby testing framework appropriate for your requirements is vital. In this article, we have mentioned the twenty-one best Ruby testing frameworks for 2023 by being as comprehensive as possible regarding functionality, productivity, and efficiency. Now, you have a wide array of outstanding Ruby frameworks at your disposal. Since we have already done extensive shortlisting for you, all you need to do is go for the one that meets your needs. If you think we missed something, sound off in the comments below.
Java has been a popular programming language for developing robust and scalable applications for many years. With the rise of REST APIs, Java has again proven its worth by providing numerous frameworks for building RESTful APIs. A REST API is an interface that enables communication between applications and allows them to exchange data. In this article, we'll be discussing the top four Java REST API frameworks, their pros and cons, and a CRUD example to help you choose the right one for your next project. 1. Spring Boot Spring Boot is one of the most popular Java frameworks for building REST APIs. It offers a range of features and tools to help you quickly develop RESTful services. Its built-in support for various data sources makes it easy to create CRUD operations for your database. Pros: Easy to use and set up. Built-in support for multiple data sources. Supports a variety of web applications, including RESTful, WebSockets, and more. Offers a large library of plugins and modules to add additional functionality. Cons: Steep learning curve for beginners. Can be too heavy for smaller projects. Requires a good understanding of Java and the Spring framework. Example CRUD Operations in Spring Boot: Java // Creating a resource @PostMapping("/users") public User createUser(@RequestBody User user) { return userRepository.save(user); } // Reading a resource @GetMapping("/users/{id}") public User getUserById(@PathVariable Long id) { return userRepository.findById(id).orElse(null); } // Updating a resource @PutMapping("/users/{id}") public User updateUser(@PathVariable Long id, @RequestBody User user) { User existingUser = userRepository.findById(id).orElse(null); if (existingUser != null) { existingUser.setUsername(user.getUsername()); existingUser.setPassword(user.getPassword()); return userRepository.save(existingUser); } return null; } // Deleting a resource @DeleteMapping("/users/{id}") public void deleteUser(@PathVariable Long id) { userRepository.deleteById(id); } 2. Jersey Jersey is another Java framework for building REST APIs. It provides a simple and easy-to-use API for creating RESTful services and is widely used for building microservices. Jersey is also fully compatible with JAX-RS, making it an ideal choice for developing RESTful applications. Pros: Simple and easy to use. Compatible with JAX-RS. Ideal for building microservices. Offers a large library of plugins and modules to add additional functionality. Cons: Can be slow compared to other frameworks. Can be difficult to debug. Requires a good understanding of Java and REST APIs.
Example CRUD Operations in Jersey: Java // Creating a resource @POST @Consumes(MediaType.APPLICATION_JSON) public Response createUser(User user) { userRepository.save(user); return Response.status(Response.Status.CREATED).build(); } // Reading a resource @GET @Path("/{id}") @Produces(MediaType.APPLICATION_JSON) public Response getUserById(@PathParam("id") Long id) { User user = userRepository.findById(id).orElse(null); if (user != null) { return Response.ok(user).build(); } return Response.status(Response.Status.NOT_FOUND).build(); } // Updating a resource @PUT @Path("/{id}") @Consumes(MediaType.APPLICATION_JSON) public Response updateUser(@PathParam("id") Long id, User user) { User existingUser = userRepository.findById(id).orElse(null); if (existingUser != null) { existingUser.setUsername(user.getUsername()); existingUser.setPassword(user.getPassword()); userRepository.save(existingUser); return Response.ok().build(); } return Response.status(Response.Status.NOT_FOUND).build(); } // Deleting a resource @DELETE @Path("/{id}") public Response deleteUser(@PathParam("id") Long id) { User user = userRepository.findById(id).orElse(null); if (user != null) { userRepository.delete(user); return Response.ok().build(); } return Response.status(Response.Status.NOT_FOUND).build(); } 3. Play Framework Play Framework is a high-performance framework for building REST APIs in Java. It offers a lightweight and flexible architecture, making it easy to develop and deploy applications quickly. Play is designed to work with Java 8 and Scala, making it a great choice for modern applications. Pros: Lightweight and flexible architecture. High-performance Supports Java 8 and Scala. Offers a large library of plugins and modules to add additional functionality. Cons: Steep learning curve for beginners. Can be difficult to debug. Requires a good understanding of Java and REST APIs. Example CRUD Operations in Play Framework: Java // Creating a resource public Result createUser() { JsonNode json = request().body().asJson(); User user = Json.fromJson(json, User.class); userRepository.save(user); return ok(); } // Reading a resource public Result getUserById(Long id) { User user = userRepository.findById(id).orElse(null); if (user != null) { return ok(Json.toJson(user)); } return notFound(); } // Updating a resource public Result updateUser(Long id) { User existingUser = userRepository.findById(id).orElse(null); if (existingUser != null) { JsonNode json = request().body().asJson(); User user = Json.fromJson(json, User.class); existingUser.setUsername(user.getUsername()); existingUser.setPassword(user.getPassword()); userRepository.save(existingUser); return ok(); } return notFound(); } // Deleting a resource public Result deleteUser(Long id) { User user = userRepository.findById(id).orElse(null); if (user != null) { userRepository.delete(user); return ok(); } return notFound(); } 4. Vert.x Vert.x is a modern, high-performance framework for building REST APIs in Java. It provides a lightweight and flexible architecture, making it easy to develop and deploy applications quickly. Vert.x supports both Java and JavaScript, making it a great choice for applications that require both. Pros: Lightweight and flexible architecture. High-performance Supports both Java and JavaScript. Offers a large library of plugins and modules to add additional functionality. Cons: Steep learning curve for beginners. Can be difficult to debug. Requires a good understanding of Java and REST APIs.
Example CRUD Operations in Vert.x: Java // Creating a resource router.post("/").handler(routingContext -> { JsonObject user = routingContext.getBodyAsJson(); userRepository.save(user); routingContext.response().setStatusCode(201).end(); }); // Reading a resource router.get("/:id").handler(routingContext -> { Long id = Long.valueOf(routingContext.request().getParam("id")); JsonObject user = userRepository.findById(id).orElse(null); if (user != null) { routingContext.response().end(user.encode()); } else { routingContext.response().setStatusCode(404).end(); } }); // Updating a resource router.put("/:id").handler(routingContext -> { Long id = Long.valueOf(routingContext.request().getParam("id")); JsonObject user = userRepository.findById(id).orElse(null); if (user != null) { JsonObject updatedUser = routingContext.getBodyAsJson(); user.put("username", updatedUser.getString("username")); user.put("password", updatedUser.getString("password")); userRepository.save(user); routingContext.response().end(); } else { routingContext.response().setStatusCode(404).end(); } }); // Deleting a resource router.delete("/:id").handler(routingContext -> { Long id = Long.valueOf(routingContext.request().getParam("id")); userRepository.deleteById(id); routingContext.response().setStatusCode(204).end(); }); In conclusion, these are the top Java REST API frameworks that you can use to build robust and scalable REST APIs. Each framework has its own strengths and weaknesses, so it's important to choose the one that best fits your specific needs. Whether you're a beginner or an experienced Java developer, these frameworks offer all the tools and functionality you need to create high-performance REST APIs quickly and efficiently.
Justin Albano
Software Engineer,
IBM
Thomas Hansen
CEO,
Aista, Ltd
Hiren Dhaduk
CTO,
Simform
Tetiana Stoyko
CTO, Co-Founder,
Incora Software Development