The Java 9 release in 2017 introduced the Java Module System. This module system was developed specifically for the Java language and should not be confused with the module concepts found in tools such as IntelliJ IDEA or Maven. The module system provides a more secure and structured approach to writing Java code by organizing components better, thus preventing malicious or out-of-date code from being used. In this article, we will look at what exactly the Java Module System is and how it can benefit developers.

Benefits of Using Java Modules

Java modules were introduced in Java 9 as a new way to organize and package Java code. They provide several benefits, including:

- Strong encapsulation: Modules allow you to encapsulate your code and hide its implementation details from other modules. This helps to reduce the risk of coupling and improves the maintainability of your code.
- Better organization: Modules help you organize your code into logical units, making it easier to navigate and understand. You can group related classes and packages together in a module and specify dependencies between modules.
- Improved security: Modules provide a way to control access to your code and limit the exposure of sensitive APIs. You can specify which modules are allowed to access a particular module and which packages and classes within a module are exposed to the outside world.
- Faster startup time: Modules allow the Java runtime to load only the modules that are actually needed for a particular application, reducing startup time and memory usage.

How To Define a Module

A module is defined by:

- A module name
- A module descriptor
- A set of packages
- Dependencies, types of resources, etc.

Let's walk through an example of a modular sample application in Java. Our application will have two modules: com.example.core and com.example.app. The core module will contain some utility classes that the app module will use.

Here's the module descriptor for the core module:

```java
module com.example.core {
    exports com.example.core.utils;
}
```

In this module, we declare that it exports the com.example.core.utils package, which contains some utility classes.

Here's the module descriptor for the app module:

```java
module com.example.app {
    requires com.example.core;
    exports com.example.app;
}
```

In this module, we specify that it requires the com.example.core module, so it can use the utility classes in that module. We also specify that it exports the com.example.app package, which contains the main class of our application.

Now, let's take a look at the source code for our application. In the com.example.core module, we have a utility class:

```java
package com.example.core.utils;

public class StringUtils {
    public static boolean isEmpty(String str) {
        return str == null || str.isEmpty();
    }
}
```

In the com.example.app module, we have a main class:

```java
package com.example.app;

import com.example.core.utils.StringUtils;

public class MyApp {
    public static void main(String[] args) {
        String myString = "";
        if (StringUtils.isEmpty(myString)) {
            System.out.println("The string is empty");
        } else {
            System.out.println("The string is not empty");
        }
    }
}
```

In this main class, we use the StringUtils class from the com.example.core module to check whether a string is empty.
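The compile commands in the next section assume a conventional layout in which each module lives in its own source directory named after the module, with its module-info.java at the root. A sketch of that layout, inferred from the paths in the javac commands below:

```
src/
├── com.example.core/
│   ├── module-info.java
│   └── com/example/core/utils/StringUtils.java
└── com.example.app/
    ├── module-info.java
    └── com/example/app/MyApp.java
```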
To compile and run this application, we can use the following commands (note that each javac invocation compiles the module descriptor along with the sources, so the module is recognized on the module path):

```shell
$ javac -d mods/com.example.core src/com.example.core/module-info.java src/com.example.core/com/example/core/utils/StringUtils.java
$ javac --module-path mods -d mods/com.example.app src/com.example.app/module-info.java src/com.example.app/com/example/app/MyApp.java
$ java --module-path mods -m com.example.app/com.example.app.MyApp
```

These commands compile the core module and the app module and then run the MyApp class in the com.example.app module.

Conclusion

Java's module system allows developers to take a modular approach, which can result in smaller, more secure code. By using this technique, the code becomes encapsulated at the package level for extra security. Although there is no requirement to use modules, they give developers an additional tool for potentially writing higher-quality code.
In this article, you will learn how to build a GraalVM image for your Spring Boot application. Following these practical steps, you will be able to apply them to your own Spring Boot application. Enjoy!

Introduction

Java is a great programming language and is platform independent. Write once, run anywhere! But this comes at a cost. Java is portable because your code is compiled to bytecode. Bytecode is intermediate object code that an interpreter (read: a Virtual Machine) can interpret and convert to machine code. When you start your Java application, the Virtual Machine converts the bytecode into native machine code for the platform it is running on. This is done by the just-in-time (JIT) compiler. As you will understand, this conversion takes some time during startup.

Assume you have a use case where fast startup time is very important. An example is an AWS Lambda written in Java. AWS Lambdas are not running when there is no application activity. When a request needs the AWS Lambda to run, the Lambda needs to start up very fast, execute, and then shut down again. Every time the Lambda starts, the JIT compiler needs to do its work. In this use case, the JIT compilation takes up unnecessary time because you already know which platform you are running on.

This is where ahead-of-time (AOT) compilation can help. With AOT, you can create an executable, or "native image," for your target platform. You do not need a JVM anymore, and there is no JIT compilation. This results in a faster startup time, a lower memory footprint, and lower CPU usage. GraalVM can compile your Java application into a native image. Spring Boot had an experimental project called Spring Native, which helps Spring Boot developers create native images. As of Spring Boot 3, Spring Native is part of Spring Boot and out of the experimentation phase.

In the remainder of this article, you will create a basic Spring Boot application and create a GraalVM image for it. If you want to learn more about GraalVM in an interactive way, the GraalVM workshop is strongly recommended. The sources used in this article are available at GitHub.

Prerequisites

Prerequisites for this article are:

- Ubuntu 22.04
- Basic Linux knowledge
- Basic Java and Spring Boot knowledge
- SDKMAN, which is used for switching between JDKs

Sample Application

The first thing to do is to create a sample application. Browse to the Spring Initializr and add the dependencies Spring Web and GraalVM Native Support. Make sure you use Spring Boot 3, generate the project, and open it in your favourite IDE. Add a HelloController with one endpoint returning a hello message:

```java
@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        return "Hello GraalVM!";
    }

}
```

Build the application:

```shell
$ mvn clean verify
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  6.971 s
[INFO] Finished at: 2023-02-18T10:26:33+01:00
[INFO] ------------------------------------------------------------------------
```

As you can see in the output, it takes about seven seconds to build the Spring Boot application. The target directory contains the jar file mygraalvmplanet-0.0.1-SNAPSHOT.jar, which is about 17.6MB in size.
Start the application from the root of the repository:

```shell
$ java -jar target/mygraalvmplanet-0.0.1-SNAPSHOT.jar
2023-02-18T10:30:15.013+01:00  INFO 17233 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Starting MyGraalVmPlanetApplication v0.0.1-SNAPSHOT using Java 17.0.6 with PID 17233 (/home/<user directory>/mygraalvmplanet/target/mygraalvmplanet-0.0.1-SNAPSHOT.jar started by <user> in /home/<user directory>/mygraalvmplanet)
...
2023-02-18T10:30:16.486+01:00  INFO 17233 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Started MyGraalVmPlanetApplication in 1.848 seconds (process running for 2.212)
```

As you can see in the output, it takes 1.848 seconds to start the Spring Boot application. With the help of top and the PID, which is logged in the first line of the output, you can check the CPU and memory consumption:

```shell
$ top -p 17233
```

The output shows that 0.3% CPU and 0.6% memory are consumed.

Create Native Image

In the previous section, you created and ran a Spring Boot application as you normally would. In this section, you will create a native image of the Spring Boot application and run it as an executable. Because you added the GraalVM Native Support dependency when creating the Spring Boot application, the following snippet was added to the pom file:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.graalvm.buildtools</groupId>
            <artifactId>native-maven-plugin</artifactId>
        </plugin>
        ...
    </plugins>
</build>
```

With the help of the native-maven-plugin, you can compile the native image by using the native Maven profile:

```shell
$ mvn -Pnative native:compile
...
[INFO] --- native-maven-plugin:0.9.19:compile (default-cli) @ mygraalvmplanet ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  8.531 s
[INFO] Finished at: 2023-02-05T16:50:20+01:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.graalvm.buildtools:native-maven-plugin:0.9.19:compile (default-cli) on project mygraalvmplanet: 'gu' tool wasn't found. This probably means that JDK at isn't a GraalVM distribution. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
```

The compilation fails because GraalVM is not used for the compilation. Let's install GraalVM first. Installing and switching between JDKs is fairly simple when you use SDKMAN. If you do not have any knowledge of SDKMAN, do check out a previous post.

Install GraalVM:

```shell
$ sdk install java 22.3.r17-nik
```

Use GraalVM in the terminal where you are going to compile:

```shell
$ sdk use java 22.3.r17-nik
```

Run the native build again:

```shell
$ mvn -Pnative native:compile
...
Produced artifacts:
 /home/<user directory>/mygraalvmplanet/target/mygraalvmplanet (executable)
 /home/<user directory>/mygraalvmplanet/target/mygraalvmplanet.build_artifacts.txt (txt)
========================================================================================================================
Finished generating 'mygraalvmplanet' in 2m 15s.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  02:27 min
[INFO] Finished at: 2023-02-18T10:48:40+01:00
[INFO] ------------------------------------------------------------------------
```

The build now takes about two and a half minutes. Remember that the build without native compilation took about seven seconds: a huge increase in build time, due to the AOT compilation. The target directory contains a mygraalvmplanet executable, which has a size of about 66.2MB. This is also an increase in size compared to the jar file, which was 17.6MB. But remember: the executable does not need a JVM to run; the jar file does.

Start the Spring Boot application from the root of the repository:

```shell
$ target/mygraalvmplanet
2023-02-18T10:52:29.865+01:00  INFO 18085 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Starting AOT-processed MyGraalVmPlanetApplication using Java 17.0.5 with PID 18085 (/home/<user directory>/mygraalvmplanet/target/mygraalvmplanet started by <user> in /home/<user directory>/mygraalvmplanet)
...
2023-02-18T10:52:29.920+01:00  INFO 18085 --- [           main] c.m.m.MyGraalVmPlanetApplication         : Started MyGraalVmPlanetApplication in 0.069 seconds (process running for 0.085)
```

If you blinked, you probably did not see it starting at all, because the startup time is now 0.069 seconds. Compared to the 1.848 seconds without native compilation, this is almost 27 times faster. When you take a look at the CPU and memory consumption with top, you notice that the CPU consumption is negligible and the memory consumption is now 0.2% of the available memory: three times lower than before. Note: it is now an executable for a specific target platform.

Something About Reflection

GraalVM uses static analysis while compiling the classes. Only the classes that are actually used in the application are analyzed. This means problems can arise when reflection is used. Spring makes extensive use of reflection in its code, and that was one of the reasons for the Spring Native project; a lot of reflection has since been removed from Spring. Besides that, it is possible to instruct GraalVM to include classes by means of a metadata file when GraalVM cannot find them during static analysis. You can do so for your own application, but you do not have any influence on the dependencies you are using. You can ask the maintainers to add the GraalVM metadata file, but they are not obliged to do so. To circumvent this issue and make the lives of Spring developers easier, Spring contributes to the GraalVM Reachability Metadata Repository, and this repository is consulted during the native compilation of your Spring Boot application.

Let's see what happens when you add reflection to your application.
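Before doing so, a quick look at what such a metadata file can look like. GraalVM reads JSON configuration files such as reflect-config.json, typically placed under META-INF/native-image in your project. A minimal sketch for the Hello class used in the next section; the exact set of flags you need depends on which reflective calls you make:

```json
[
  {
    "name": "com.mydeveloperplanet.mygraalvmplanet.Hello",
    "allDeclaredConstructors": true,
    "allPublicMethods": true
  }
]
```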
Create a basic POJO, which you will use from within the HelloController by means of reflection:

```java
public class Hello {

    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

}
```

In the HelloController, you try to load the POJO by means of reflection, set a hello message in the object, and return the hello message:

```java
@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        String helloMessage = "Default message";
        try {
            Class<?> helloClass = Class.forName("com.mydeveloperplanet.mygraalvmplanet.Hello");
            Method helloSetMessageMethod = helloClass.getMethod("setMessage", String.class);
            Method helloGetMessageMethod = helloClass.getMethod("getMessage");
            Object helloInstance = helloClass.getConstructor().newInstance();
            helloSetMessageMethod.invoke(helloInstance, "Hello GraalVM!");
            helloMessage = (String) helloGetMessageMethod.invoke(helloInstance);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException(e);
        } catch (InvocationTargetException e) {
            throw new RuntimeException(e);
        } catch (InstantiationException e) {
            throw new RuntimeException(e);
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
        return helloMessage;
    }

}
```

Compile the application again to create a native image:

```shell
$ mvn -Pnative native:compile
```

Execute the image from the root of the repository:

```shell
$ target/mygraalvmplanet
```

Invoke the hello endpoint:

```shell
$ curl http://localhost:8080/hello
Hello GraalVM!
```

And it just works! But how is this possible, given that we did not add a GraalVM metadata file? The answer can be found in the GraalVM documentation: "The analysis intercepts calls to Class.forName(String), Class.forName(String, ClassLoader), Class.getDeclaredField(String), Class.getField(String), Class.getDeclaredMethod(String, Class[]), Class.getMethod(String, Class[]), Class.getDeclaredConstructor(Class[]), and Class.getConstructor(Class[])."

GraalVM is able to add the necessary classes to the executable when one of the above calls is used. In this example, these calls were used, and therefore the Hello POJO was added to the native image.

Conclusion

In this article, you learned how to create a GraalVM native image for a Spring Boot 3 application. You noticed the faster startup time and lower CPU and memory consumption compared to running a jar file on a JVM. Some special attention is needed when reflection is used, but for many usages, GraalVM will be able to generate a complete native image.
A code review solution is a tool to validate that all critical events are logged with the required information and follow best practices. This low-code utility takes application code as input and produces exception reports.

Code Review Challenges

- Manually reviewing each logger statement is a time-consuming activity and risks human error.
- Data quality issues in the log: critical information required for troubleshooting is expected to be in the application log, but is often missing.
- Different application-level logging patterns across APIs in a line of business (LOB) are one of the major challenges in enabling a consolidated monitoring dashboard, and they delay issue analysis.

Solution Features

1. Logger statement with unique ID validation:

```python
def check_traceability(folder_path):
    for java_file in find_java_files(folder_path):
        with open(java_file, "r") as f:
            print(java_file)
            lines = f.readlines()
            # counters must start at zero for each file
            start_count = 0
            end_count = 0
            for line in lines:
                if "Unique identifier id in the message" in line:
                    if "Start" in line or "start" in line:
                        start_count += 1
                    if "End" in line or "end" in line:
                        end_count += 1
            if start_count != end_count or start_count == 0 or end_count == 0:
                output_file.write(" \n")
                output_file.write("{} -is missing Unique identifier id with 'Start' or 'End' \n".format(java_file))
```

2. Response-time logging validation for every external call, to ensure the time required for external service calls is captured:

```python
for line in lines:
    # search for controller class
    if "RestController" in line:
        has_rest_controller = True
    # search for keyword for CICS mainframe requests
    if "CICS request execute statements" in line:
        cicsrec_count += 1
    # search for keyword for third-party service call requests
    if "HTTP response key word for service call" in line:
        closeable_count += 1
    # search for keyword for DB execute statements
    if "DB execute statement key word" in line:
        dbcall_count += 1
    if "Unique identifier id in the message" in line:
        if "response" in line or "Response Time" in line or "response time" in line:
            response_count += 1

if has_rest_controller and response_count == 0:
    output_file.write(" \n")
    output_file.write("{} -is missing Unique identifier id with 'Response Time' \n".format(java_file))
if cicsrec_count > 0 and cicsrec_count != response_count:
    output_file.write(" \n")
    output_file.write("{} -is missing Unique identifier id with 'responseTime' for CICS call \n".format(java_file))
if closeable_count > 0 and closeable_count != response_count:
    output_file.write(" \n")
    output_file.write("{} -is missing 'responseTime' for service call \n".format(java_file))
if dbcall_count > 0 and dbcall_count != response_count:
    output_file.write(" \n")
    output_file.write("{} -is missing traceability id with 'responseTime' for DB call \n".format(java_file))
```

3. Logger statement validation is excluded for POJO classes, as those are not required to log:

```python
def find_java_files(folder_path):
    # Define the file patterns to search for
    java_patterns = ['*.java']
    # Define the folder names to ignore
    ignore_folders = ['bo', 'model', 'config']
    # Traverse the directory tree recursively
    for root_folder, dirnames, filenames in os.walk(folder_path):
        # Exclude the folders in ignore_folders list
        dirnames[:] = [d for d in dirnames if d not in ignore_folders]
        # Search for matching files in the current folder
        for java_pattern in java_patterns:
            for filename in fnmatch.filter(filenames, java_pattern):
                yield os.path.join(root_folder, filename)
```

4. CI/CD deployment YAML file data validation to ensure correct values for some of the key fields:
```python
def ci_ver(folder_path):
    for root, dirs, files in os.walk(folder_path):
        for file1 in files:
            # search for continuous integration deployment yaml file
            if file1 == "deployment yaml file":
                with open(os.path.join(root, file1), "r") as f:
                    contents = f.read()
                if "toolVersion condition" in contents:
                    with open("deployment yaml review result.txt", "w") as output_file:
                        output_file.write(" \n")
                        output_file.write(os.path.join(root, file1))
                        output_file.write("\n toolVersion found in deployment yaml, pls remove. \n")
                else:
                    with open("deployment yaml review result.txt", "w") as output_file:
                        output_file.write("\n toolVersion condition not found in deployment yaml. No action required \n")
```

Key Benefits

1. Helps in troubleshooting, as a unique ID is populated in the log for tracking.
2. Proactive application monitoring can be enhanced to avoid production issues caused by delays in getting responses from third-party services or external calls.
3. All the APIs share a common application-level logging pattern, making them easy to maintain and analyze.
4. Automating the manual review of logger statements helps avoid the human error of missing logger statements.

Software Requirement and Execution Procedure

This Python code validates logger statements to ensure they follow all standards and contain all the critical information required for logging.

- Software requirement: Python version higher than 3.0 (tested with Python 3.11.1).
- To install ruamel.yaml: pip install ruamel.yaml
- Execute the script with python review.py, then enter the source code folder path. The review report will be produced in the same folder where the review script is placed.
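For reference, a logger statement following the conventions this utility checks for might look like the Java sketch below. The identifier name (traceId) and message format are hypothetical placeholders; each team would substitute its own keywords, which the review script then searches for:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AccountService {

    private static final Logger LOGGER = LoggerFactory.getLogger(AccountService.class);

    public void processAccount(String traceId) {
        // "Start" marker with the unique identifier, matched by check_traceability
        LOGGER.info("traceId={} Start processAccount", traceId);

        long begin = System.currentTimeMillis();
        callExternalService();
        // response-time logging for an external call, matched by the response-time check
        LOGGER.info("traceId={} responseTime={}ms for external service call",
                traceId, System.currentTimeMillis() - begin);

        // "End" marker with the same unique identifier
        LOGGER.info("traceId={} End processAccount", traceId);
    }

    private void callExternalService() {
        // placeholder for a third-party, CICS, or DB call
    }
}
```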
Users expect new features and websites to be seamless and user-friendly when they go live. End-to-end website testing on local infrastructure thus becomes an unspoken critical requirement. However, if this testing is performed late, or only after the entire website or app has been developed, the possibility of bugs and code issues increases. Such issues can do more damage than we might think. According to a report by HubSpot, 88% of users are less likely to return to a website after a bad user experience. As much as $2.6 billion is lost each year to websites and images that take longer than an average of two seconds to load. Also, up to 8 out of 10 users stop visiting a website if it is incompatible with their device.

A mere look at these numbers is terrifying, given the cost and effort involved in fixing these issues at a later stage, in addition to the customer base lost to bad impressions and first experiences. In such situations, it becomes imperative to test websites beforehand on platforms where this cost can be reduced to a minimum. Cloud testing platforms help test such websites in various local environments by providing remote access to real browsers and operating systems. This allows you to verify the functionality and compatibility of your website across different configurations without having to set up a complex test infrastructure.

Testing a website, especially in a production environment, can be time-consuming and resource-intensive. This can slow development and make it difficult to detect bugs and issues early on, delaying developer feedback. Local website testing tests a website on a developer's machine using automated functional tests. These test scripts can be designed to integrate with the CI/CD pipeline and execute on each local deployment. This saves time and resources by identifying issues early, shortening the feedback cycle, and increasing the ROI on development and testing.

Automated local website testing enables developers to speed up and streamline the testing process. Effective test case management is crucial in this scenario, as it allows for testing on various browser configurations and OS versions to cater to the diverse systems used by end users. A well-designed test automation framework is essential for performing local testing efficiently. Because we will be discussing website testing in this article, Selenium is the best choice. Furthermore, because the website will be hosted locally, we will require a platform that allows for local website testing without interfering with the local infrastructure or the developer's machine.

In this article, we will learn more about local page testing and its advantages in the software development and testing cycle. We will then see how to write an automation test script for a locally hosted website and execute it in an organized manner, so as not to block local infrastructure while still getting faster feedback. So, let us get started.

What Is Local Website Testing?

Local website testing allows developers to host and test a website on their own computers or local infrastructure. Once the developer is confident, the website can be moved to a live testing server before making it live in production. This local copy behaves like the real website and provides a place to test it with the least threat. Testing includes checking cross-browser compatibility, user interactions, different links or images on the page, etc.
This configuration is different from a staging or pre-prod test environment, where an app or website is usually tested in a testing cycle by the QA team before it is made available in production. In a staging or pre-prod environment, more stable services are running, and features are tested at a later stage of development, requiring more regression testing and/or integration testing with external resources. As a result, we don't want to risk breaking such an environment with early-stage changes that are more prone to bugs. Locally hosting and testing websites becomes extremely important and useful in such cases.

There are a few different ways to set up local website testing. One common method is to use a local development server, such as XAMPP, which can be installed on a computer and configured to run a website. We will use the same to access the website on localhost.

Advantages of Local Website Testing

There are several advantages to using local website testing:

- Accelerated developer feedback: Local website testing greatly improves the feedback cycle, as developers can quickly make changes to the code and check the results. This leads to a better user experience and a more refined final product. Overall, it improves the efficiency and effectiveness of the development process and allows delivery of a high-quality website in a shorter time by reducing the risk of major issues after launch.
- Speed of execution: It allows developers to quickly test their changes without waiting for the code to be deployed to testing environments. This saves a lot of time and helps them iterate faster during the development cycle.
- Cost-effectiveness: Testing a website locally is highly cost-effective, as it lessens or even eliminates the time required for testing on a live server, saving hosting and associated services costs.
- Greater control and ease of debugging: A developer has better control over the environment configuration when performing local website testing. They also have access to various debugging tools on their computers, like the developer console. This allows them to replicate and debug issues more effectively, which might not be as easy on a live server due to limited access and control.
- Integration with the CI/CD pipeline: Local website testing can be used in conjunction with the Continuous Integration and Continuous Delivery (CI/CD) pipeline to ensure changes to the website are thoroughly tested before they are deployed. CI/CD is a software development practice that automatically builds, tests, and deploys changes to a server. Local website testing can be integrated into the pipeline as a separate step, allowing developers to test the website on different configurations and environments, such as different operating systems and browsers. This can help ensure the website is compatible with the many devices users have.
- Great fit for agile: Local website testing can be a valuable tool in an agile development environment because it allows developers to test changes to their website and receive feedback quickly. Agile development is an iterative, collaborative approach that emphasizes flexibility, rapid iteration, and fast feedback. Local testing adds the advantage of allowing devs and QAs to work in parallel and produce better results.

Configuring the Tunnel for Local Website Testing

Having understood the basics and advantages of local website testing, let us move to the implementation part and see how we can perform it on our local computers.
In this article, we will use the LambdaTest platform to test the locally hosted website. The LambdaTest tunnel helps you test plain HTML, CSS, PHP, Python, or similar web files saved on your local system over combinations of operating systems, browsers, and screen resolutions available on LambdaTest. The tunnel uses protocols such as WebSocket, HTTPS, and SSH (Secure Shell) to establish a secure and unique tunnel connection through corporate firewalls between your system and the LambdaTest cloud servers.

Before setting up the tunnel and seeing how it works, we must host a local website or webpage. In this article, we are referring to this CodePen project.

How To Host a Local Website

To set up and verify it by launching the webpage, follow these steps:

Step 1: Open the link mentioned above, export the project, and unzip the downloaded *.zip.

Step 2: Make sure to turn on XAMPP or any other web hosting tool you use. If you are using XAMPP, start the "Apache" service under "Actions."

Step 3: Copy and paste the content from the unzipped folder in Step 1 into the XAMPP htdocs folder, to access the website at the URL "http://localhost."

With this, the setup for the local website is done. Next, we move to the tunnel configuration and see how we can use it to perform automated testing of the local website.

How To Configure the Tunnel

This article covers configuring the LambdaTest tunnel connection and testing locally hosted web pages from a macOS (Big Sur) perspective. The configuration remains the same for all previous versions. LambdaTest also supports tunnel configuration and testing of local websites on Windows and Linux machines.

Step 1: Create your account on the LambdaTest platform and log in to the dashboard.

Step 2: Click "Configure Tunnel" on the top right and select "COMMAND LINE." Then, download the binary file by clicking on "Download Link." This binary helps establish a secure tunnel connection to the LambdaTest cloud servers.

Step 3: Navigate to your downloads folder and extract the downloaded zip.

Step 4: Copy the command to execute the downloaded binary from the dashboard. The command will look like the one below. You can mention an optional tunnelName in the command to identify which tunnel to execute your test case through in case of multiple tunnels:

```shell
LT --user {user's login email} --key {user's access key} --tunnelName {user's tunnel name}
```

Step 5: Execute the command to start the tunnel and make the connection. On a successful tunnel connection, you will see the prompt "You can start testing now." Note: the tunnel has been named LambdaTest.

Step 6: After this, move back to the LambdaTest Dashboard to verify the tunnel before we write the automation code for local website testing using Selenium with Java.

Step 7: Navigate to "Real Time Testing," select "Browser Testing," enter the localhost URL you want to test, and select the tunnel name. You can select the test configuration of your choice from various major browsers and their versions to perform a test session. After selecting the configuration, click on the "START" button.

Step 8: At this point, you should be navigated to your localhost URL. This shows that the setup is verified, and we can write the automation code.

Demonstration: Local Website Testing Using Selenium and Java

Having completed the tunnel setup, let us implement an automated test script for the same local website using Selenium and Java.
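Assuming a Maven project, the test script below needs the Selenium Java bindings and TestNG on the classpath. A sketch of the dependencies; the version numbers here are illustrative, so pick whatever current versions suit your project:

```xml
<dependencies>
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>4.8.0</version>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>7.7.1</version>
        <scope>test</scope>
    </dependency>
</dependencies>
```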
We will execute the script on the LambdaTest Selenium cloud grid through the tunnel we have already configured. Test scenario for the demonstration: navigate to the localhost using the tunnel and click on the first toggle button.

A sample test script for local website testing using Selenium with Java looks like the one below:

```java
package test.java;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class TestLocalWebsiteUsingTunnel {

    WebDriver driver = null;
    String user_name = System.getenv("LT_USERNAME") == null ? "LT_USERNAME" : System.getenv("LT_USERNAME");
    String access_key = System.getenv("LT_ACCESS_KEY") == null ? "LT_ACCESS_KEY" : System.getenv("LT_ACCESS_KEY");

    @BeforeTest
    public void testSetUp() throws Exception {
        ChromeOptions browserOptions = new ChromeOptions();
        browserOptions.setPlatformName("Windows 10");
        browserOptions.setBrowserVersion("108.0");
        HashMap<String, Object> ltOptions = new HashMap<String, Object>();
        ltOptions.put("username", user_name);
        ltOptions.put("accessKey", access_key);
        ltOptions.put("project", "Local Website Testing using Selenium JAVA");
        ltOptions.put("build", "Local Website Testing");
        ltOptions.put("tunnel", true);
        ltOptions.put("selenium_version", "4.0.0");
        ltOptions.put("w3c", true);
        browserOptions.setCapability("LT:Options", ltOptions);

        try {
            driver = new RemoteWebDriver(
                    new URL("https://" + user_name + ":" + access_key + "@hub.lambdatest.com/wd/hub"),
                    browserOptions);
        } catch (MalformedURLException exc) {
            exc.printStackTrace();
        }
    }

    @Test(description = "Demonstration of Automated Local Website Testing using LambdaTest Tunnel")
    public void testLocalWebsite() throws InterruptedException {
        driver.get("http://localhost");
        driver.findElement(By.cssSelector("[for='cb1']")).click();
    }

    @AfterTest
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```

Code Walkthrough

Step 1: The first step is to create an instance of RemoteWebDriver, as we will be executing the code on the Selenium cloud grid.

Step 2: Since we are using the LambdaTest cloud grid and tunnel for local website testing, we need to add its credentials to the environment variables. If you do not have your credentials, navigate to the LambdaTest Dashboard, click on the "Profile" icon in the top-right corner of your screen, then click on "Profile." You will find your username and access key there.

Step 3: Next, we add a function testSetUp() to set the initial browser capabilities that will be passed to the LambdaTest grid to define the browser and OS configurations. This method is annotated with the @BeforeTest annotation in TestNG, as we want to execute it before each test case run. The most important thing to note here is the line ltOptions.put("tunnel", true), which tells the LambdaTest grid that this automation script is part of localhost website testing and that the tunnel configuration is to be used for execution.

Step 4: After setting the initial capabilities and tunnel configuration, we add the test case function testLocalWebsite(), telling the driver to navigate to localhost and click on the first toggle button. For this click, we use the CSS selector of the web element.
Step 5: After executing every test script, we must close the browser. To do this, another function, tearDown(), is added and annotated with @AfterTest so it executes after every test execution.

Test Execution

So far, we have understood local website testing and how to make the relevant configurations to perform it on the LambdaTest platform using the tunnel. Now, we will execute the test script and see what the test execution looks like on the LambdaTest Dashboard. Since we have used TestNG annotations, the test script can be executed as a TestNG run.

Upon execution, you will see results like those below on the dashboard. Navigate to Automation -> Build to see the execution results. To view the details of an execution, click on the "Session Name" on the right side. Note: the URL tested is localhost, and the Tunnel ID is the same as the tunnelName, i.e., LambdaTest, which we specified while starting the tunnel in the configuration section.

Conclusion

In this article on how to perform local website testing using Selenium and Java, we learned about local website testing and why it is so important in the software development world, and we implemented it on the LambdaTest platform using an automation test script. Overall, local website testing is an effective solution for developers who want to ensure their website is thoroughly tested and free of bugs and issues before it goes to production. Happy Local Testing!
In my previous articles listed below, I showed how to use Swagger, especially the Springdoc implementation, for the code-first/bottom-up approach:

- OpenAPI 3 Documentation With Spring Boot
- Doing More With Springdoc-OpenAPI
- Extending Swagger and Spring Doc Open API

This time, I am writing about the design-first/top-down approach. I am not writing about the usual generated Java server and, say, associated Angular TypeScript client code; but first, some background context.

Background

Some time back, I had the opportunity to use PingFederate to solve a business problem for a client of mine (no details due to NDAs). This involved working with the US government's SSN verification web service and leveraging OIDC for this purpose. The actual code I wrote was just a few Spring Boot classes. The project was more about architecture, integration, infrastructure, etc. When working on this project, I created a side utility.

Highlights

- This is the first time in the PingFed world such a utility has been created.
- There are some innovative concepts in it.
- Creating it had some challenges. We will discuss them, along with how they were overcome.

What Does This Article Offer to the Reader?

- It speeds up getting the reader started on PingFederate.
- It introduces my utility, which helps in meeting the above objective.
- It also showcases two sample applications that demonstrate the Authorization Code Flow. These sample applications are used to demonstrate the effectiveness of our PingFederate configuration. Of particular interest to the reader will be the application that demonstrates my attempt at the authorization code flow using the BFF pattern for Spring Boot and Angular applications.

Note: While these sample applications have been tuned for PingFederate, it should be easy to tweak them for other OIDC providers like Okta, Auth0, etc. Also note: When working on my client's project, there was no front end; it was a machine-to-machine communication project. That said, for most readers, it is more relevant to have a front end in the examples. Therefore, the two examples do have a front end.

A Quick Swagger Recap

Swagger supports both the code-first/bottom-up and design-first/top-down approaches. A Swagger document can be created by using:

- The Swagger Editor
- Code-first libraries like springdoc, SpringFox, Swagger Core, and related libraries that can introspect the actual code

The Swagger YAML/JSON document can be visualized using the Swagger UI. This UI is also exposed by the springdoc and SpringFox libraries. Swagger Codegen can be used to generate server/client code. Lastly, there is SwaggerHub, which leverages all the Swagger tools and offers much more when using the design-first/top-down approach.

What Is PingFederate?

PingFederate describes itself as follows: "PingFederate is an enterprise federation server that enables user authentication and single sign-on. It serves as a global authentication authority that allows customers, employees, and partners to securely access all the applications they need from any device. PingFederate easily integrates with applications across the enterprise, third-party authentication sources, diverse user directories, and existing IAM systems, all while supporting current and past versions of identity standards like OAuth, OpenID Connect, SAML, and WS-Federation. It will connect everyone to everything."

In my limited context, I used it for OIDC and OAuth purposes. While on the subject of PingFederate: it is not a free product.
That said, you can always download and use the latest version of Ping products for free; trial license files are available, and I was able to keep getting new ones as needed. I found PingFederate very easy to learn. I used it because, in my client project, some requirements were met better by PingFederate than by, say, its cloud-based alternative.

What Is the Problem We Are Trying To Solve?

Problem definition: The PingFederate Admin API can be used for automating its setup configurations, in addition to doing it manually via the admin console. The lack of any programming-language wrapper makes it hard to administer/configure automatically.

To elaborate on the point by way of illustration: AWS provides SDKs in various programming languages. These SDKs sit on top of the underlying web service APIs. It's always easier to use an AWS SDK than to work with the underlying web services using Postman/cURL. Similarly for PingFederate, a Java wrapper was the goal, and it was achieved.

Note: This has been done for the first time in the PingFederate world. :) It is also possible to achieve this in other languages if needed.

Is This All That We Did?

Is all we did run a Maven-based code generator that reads the Swagger specification of the PingFederate Admin API to generate some code, and then use that code? Yes and no.

High-Level Solution

Here, we have two flows, represented by blue and green arrows.

The blue arrows demonstrate:

- The use of Swagger Core and related code-first annotation-based libraries, causing the automatic generation of the Swagger YAML/JSON Admin API document; this is part of PingFederate itself.
- This Swagger document is leveraged by the code generator to generate actual code. In our case, we are generating Java REST client code.

The green arrows demonstrate:

- The user interacts with our library: additional convenience code and a particular RestTemplate interceptor.
- This in turn invokes the generated code.
- Finally, the PingFederate Admin API is invoked, which changes/configures PingFederate.

Hurdle in getting this to work: The generated code was not usable in some scenarios. Read more about that and the adopted solution in these Swagger notes on GitHub. In addition to the general approach used, we had to innovate further to resolve the hurdles. That's where the interceptor was leveraged.

How To Set Up

Follow the steps in this GitHub repo. There is a README.md and a Setup.md. To summarize, these are the steps:

1. Clone the project.
2. Maven-build the project.
3. Download the ZIP files and license files of PingFederate and PingDirectory. Download a MySQL connector JAR file as well.
4. Verify the downloads.
5. Configure MySQL root user credentials.
6. Install and start PingDirectory and PingFederate using the provided Ant script.
7. Launch the PingFederate Admin console for the first time.
8. Maven-build the project with the additional option of generating the Admin API client code.
9. Use the generated Admin API client code to administer PingFederate.

The code is available in the Git repository.
However, let's discuss some code below for better visualization:

```java
public void setup() throws NoSuchAlgorithmException, KeyManagementException, FileNotFoundException, IOException {
    String ldapDsId = "MyLDAP";
    String formAdapterid = "HTMLFormAdapter";
    String passwordValidatorId = "PasswordValidator";
    String atmId1 = "testingATM1";
    String policyId1 = "testingpolicy1";
    String ldapAttributeSourceId = "mypingfedldapds";
    String atmId2 = "testingATM2";
    Properties mySqlProps = PropertiesUtil.loadProps(new File("../mysql.properties"));
    this.setupDb(mySqlProps);
    new LdapCreator(core)
        .createLdap(ldapDsId, "MyLdap", "localhost", "cn=Directory Manager", "manager");
    PasswordCredentialValidator passwordCredentialValidator = new PasswordCredentialValidatorCreator(core)
        .createPasswordCredentialValidator(
            ldapDsId, passwordValidatorId, passwordValidatorId, "uid=${username}");
    IdpAdapter idpAdapter1 = new IdpAdapterCreator(core)
        .createIdpAdapter(
            passwordValidatorId, formAdapterid,
            new String[] {"givenName", "mail", "sn", "uid"},
            new String[] {"uid"},
            "uid");
    IdpAdapterMapping createdIdpAdapterMapping = new IdpAdapterMappingCreator(core)
        .createIdpAdapterGrantMapping(formAdapterid, "username");
    new JwtAtmCreator(core)
        .createJWTATM(
            atmId1, "jwtatm1", 120, 1,
            AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName,
            "iat", "nbf");
    new AtmMappingCreator(core)
        .createTokenMappings(
            "jwtatm1mapping",
            AccessTokenMappingContext.TypeEnum.IDP_ADAPTER, formAdapterid, atmId1,
            new AccessTokenMappingAttribute(null, AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName,
                SourceTypeIdKey.TypeEnum.OAUTH_PERSISTENT_GRANT, "USER_KEY"),
            new AccessTokenMappingAttribute(null, "iat", SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#iat=@org.jose4j.jwt.NumericDate@now().getValue()"),
            new AccessTokenMappingAttribute(null, "nbf", SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#nbf = @org.jose4j.jwt.NumericDate@now(), #nbf.addSeconds(10), #nbf = #nbf.getValue()")
        );
    new JwtAtmCreator(core)
        .createJWTATM(atmId2, "jwtatm2", 5, 2, "iss", "sub", "aud", "nbf", "iat");
    new AtmMappingCreator(core)
        .createTokenMappings("jwtatm2mapping",
            AccessTokenMappingContext.TypeEnum.CLIENT_CREDENTIALS, null, atmId2,
            new AccessTokenMappingAttribute(null, "iss", SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#value = #this.get(\"context.HttpRequest\").getObjectValue().getRequestURL().toString(), #length = #value.length(), #length = #length-16, #iss = #value.substring(0, #length)"),
            new AccessTokenMappingAttribute(null, "sub", SourceTypeIdKey.TypeEnum.TEXT,
                "6a481348-42a1-49d7-8361-f76ebd23634b"),
            new AccessTokenMappingAttribute(null, "aud", SourceTypeIdKey.TypeEnum.TEXT,
                "https://apiauthete.ssa.gov/mga/sps/oauth/oauth20/token"),
            new AccessTokenMappingAttribute(null, "nbf", SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#nbf = @org.jose4j.jwt.NumericDate@now(), #nbf.addSeconds(10), #nbf = #nbf.getValue()"),
            new AccessTokenMappingAttribute(null, "iat", SourceTypeIdKey.TypeEnum.EXPRESSION,
                "#iat=@org.jose4j.jwt.NumericDate@now().getValue()")
        );
    new ScopesCreator(core).addScopes("email", "foo", "bar");
    new ClientCreator(core)
        .createClient(
            AutomationSharedConstants.AuthCodeClientId,
            AutomationSharedConstants.AuthCodeClientId,
            AutomationSharedConstants.AuthCodeClientSecret,
            atmId1, true, null,
            "http://" + AutomationSharedConstants.HOSTNAME + ":8080/oidc-hello|http://" + AutomationSharedConstants.HOSTNAME + ":8081/login/oauth2/code/pingfed",
            GrantTypesEnum.AUTHORIZATION_CODE,
            GrantTypesEnum.ACCESS_TOKEN_VALIDATION);
    new ClientCreator(core)
        .createClient(
            "manual2", "manual2", "secret",
            atmId2, true, null, "",
            GrantTypesEnum.CLIENT_CREDENTIALS);
    Pair<String, String[]>[] scopesToAttributes = new Pair[] {
        Pair.with("email", new String[] {"email", "family_name", "given_name"})
    };
    new OpenIdConnectPolicyCreator(core)
        .createOidcPolicy(
            atmId1, policyId1, policyId1, false, false, false, 5,
            new Triplet[] {
                Triplet.with("email", true, true),
                Triplet.with("family_name", true, true),
                Triplet.with("given_name", true, true)},
            AttributeSource.TypeEnum.LDAP, ldapDsId, ldapAttributeSourceId, "my pingfed ldap ds",
            SourceTypeIdKey.TypeEnum.LDAP_DATA_STORE,
            new Pair[] {
                Pair.with("sub", "Subject DN"),
                Pair.with("email", "mail"),
                Pair.with("family_name", "sn"),
                Pair.with("given_name", "givenName")
            },
            scopesToAttributes,
            true, true,
            "uid=${" + AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName + "}",
            "/users?uid=${" + AutomationSharedConstants.AtmOauth_PersistentGrantUserKeyAttrName + "}");
}
```

The above is an actual code snippet I used to administer PingFederate. As an example, let's look at what is happening in the LdapCreator class's createLdap method:

```java
public DataStore createLdap(String id, String name, String hostName, String userDn, String password) {
    DataStoresApi dataStoresApi = new DataStoresApi(core.getApiClient());
    core.setRequestTransformBeans(new TransformBean("type", type -> TypeEnum.LDAP.name()));
    core.setResponseTransformBeans(new TransformBean("type", type -> type.charAt(0) + type.substring(1)
        .toLowerCase() + "DataStore"));
    LdapDataStore ldapDataStore = new LdapDataStore();
    List<String> hostNames = addStringToNewList(hostName);
    ldapDataStore.setHostnames(hostNames);
    ldapDataStore.setType(TypeEnum.LDAP);
    ldapDataStore.setId(id);
    ldapDataStore.setName(name);
    ldapDataStore.setLdapType(LdapTypeEnum.PING_DIRECTORY);
    ldapDataStore.setUserDN(userDn);
    ldapDataStore.setPassword(password);
    DataStore createdDataStore = dataStoresApi.createDataStore(ldapDataStore, false);
    return createdDataStore;
}
```

LdapCreator is a layer written on top of the generated code. The classes DataStoresApi, LdapDataStore, and DataStore are from the generated code. In the createLdap method, the lines below are how we instruct the interceptor to transform the request and response:

```java
core.setRequestTransformBeans(new TransformBean("type", type -> TypeEnum.LDAP.name()));
core.setResponseTransformBeans(new TransformBean("type", type -> type.charAt(0) + type.substring(1).toLowerCase() + "DataStore"));
```

(Again, you can read more about that via the previous link to the Swagger notes on GitHub.)

So it did something. But how do we know it really worked?

Does It Really Work?

The code base in the repository also contains example code that demonstrates the Authorization Code Flow. The example code projects can be set up and run using their Readme.md files. In addition to hopefully being useful in their own right, they also serve to demonstrate that our PingFederate setup worked.

The Example Code Projects

There are two examples:

- simple-oidc-check
- springboot.oidc.with.angular

The example simple-oidc-check is a roll-your-own example. It demonstrates the Authorization Code Flow and also the Client Credentials grant flow. It can be used to better understand many different concepts, including JEE and OIDC. There are some concepts in there that might raise your eyebrows and are not so often seen.

The example springboot.oidc.with.angular is an Authorization Code Flow BFF pattern implementation. This is often considered the most secure approach because the access token is kept only at the back end.
The access token never reaches the JavaScript/HTML layer. This and other approaches are also discussed in the example code's Readme.md.

Supported Versions

The versions of PingFederate supported by this utility are detailed here.

Future Vision

I created this utility mainly because it helped me stand up my PingFed PoCs rapidly when working on a client project. I will try to maintain it as long as it does not tax me too much and PingFederate itself does not provide a similar solution. I can already think of some more improvements and enhancements. I can be encouraged to maintain and carry on with it by stars, likes, clones, etc. on the Git repository.
Java is a popular programming language used for developing a wide range of applications, including web, mobile, and desktop applications. It provides many useful data structures for developers to use in their programs, one of which is the Map interface. The Map interface is used to store data in key-value pairs, making it an essential data structure for many applications. In this article, we will discuss the use of Map.of() and new HashMap<>() in Java, the difference between them, and the benefits of using Map.of().

What Is Map.of()?

Map.of() is a method introduced in Java 9 that allows developers to create an immutable map with up to 10 key-value pairs. It provides a convenient and concise way of creating maps, making it easier to create small maps without having to write a lot of code. Map.of() is an improvement over the previous way of creating small maps using the constructor of the HashMap class, which can be cumbersome and verbose.

What Is new HashMap<>()?

new HashMap<>() is a constructor provided by the HashMap class in Java, which allows developers to create a new instance of a HashMap. It creates a mutable map, which means that the map can be modified by adding, removing, or updating key-value pairs. It is a commonly used way of creating maps in Java, especially when dealing with larger sets of data.

Benchmarking Map.of() and new HashMap<>()

To compare the performance of Map.of() and new HashMap<>() in Java, we can use benchmarking tools to measure the time taken to perform various operations on maps created using these methods. In our benchmark, we will measure the time taken to get a value from a map and the time taken to insert values into a map. It's worth noting that our benchmarks are limited to a small set of data (ten items). It's possible that the results could differ for larger data sets or more complex use cases.
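Before diving into the benchmark, a minimal sketch of the practical difference between the two (the class name and values here are illustrative): a map created with Map.of() rejects modification at runtime, while a HashMap accepts it.

```java
import java.util.HashMap;
import java.util.Map;

public class MapComparison {
    public static void main(String[] args) {
        Map<String, Integer> immutable = Map.of("a", 1, "b", 2);
        Map<String, Integer> mutable = new HashMap<>(immutable); // mutable copy

        mutable.put("c", 3); // fine: HashMap is mutable
        try {
            immutable.put("c", 3); // Map.of() maps throw on modification
        } catch (UnsupportedOperationException e) {
            System.out.println("Map.of() maps cannot be modified");
        }
    }
}
```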
```java
package ca.bazlur;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 20, time = 1)
@Fork(1)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MapBenchmark {
    private static final int SIZE = 10;

    private Map<Integer, String> mapOf;
    private Map<Integer, String> hashMap;

    @Setup
    public void setup() {
        mapOf = Map.of(
                0, "value0",
                1, "value1",
                2, "value2",
                3, "value3",
                4, "value4",
                5, "value5",
                6, "value6",
                7, "value7",
                8, "value8",
                9, "value9"
        );
        hashMap = new HashMap<>();
        hashMap.put(0, "value0");
        hashMap.put(1, "value1");
        hashMap.put(2, "value2");
        hashMap.put(3, "value3");
        hashMap.put(4, "value4");
        hashMap.put(5, "value5");
        hashMap.put(6, "value6");
        hashMap.put(7, "value7");
        hashMap.put(8, "value8");
        hashMap.put(9, "value9");
    }

    @Benchmark
    public void testMapOf(Blackhole blackhole) {
        Map<Integer, String> map = Map.of(
                0, "value0",
                1, "value1",
                2, "value2",
                3, "value3",
                4, "value4",
                5, "value5",
                6, "value6",
                7, "value7",
                8, "value8",
                9, "value9"
        );
        blackhole.consume(map);
    }

    @Benchmark
    public void testHashMap(Blackhole blackhole) {
        Map<Integer, String> hashMap = new HashMap<>();
        hashMap.put(0, "value0");
        hashMap.put(1, "value1");
        hashMap.put(2, "value2");
        hashMap.put(3, "value3");
        hashMap.put(4, "value4");
        hashMap.put(5, "value5");
        hashMap.put(6, "value6");
        hashMap.put(7, "value7");
        hashMap.put(8, "value8");
        hashMap.put(9, "value9");
        blackhole.consume(hashMap);
    }

    @Benchmark
    public void testGetMapOf() {
        for (int i = 0; i < SIZE; i++) {
            mapOf.get(i);
        }
    }

    @Benchmark
    public void testGetHashMap() {
        for (int i = 0; i < SIZE; i++) {
            hashMap.get(i);
        }
    }
}
```

```
Benchmark                     Mode  Cnt   Score   Error  Units
MapBenchmark.testGetHashMap   avgt   20  14.999 ± 0.433  ns/op
MapBenchmark.testGetMapOf     avgt   20  16.327 ± 0.119  ns/op
MapBenchmark.testHashMap      avgt   20  84.920 ± 1.737  ns/op
MapBenchmark.testMapOf        avgt   20  83.290 ± 0.471  ns/op
```

These are the benchmark results comparing the performance of new HashMap<>() and Map.of() in Java. The benchmark was conducted with a limited, small data set (ten entries). The results show that get operations are slightly faster on a HashMap than on an immutable map created with Map.of(). However, creating an immutable map using Map.of() is slightly faster than creating and populating a HashMap.

Note that depending on your JDK distribution and computer, the benchmark results may differ slightly when you try them. However, in most cases, the results should be consistent. It's always a good idea to run your own benchmarks to ensure you make the right choice for your specific use case. Additionally, remember that micro-benchmarks should always be taken with a grain of salt and not used as the sole factor in making a decision. Other factors, such as memory usage, thread safety, and readability of code, should also be considered. The source code can be found on GitHub.

In my opinion, the slight variations in performance may not hold much importance in most cases. It is essential to consider other aspects, such as the particular use case, conciseness, well-organized code, and preferred features (for example, mutable or immutable nature) when deciding between HashMap and Map.of(). For straightforward scenarios, Map.of() might still have the upper hand regarding simplicity and brevity.
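One practical limitation worth keeping in mind: the Map.of() overloads only go up to 10 key-value pairs. For larger immutable maps, you can fall back on Map.ofEntries(), introduced alongside it in Java 9. A small sketch with illustrative entries:

```java
import java.util.Map;
import static java.util.Map.entry;

public class BigImmutableMap {
    // Map.ofEntries() takes varargs entries, so it is not limited to 10 pairs
    static final Map<Integer, String> BIG_MAP = Map.ofEntries(
            entry(0, "value0"),
            entry(1, "value1"),
            entry(2, "value2"),
            // ... add as many entries as needed
            entry(11, "value11")
    );
}
```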
So let's look at the benefits of using Map.of().

Benefits of Using Map.of()

There are several benefits to using Map.of() over new HashMap<>() in Java:

Conciseness: Map.of() provides a concise and convenient way of creating small maps in Java, which makes the code more readable and easier to maintain.

Immutability: Map.of() creates immutable maps; once a map is created, it cannot be modified. This provides a degree of safety and security for the data stored in the map.

Fail-fast safety: Both Map.of() and HashMap are type-safe through generics, but Map.of() additionally rejects null keys and values (with a NullPointerException) and duplicate keys (with an IllegalArgumentException) at creation time, catching data errors early instead of letting them propagate.

Conclusion

Map.of() is a powerful and useful method introduced in Java 9 that provides a more concise way of creating small maps in Java, with added benefits such as immutability and fail-fast creation-time checks. Our benchmarking shows that the latencies of Map.of() and new HashMap<>() for small maps are close, with overlapping error bars, which makes it difficult to definitively conclude that one method is significantly faster than the other based on this data alone. Although the performance difference may not be significant, the other advantages of Map.of() make it an appealing option. Developers should consider using Map.of() when creating small maps in Java to take advantage of these benefits.
If you've ever implemented a Java project using a mainstream build system such as Ant, Maven, or Gradle, you've probably noticed that you need an extra language to describe how to build your project. While this may seem appealing for basic tasks, it gets trickier for more complicated ones: you need to learn a soup of XML, write verbose configurations, or write Kotlin DSLs that are intertwined with complex tooling. If you've gone further by writing pipelines for deploying and managing releases, you've probably had to write shell or Groovy scripts for your CI/CD tools. While this may be fine for simple tasks, it becomes cumbersome as complexity grows. You'd prefer to rely on the knowledge and tooling you already use for regular code: modeling, refactoring, and running/debugging in your IDE.

This is where JeKa comes in. JeKa is a very thin tool that allows you to execute arbitrary Java source code from the command line or within your IDE. While this may not seem like a big deal at first glance, this capability enables you to:

- Write any arbitrary script using plain Java code, run and debug it in an IDE, and invoke arbitrary public methods, so you can host many scripts in a single class (a minimal sketch of such a script appears at the end of this article).
- Invoke this code from the command line or any CI/CD tool without needing to compile it yourself; JeKa handles the compilation for you.
- Use any library available in the Java ecosystem in your scripts: just declare dependencies in annotations, and JeKa resolves them behind the scenes.

With this capability, you can get rid of cryptic shell scripts and implement powerful, portable scripts without additional knowledge.

The second stage of JeKa is the utilities it embeds. When writing scripts, you can use any libraries, but JeKa also bundles utilities frequently needed for automation tasks: dealing with file sets and zip files, Git, launching OS processes synchronously and retrieving their results, Java compilation and testing, Maven dependency/repository management, a full JVM project build model, and XML handling. These utilities can help you implement CI/CD pipelines or even build and test entire Java projects.

The last stage is a plugin and parameterization mechanism that makes JeKa a first-class citizen in the build tool space. Each plugin provides methods and configuration to integrate an external technology with minimal effort and little or no extra code. Currently, there are plugins for JVM projects, Node.js, Spring Boot, SonarQube, JaCoCo, Kotlin, Protocol Buffers, and Nexus repositories.

With all these capabilities, JeKa lets you implement an entire Java project with automated delivery using a single language for everything. This language can be Java, or it can be Kotlin, as JeKa provides the same capabilities for both. Additionally, an IntelliJ plugin exists to improve the user experience with JeKa. For a better understanding, check out this GitHub repository, which demonstrates numerous projects built with JeKa. Through it, you'll gain insight into how JeKa can build projects with popular technologies like Spring Boot, Kotlin, Node.js, SonarQube, and JaCoCo. JeKa also provides detailed documentation describing exactly how it works, so you won't be left to your own devices. What do you think about this initiative? Do you think JeKa can ease the full development-delivery cycle?
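As promised above, here is a minimal sketch of what such a script-style class could look like. Everything here is illustrative: the class name and method bodies are mine, and the dependency annotation shown in the comment is an assumption paraphrased from JeKa's documentation (its exact name and any required base class vary by JeKa version, so check the current docs):

Java
// A plain Java class used as a JeKa "script": each public no-arg method is
// meant to be invokable from the command line without compiling anything
// yourself. (The exact invocation syntax is an assumption; see the JeKa docs.)
class Scripts {

    // Hypothetical: JeKa can resolve third-party dependencies declared via an
    // annotation on the script, e.g. something like
    // @JkInjectClasspath("org.apache.commons:commons-lang3:3.12.0")

    public void hello() {
        System.out.println("Hello from a plain-Java script!");
    }

    public void cleanOutput() throws java.io.IOException {
        // Ordinary JDK APIs instead of shell commands
        java.nio.file.Files.deleteIfExists(java.nio.file.Path.of("output/build.log"));
    }
}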
Previously, we had a look at the Lock class and how it works behind the scenes. In this post, we shall look at LockSupport and its native methods, which are among the building blocks for locks and synchronization classes.

The methods of LockSupport are static and, behind the scenes, they are based on the Unsafe class. The Unsafe class is not intended to be used by your code directly; only trusted code can obtain instances of it. Looking at the methods available, we can identify the park methods, the unpark method, and the getBlocker method.

Java
public class LockSupport {
    ...
    public static void park()
    public static void park(Object blocker)
    public static void parkNanos(long nanos)
    public static void parkNanos(Object blocker, long nanos)
    public static void parkUntil(long deadline)
    public static void parkUntil(Object blocker, long deadline)
    public static Object getBlocker(Thread t)
    public static void unpark(Thread thread)
    ...
}

Each thread that uses the LockSupport class is associated with a permit. When we call park, the thread is disabled for thread scheduling purposes. Provided the permit is available, the permit is consumed and the call returns immediately.

Java
public final class Unsafe {
    ...
    public native void unpark(Object thread);
    public native void park(boolean isAbsolute, long time);
    ...
}

We can trace the permit down through the source code and eventually end up at the POSIX implementation:

C++
void Parker::park(bool isAbsolute, jlong time) {
  if (Atomic::xchg(&_counter, 0) > 0) return;

  JavaThread *jt = JavaThread::current();

  if (jt->is_interrupted(false)) {
    return;
  }

  struct timespec absTime;
  if (time < 0 || (isAbsolute && time == 0)) { // don't wait at all
    return;
  }
  if (time > 0) {
    to_abstime(&absTime, time, isAbsolute, false);
  }

  ThreadBlockInVM tbivm(jt);

  if (pthread_mutex_trylock(_mutex) != 0) {
    return;
  }

  int status;
  if (_counter > 0) { // no wait needed
    _counter = 0;
    status = pthread_mutex_unlock(_mutex);
    assert_status(status == 0, status, "invariant");
    OrderAccess::fence();
    return;
  }

  OSThreadWaitState osts(jt->osthread(), false /* not Object.wait() */);

  assert(_cur_index == -1, "invariant");
  if (time == 0) {
    _cur_index = REL_INDEX; // arbitrary choice when not timed
    status = pthread_cond_wait(&_cond[_cur_index], _mutex);
    assert_status(status == 0 MACOS_ONLY(|| status == ETIMEDOUT), status, "cond_wait");
  } else {
    _cur_index = isAbsolute ? ABS_INDEX : REL_INDEX;
    status = pthread_cond_timedwait(&_cond[_cur_index], _mutex, &absTime);
    assert_status(status == 0 || status == ETIMEDOUT, status, "cond_timedwait");
  }
  _cur_index = -1;

  _counter = 0;
  status = pthread_mutex_unlock(_mutex);
  assert_status(status == 0, status, "invariant");
  OrderAccess::fence();
}

In a nutshell, the _counter field acts as the permit: if the permit is greater than zero, it is consumed and the method returns immediately. If an interrupt is pending, the method also returns immediately. If a nanosecond timeout has been supplied, the time to wait on the condition is calculated. Using the mutex provided, the thread will either wait until it is unparked or interrupted, or wait for the nanoseconds supplied.
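Before the park/unpark example below, here is a minimal sketch of the timed variant (the class name and timing values are mine):

Java
import java.util.concurrent.locks.LockSupport;

public class ParkNanosDemo {
    public static void main(String[] args) {
        long start = System.nanoTime();

        // Park the current thread for roughly 500 ms. Note that parkNanos may
        // return early: spuriously, on interrupt, or immediately if a permit
        // is already available, so real code must re-check its condition in a loop.
        LockSupport.parkNanos(500_000_000L);

        System.out.printf("Parked for ~%d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}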
Let’s see an example of parking a thread:

Java
@Test
void park() throws InterruptedException {
    Thread secondaryThread = new Thread(LockSupport::park);
    secondaryThread.start();

    secondaryThread.join(2000);
    log.info("Could not join; thread is parked");
    assertTrue(secondaryThread.isAlive());

    LockSupport.unpark(secondaryThread);
    secondaryThread.join();
    assertFalse(secondaryThread.isAlive());
    log.info("Thread was unparked");
}

We start the thread and park it. When the main thread tries to join it, we use a time limit; if the time limit were not set, we would wait forever. Eventually the timeout elapses, and before retrying the join, we unpark the thread. As expected, the thread is unparked and eventually finishes.

If the permit is not available, the thread lies dormant, disabled for thread scheduling. Initially, the permit is zero. We can try this with a single thread:

Java
@Test
void unParkAndPark() {
    final Thread mainThread = Thread.currentThread();
    LockSupport.unpark(mainThread);
    LockSupport.park();
}

By calling unpark first, we made a permit available; thus, on park, the permit was consumed and we returned immediately.

The other element we see in the LockSupport class is the blocker. The blocker maps to the parkBlocker field in Java's Thread implementation.

Java
public class Thread implements Runnable {
    ...
    volatile Object parkBlocker;
    ...
}

The blocker represents the synchronization object responsible for parking this thread. Recording which object is responsible for a thread being parked helps with debugging and monitoring. The Unsafe methods are called to set the parkBlocker value on the Thread instance.

Now that we have seen how LockSupport works behind the scenes, we can look at the example provided in the Javadoc: the FIFOMutex.

Java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class FIFOMutex {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<Thread>();

    public void lock() {
        boolean wasInterrupted = false;
        Thread current = Thread.currentThread();
        waiters.add(current);

        // Block while not first in queue or cannot acquire lock
        while (waiters.peek() != current || !locked.compareAndSet(false, true)) {
            LockSupport.park(this);
            if (Thread.interrupted()) // ignore interrupts while waiting
                wasInterrupted = true;
        }

        waiters.remove();
        if (wasInterrupted) // reassert interrupt status on exit
            current.interrupt();
    }

    public void unlock() {
        locked.set(false);
        LockSupport.unpark(waiters.peek());
    }
}

All threads are added to the queue. Since this is a FIFO queue, the first thread should set the locked variable to true, proceed, and remove itself from the queue; the remaining threads are parked. In case of an interrupt, park returns, so we reassert the interrupt status on the thread on exit. Also, note that the blocker is set to the FIFOMutex instance. When unlock is called, the thread at the head of the queue is unparked and resumes. We can also check another usage of LockSupport.
Rate limiters can benefit from LockSupport. Consider the source code of the Refill Rate Limiter:

Java
private boolean waitForPermission(final long nanosToWait) {
    waitingThreads.incrementAndGet();
    long deadline = currentNanoTime() + nanosToWait;
    boolean wasInterrupted = false;

    while (currentNanoTime() < deadline && !wasInterrupted) {
        long sleepBlockDuration = deadline - currentNanoTime();
        parkNanos(sleepBlockDuration);
        wasInterrupted = Thread.interrupted();
    }

    waitingThreads.decrementAndGet();
    if (wasInterrupted) {
        currentThread().interrupt();
    }
    return !wasInterrupted;
}

In a rate limiter, the permits available are limited over time. When a thread tries to acquire a permit, it has the option to wait until one becomes available. In that case, we can park the thread; this way, our rate limiter avoids burning CPU time busy-spinning.

That's it. Now that we know about LockSupport, we can proceed to more interesting concurrency concepts.
When a method is synchronized, only one thread can enter that object's method at a given point in time. If any other thread tries to enter the synchronized method, it will NOT be allowed in; it will be put into the BLOCKED state. In this post, let's learn a little more about the synchronized method.

Java Synchronized Method Example

It's always easier to learn with an example. Here is an interesting program I have put together that helps us understand the synchronized method's behavior better:

01: public class SynchronizationDemo {
02:
03:    private static class BoyFriendThread extends Thread {
04:
05:       @Override
06:       public void run() {
07:
08:          girlFriend.meet();
09:       }
10:    }
11:
12:    private static GirlFriend girlFriend = new GirlFriend();
13:
14:    public static void main(String[] args) {
15:
16:       for (int counter = 0; counter < 10; ++counter) {
17:
18:          BoyFriendThread fThread = new BoyFriendThread();
19:          fThread.setName("BoyFriend-" + counter);
20:          fThread.start();
21:       }
22:    }
23: }

01: public class GirlFriend {
02:
03:    public void meet() {
04:
05:       String threadName = Thread.currentThread().getName();
06:       System.out.println(threadName + " meeting started!");
07:       System.out.println(threadName + " meeting ended!!");
08:    }
09: }

In this program, there are two classes:

1. SynchronizationDemo: In this class, you can notice that we are starting 10 BoyFriendThread instances in lines #18-20. Each BoyFriendThread invokes the meet() method on the GirlFriend object in line #8.

2. GirlFriend: In this class, there is only one method, meet(). This method prints the BoyFriend thread's name along with "meeting started!" and "meeting ended!!".

When the SynchronizationDemo program is executed, it prints the following output:

01: BoyFriend-0 meeting started!
02: BoyFriend-8 meeting started!
03: BoyFriend-1 meeting started!
04: BoyFriend-0 meeting ended!!
05: BoyFriend-4 meeting started!
06: BoyFriend-3 meeting started!
07: BoyFriend-3 meeting ended!!
08: BoyFriend-2 meeting started!
09: BoyFriend-8 meeting ended!!
10: BoyFriend-2 meeting ended!!
11: BoyFriend-5 meeting started!
12: BoyFriend-6 meeting started!
13: BoyFriend-6 meeting ended!!
14: BoyFriend-7 meeting started!
15: BoyFriend-7 meeting ended!!
16: BoyFriend-9 meeting started!
17: BoyFriend-1 meeting ended!!
18: BoyFriend-4 meeting ended!!
19: BoyFriend-9 meeting ended!!
20: BoyFriend-5 meeting ended!!

You can notice that the "meeting started!" and "meeting ended!!" statements of each thread are not printed consecutively. For example, on line #1, a meeting with BoyFriend-0 started, and while that meeting was in progress, meetings with BoyFriend-8 (line #2) and BoyFriend-1 (line #3) also started. Only after that does the meeting with BoyFriend-0 end. These kinds of mixed meetings continue throughout the program. If this happened in the real world, our GirlFriend object would be in trouble. We should save her from this trouble: only after her meeting with one boyfriend completes should the next meeting start. This is where synchronization comes to help. Let's change the meet() method in the GirlFriend class to be synchronized and execute the program once again:

01: public class GirlFriend {
02:
03:    public synchronized void meet() {
04:
05:       String threadName = Thread.currentThread().getName();
06:       System.out.println(threadName + " meeting started!");
07:       System.out.println(threadName + " meeting ended!!");
08:    }
09: }

Notice that in line #3, the synchronized keyword has been added to the meet() method.
When we executed the same program once again, below is the result we got:

01: BoyFriend-2 meeting started!
02: BoyFriend-2 meeting ended!!
03: BoyFriend-0 meeting started!
04: BoyFriend-0 meeting ended!!
05: BoyFriend-5 meeting started!
06: BoyFriend-5 meeting ended!!
07: BoyFriend-8 meeting started!
08: BoyFriend-8 meeting ended!!
09: BoyFriend-9 meeting started!
10: BoyFriend-9 meeting ended!!
11: BoyFriend-6 meeting started!
12: BoyFriend-6 meeting ended!!
13: BoyFriend-7 meeting started!
14: BoyFriend-7 meeting ended!!
15: BoyFriend-4 meeting started!
16: BoyFriend-4 meeting ended!!
17: BoyFriend-3 meeting started!
18: BoyFriend-3 meeting ended!!
19: BoyFriend-1 meeting started!
20: BoyFriend-1 meeting ended!!

Bingo!! Now you can see that each meeting with a boyfriend starts and ends sequentially. Only after the meeting with BoyFriend-0 completes does the meeting with BoyFriend-5 start. This should make the GirlFriend object quite happy, as she can focus on one boyfriend at a time.

Note: Some of you might wonder why the BoyFriend threads are not meeting the GirlFriend object in sequential order. The answer to this question will come in an upcoming post.

How Does the Synchronized Method Work in Java?

When you make a method synchronized, only one thread is allowed to execute that method at a given point in time. When a thread enters a synchronized method, it acquires the lock of the object; only after this thread releases the lock are other threads allowed to enter the synchronized method. (A minimal sketch of this lock-acquisition behavior, written as an equivalent synchronized block, follows at the end of this post.) Say the BoyFriend-0 thread is executing the meet() method; only after this thread exits the method are other threads allowed to enter it. Until then, all other threads trying to invoke the meet() method are put into the BLOCKED thread state.

Threads' Behavior When a Method Is Synchronized

To confirm this theory, we executed the above program and captured a thread dump using the open-source yCrash script, then analyzed the dump with the fastThread tool. Here is the generated thread dump analysis report for this simple program. Below is an excerpt from the report:

Fig 1: fastThread tool reporting 9 threads in the BLOCKED state
Fig 2: fastThread tool reporting 9 threads in the BLOCKED state when accessing the GirlFriend object

The fastThread tool reports the total number of blocked threads and a transitive graph indicating where they are BLOCKED. In this example (Fig 1), the tool reported that 9 threads are in the BLOCKED state. From the transitive graph (Fig 2), you can see that BoyFriend-2 is blocking the remaining 9 BoyFriend threads. Clicking on a thread name shows its complete stack trace. Below is the stack trace of one of the BLOCKED BoyFriend threads:

01: java.lang.Thread.State: BLOCKED (on object monitor)
02: at learn.synchornized.GirlFriend.meet(GirlFriend.java:7)
03: - waiting to lock <0x0000000714173850> (a learn.synchornized.GirlFriend)
04: at learn.synchornized.SynchronizationDemo$BoyFriendThread.run(SynchronizationDemo.java:10)
05: Locked ownable synchronizers:
06: - None

You can see that this BoyFriend thread was put into the BLOCKED state while trying to execute the meet() method (as reported in line #2).

Video

A visual walk-through of this post is also available.

Conclusion

In this post, we learned the basics of the synchronized method. If you want to learn more details about the synchronized method, stay tuned.
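As noted above, here is the promised sketch of the lock-acquisition equivalence (the class name is mine): a synchronized instance method behaves the same as wrapping the method body in a synchronized block on this, which also lets you narrow the critical section if only part of the method needs the lock:

Java
public class GirlFriendWithBlock {

    // Equivalent to declaring the method itself synchronized: a thread must
    // acquire this object's monitor before entering the block, and all other
    // threads attempting to enter are put into the BLOCKED state meanwhile.
    public void meet() {
        synchronized (this) {
            String threadName = Thread.currentThread().getName();
            System.out.println(threadName + " meeting started!");
            System.out.println(threadName + " meeting ended!!");
        }
    }
}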
With the Spring 6 and Spring Boot 3 releases, Java 17+ became the baseline framework version. So now is a great time to start using compact Java records as Data Transfer Objects (DTOs) for various database and API calls. Whether you prefer reading or watching, let's review a few approaches for using Java records as DTOs that apply to Spring Boot 3 with Hibernate 6 as the persistence provider.

Sample Database

Follow these instructions if you'd like to install the sample database and experiment yourself; otherwise, feel free to skip this section.

1. Download the Chinook Database dataset (a music store) in the PostgreSQL syntax.

2. Start an instance of YugabyteDB, a PostgreSQL-compatible distributed database, in Docker:

Shell
mkdir ~/yb_docker_data

docker network create custom-network

docker run -d --name yugabytedb_node1 --net custom-network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:latest \
  bin/yugabyted start \
  --base_dir=/home/yugabyte/yb_data --daemon=false

3. Create the chinook database in YugabyteDB:

Shell
createdb -h 127.0.0.1 -p 5433 -U yugabyte -E UTF8 chinook

4. Load the sample dataset:

Shell
psql -h 127.0.0.1 -p 5433 -U yugabyte -f Chinook_PostgreSql_utf8.sql -d chinook

Next, create a sample Spring Boot 3 application:

1. Generate an application template using Spring Boot 3+ and Java 17+ with Spring Data JPA as a dependency.

2. Add the PostgreSQL driver to the pom.xml file:

XML
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.5.4</version>
</dependency>

3. Provide YugabyteDB connectivity settings in the application.properties file:

Properties files
spring.datasource.url = jdbc:postgresql://127.0.0.1:5433/chinook
spring.datasource.username = yugabyte
spring.datasource.password = yugabyte

All set! Now you're ready to follow the rest of the guide.

Data Model

The Chinook Database comes with many relations, but two tables are more than enough to show how to use Java records as DTOs. The first table is Track; below is the definition of the corresponding JPA entity class:

Java
@Entity
public class Track {
    @Id
    private Integer trackId;

    @Column(nullable = false)
    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "album_id")
    private Album album;

    @Column(nullable = false)
    private Integer mediaTypeId;

    private Integer genreId;

    private String composer;

    @Column(nullable = false)
    private Integer milliseconds;

    private Integer bytes;

    @Column(nullable = false)
    private BigDecimal unitPrice;

    // Getters and setters are omitted
}

The second table is Album, with the following entity class:

Java
@Entity
public class Album {
    @Id
    private Integer albumId;

    @Column(nullable = false)
    private String title;

    @Column(nullable = false)
    private Integer artistId;

    // Getters and setters are omitted
}

In addition to the entity classes, create a Java record named TrackRecord that stores short but descriptive song information:

Java
public record TrackRecord(String name, String album, String composer) {}

Naive Approach

Imagine you need to implement a REST endpoint that returns a short song description. The API needs to provide the song and album names, as well as the composer's name. The previously created TrackRecord class fits the required information, so let's create a record using the naive approach that gets the data via the JPA entity classes:
1. Add the following JPA repository:

Java
public interface TrackRepository extends JpaRepository<Track, Integer> {
}

2. Add a Spring Boot service-level method that creates a TrackRecord instance from the Track entity class, which is retrieved via the TrackRepository instance:

Java
@Transactional(readOnly = true)
public TrackRecord getTrackRecord(Integer trackId) {
    Track track = repository.findById(trackId).get();

    TrackRecord trackRecord = new TrackRecord(
        track.getName(),
        track.getAlbum().getTitle(),
        track.getComposer());

    return trackRecord;
}

The solution looks simple and compact, but it's very inefficient, because Hibernate needs to instantiate two entities first: Track and Album (see the track.getAlbum().getTitle() call). To do this, it generates two SQL queries that request all the columns of the corresponding database tables:

SQL
Hibernate: select t1_0.track_id, t1_0.album_id, t1_0.bytes, t1_0.composer, t1_0.genre_id, t1_0.media_type_id, t1_0.milliseconds, t1_0.name, t1_0.unit_price from track t1_0 where t1_0.track_id=?
Hibernate: select a1_0.album_id, a1_0.artist_id, a1_0.title from album a1_0 where a1_0.album_id=?

Hibernate selects 12 columns across two tables, but TrackRecord needs only three! This is a waste of memory, computing, and networking resources, especially if you use a distributed database like YugabyteDB that scatters data across multiple cluster nodes.

TupleTransformer

The naive approach is easily remediated if you query only the columns the API requires and then transform the query result set into the respective Java record. The Spring Data module of Spring Boot 3 relies on Hibernate 6, which split the old ResultTransformer interface into two interfaces: TupleTransformer and ResultListTransformer. The TupleTransformer interface supports Java records, so the implementation of getTrackRecord(Integer trackId) can be optimized this way:

Java
@Transactional(readOnly = true)
public TrackRecord getTrackRecord(Integer trackId) {
    org.hibernate.query.Query<TrackRecord> query = entityManager.createQuery(
        """
        SELECT t.name, a.title, t.composer
        FROM Track t
        JOIN Album a ON t.album.albumId = a.albumId
        WHERE t.trackId = :id
        """)
        .setParameter("id", trackId)
        .unwrap(org.hibernate.query.Query.class);

    TrackRecord trackRecord = query.setTupleTransformer((tuple, aliases) -> {
        return new TrackRecord(
            (String) tuple[0],
            (String) tuple[1],
            (String) tuple[2]);
    }).getSingleResult();

    return trackRecord;
}

entityManager.createQuery(...) creates a JPA query that requests only the three columns needed for the TrackRecord class. query.setTupleTransformer(...) takes advantage of the TupleTransformer's support for Java records, so a TrackRecord instance can be created directly in the transformer's implementation.

This approach is more efficient than the previous one: you no longer need to instantiate entity classes, and you can easily construct a Java record in the TupleTransformer. Plus, Hibernate generates a single SQL request that returns only the required columns:

SQL
Hibernate: select t1_0.name, a1_0.title, t1_0.composer from track t1_0 join album a1_0 on t1_0.album_id=a1_0.album_id where t1_0.track_id=?

However, there is one very visible downside to this approach: the implementation of getTrackRecord(Integer trackId) became longer and wordier.

Java Record Within a JPA Query

There are several ways to shorten the previous implementation. One is to instantiate the Java record instance within a JPA query.
First, expand the TrackRepository interface with a custom query that creates a TrackRecord instance from the requested database columns:

Java
public interface TrackRepository extends JpaRepository<Track, Integer> {

    @Query("""
        SELECT new com.my.springboot.app.TrackRecord(t.name, a.title, t.composer)
        FROM Track t
        JOIN Album a ON t.album.albumId = a.albumId
        WHERE t.trackId = :id
        """)
    TrackRecord findTrackRecord(@Param("id") Integer trackId);
}

Next, update the implementation of the getTrackRecord(Integer trackId) method:

Java
@Transactional(readOnly = true)
public TrackRecord getTrackRecord(Integer trackId) {
    return repository.findTrackRecord(trackId);
}

The method implementation is now a one-liner that gets a TrackRecord instance straight from the JPA repository: as simple as possible. But that's not all; there is one more small issue. The JPA query that constructs the Java record requires you to provide the full package name of the TrackRecord class:

SQL
SELECT new com.my.springboot.app.TrackRecord(t.name, a.title, t.composer)...

Let's find a way to bypass this requirement. Ideally, the Java record should be instantiable without the package name:

SQL
SELECT new TrackRecord(t.name, a.title, t.composer)...

Hypersistence Utils

The Hypersistence Utils library comes with many goodies for Spring and Hibernate. One feature allows you to create a Java record instance within a JPA query without the package name. Let's enable the library and this records-related feature in the Spring Boot application:

1. Add the library's Maven artifact for Hibernate 6.

2. Create a custom IntegratorProvider that registers the TrackRecord class with Hibernate:

Java
public class ClassImportIntegratorProvider implements IntegratorProvider {

    @Override
    public List<Integrator> getIntegrators() {
        return List.of(new ClassImportIntegrator(List.of(TrackRecord.class)));
    }
}

3. Update the application.properties file by adding this custom IntegratorProvider:

Properties files
spring.jpa.properties.hibernate.integrator_provider=com.my.springboot.app.ClassImportIntegratorProvider

After that, you can update the JPA query of the TrackRepository.findTrackRecord(...) method by removing the Java record's package name from the query string:

Java
@Query("""
    SELECT new TrackRecord(t.name, a.title, t.composer)
    FROM Track t
    JOIN Album a ON t.album.albumId = a.albumId
    WHERE t.trackId = :id
    """)
TrackRecord findTrackRecord(@Param("id") Integer trackId);

It's that simple!

Summary

The latest versions of Java, Spring, and Hibernate include a number of significant enhancements that simplify coding in Java and make it more fun. One such enhancement is built-in support for Java records, which can now easily be used as DTOs in Spring Boot applications. Enjoy!
Contributors:

Nicolas Fränkel, Head of Developer Advocacy, Api7
Shai Almog, OSS Hacker, Developer Advocate and Entrepreneur, Codename One
Marco Behler
Ram Lakshmanan, Architect, yCrash