
JavaScript

JavaScript (JS) is a multi-paradigm programming language, with strong object-oriented support, that allows engineers to build and ship complex features within web browsers. Its versatility makes it the default choice for front-end work unless a project calls for something more specialized. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for front-end engineers.

Latest Refcards and Trend Reports
Trend Report: Modern Web Development
Refcard #363: JavaScript Test Automation Frameworks
Refcard #288: Getting Started With Low-Code Development

DZone's Featured JavaScript Resources

Create Spider Chart With ReactJS

By Omar Urbano
Hi again! I've been publishing new data visualization content here in DZone but never published anything about React. So, now let's see how to create the following JavaScript Spider Chart using ReactJS and the LightningChart JS (LC JS) library. What Is ReactJS? Well, ReactJS is a JavaScript library created by Facebook, developed with great emphasis on implementing user interfaces. With that focus on user interfaces, it is recommended to use ReactJS for the view layer, making use of a model-view-controller pattern. So, for this article, we will do the initial setup of our React project and a brief implementation of LightningChart to show the use of libraries within this project. Let's begin...

1. Installing ReactJS In order to install ReactJS using commands, you will need to have Node.js and the npm command-line interface installed. Additionally, you can visit the official npm documentation page. With npm installed, we can execute the ReactJS installation command. First, we need to open a command prompt as administrator and run the following command: npm i -g create-react-app The above command will download the complete React library. Once React is installed, we will see a list of React commands that serve as a confirmation of successful installation. Now we will create a React project. We will execute the following command from the command prompt: npx create-react-app lc-react-app The lc-react-app will be the default name of our project, but you can change the name without problems. When the project has been created, the path where the project has been stored will be displayed. I recommend cutting and pasting the project into an easily accessible path. For more information about installing React either via the create-react-app command or with Webpack and Babel, check out this Angular vs. React vs. Vue article.

2. Configuring the Project Before we begin, we'll need to install the LightningChart JS (@arction/lcjs) library. Download the ReactJS Spider Chart project template and open it with Visual Studio Code: Your project should look the same as or similar to the image above. Now open a new terminal so you can install LightningChart JS. The following command npm i @arction/lcjs will install the LightningChart JS library into our project. Now, if we execute the npm start command, we will be able to compile and view our page on a local server: ReactJS project compilation on a local server

3. Creating the Spider Chart Before we start with the code for our chart, we have to understand the files we will work with. Unlike Angular, where our views and logic are grouped by component, React starts with a simpler structure. The first thing we will see will be two JS files: index.js and App.js. These files have default names but can be renamed according to your needs. The index file will contain the code that allows us to render the view created by the App.js file. App.js contains our logic in charge of building the object that will be rendered. The CSS files will modify the style of the objects generated in their corresponding JS files. When we create a React project, an App.test.js file is generated. This file corresponds to our App.js file and can be used to test our code using the npm test command. The basic idea is that there is a .test file for each generated JS file. For this project, we will create a new file called SpiderChart.js. This file will contain the code that will generate our spider chart.
We will do it separately in order to keep our chart code organized, instead of embedding everything within the App.js file. A) Importing Necessary Classes We'll start by importing the necessary LightningChart JS classes. The way to import components is the same as that used in Angular. JavaScript import { lightningChart,LegendBoxBuilders, Themes } from '@arction/lcjs' import React, { useRef, useEffect } from 'react' Now we have to create an object that contains our chart and in turn, can be exported to other instances. JavaScript const Chart = (props) => { const { data, id } = props const chartRef = useRef(undefined) useEffect(() => { Using the Effect Hook allows you to run side effects like fetching, direct DOM updating, and timers. Inside the useEffect function, we'll encapsulate all of our spider chart logic. Now, we will assign the object type "Spider" to the constant "chart". When specifying the chart type, we can assign properties as well. For instance, we can specify the look and feel of the component as well as the container where the chart will be displayed. JavaScript const chart = lightningChart().Spider({ theme: Themes.auroraBorealis, container: id }) .setTitle('Company branch efficiency') .setAxisInterval(100) .setScaleLabelStrategy(undefined) .setPadding({ top: 100 }) const series = [ chart.addSeries() .setName('Sydney'), chart.addSeries() .setName('Kuopio'), chart.addSeries() .setName('New York') ] B) Reviewing Properties setTitle: The title for the chart. The title will be displayed by default at the top of the chart. setAxisInterval: Sets the interval of the Charts Axes. Read more in the setAxisInterval documentation. setScaleLabelStrategy: Sets the strategy for drawing scale labels. It defines which position labels are drawn and whether they are flipped or not. Read more in the setScaleLabelStrategy documentation. addSeries: The addSeries function will allow us to create an independent series of data to be displayed on the chart. These series may have independent visual properties and values. JavaScript series.forEach((value, i) => { value .setPointSize(10) .setCursorResultTableFormatter((builder, series, value, axis) => builder.addRow(`${series.getName()} ${axis}`) ) }) setCursorResultTableFormatter: It allows displaying values in the series when the cursor is positioned over the series. setPointSize: specifies the size in pixels of each point. C) Adding Labels to Each Point JavaScript const categories = ['Pre-planning', 'Customer contacts', 'Meetings', 'Development time', 'Releases',] D) Assigning Values to the Series JavaScript series[0].addPoints( { axis: categories[0], value: 6 }, { axis: categories[1], value: 22 }, { axis: categories[2], value: 61 }, { axis: categories[3], value: 76 }, { axis: categories[4], value: 100 }, ) Depending on the series, the number of indexes must be changed. E) Create LegendBox Create LegendBox as part of SpiderChart. JavaScript const legend = chart.addLegendBox(LegendBoxBuilders.HorizontalLegendBox) // Dispose example UI elements automatically if they take too much space. This is to avoid bad UI on mobile / etc. devices. .setAutoDispose({ type: 'max-width', maxWidth: 0.80, }) // Add SpiderChart to LegendBox legend.add(chart) setAutoDispose: Dispose of sample UI elements automatically if they take up too much space. This is to avoid bad UI on mobile, etc. legend.add: Adding the legend box to the chart. F) Return Function Return function will destroy the graphic when the component is unmounted. 
The chart will be stored inside a container (id). The class name "chart" will be used to apply the CSS class located in the App.css file. JavaScript return () => { // Destroy chart. console.log('destroy chart') chartRef.current = undefined } }, [id]) return <div id={id} className='chart'></div> } export default Chart G) Rendering Chart To render our chart object, we need to import it into our App.js file: JavaScript import React, { useEffect, useState } from 'react'; import './App.css' import Chart from './SpiderChart' const App = (props) => { return <div className='fill'> <Chart id='chart'/> </div> } export default App The App constant will return the Chart object. Also, we can apply a CSS Class for the body. The CSS class is located in the App.css file. The App constant will be exported to the index.js file. JavaScript import React from 'react'; import ReactDOM from 'react-dom/client'; import './index.css'; import App from './App'; import reportWebVitals from './reportWebVitals'; const root = ReactDOM.createRoot(document.getElementById('root')); root.render( <React.StrictMode> <App /> </React.StrictMode> ); The last step is to import our App.js into index.js. The way to import/export objects between JS files is similar in almost all cases. For Index files, we need to apply some React properties, because here is where we will manipulate the DOM. React Strict Mode: Strict mode checks are run in development mode only. They do not impact the production build. 4. Final Application In conclusion, ReactJS and LightningChart JS are powerful tools that can be used to create visually appealing and interactive spider charts for your web applications. With ReactJS, you can easily manage your UI components and create a smooth user experience, while LightningChart JS provides the necessary data visualization tools to bring your data to life. Spider charts can be used to represent a wide range of data, from comparing multiple variables on a single chart to tracking progress over time. With the ability to customize your spider charts using ReactJS and LightningChart JS, you can tailor your charts to fit your specific needs and make them as informative as possible. By using these two technologies together, you can create stunning spider charts that are both engaging and easy to use. In case you have any questions, leave a comment with any code snippets, and I'll be happy to help! See you in the next article! :) More
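One note on the cleanup shown above: as written, the return function only logs a message and clears the ref, so the chart itself is never actually released. A minimal sketch of a stricter cleanup, assuming the chart object exposes a dispose() method in the @arction/lcjs version you install (worth verifying against its documentation):
JavaScript
// Inside useEffect, after the chart and series have been built, keep a reference for cleanup.
chartRef.current = chart
return () => {
  // Release the chart's resources when the component unmounts.
  if (chartRef.current) {
    chartRef.current.dispose()
  }
  chartRef.current = undefined
}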
Create a CLI Chatbot With the ChatGPT API and Node.js

By Phil Nash
ChatGPT has taken the world by storm, and this week, OpenAI released the ChatGPT API. I’ve spent some time playing with ChatGPT in the browser, but the best way to really get on board with these new capabilities is to try building something with it. With the API available, now is that time. This was inspired by Greg Baugues’s implementation of a chatbot command line interface (CLI) in 16 lines of Python. I thought I’d start by trying to build the same chatbot but using JavaScript. (It turns out that Ricky Robinett also had this idea and published his bot code here. It’s pleasing to see how similar the implementations are!) The Code It turns out that Node.js requires a bit more code to deal with command line input than Python, so where Greg’s version was 16 lines, mine takes 31. Having built this little bot, I’m no less excited about the potential for building with this API though. Here’s the full code. I’ll explain what it is doing further down. import { createInterface } from "node:readline/promises"; import { stdin as input, stdout as output, env } from "node:process"; import { Configuration, OpenAIApi } from "openai"; const configuration = new Configuration({ apiKey: env.OPENAI_API_KEY }); const openai = new OpenAIApi(configuration); const readline = createInterface({ input, output }); const chatbotType = await readline.question( "What type of chatbot would you like to create? " ); const messages = [{ role: "system", content: chatbotType }]; let userInput = await readline.question("Say hello to your new assistant.\n\n"); while (userInput !== ".exit") { messages.push({ role: "user", content: userInput }); try { const response = await openai.createChatCompletion({ messages, model: "gpt-3.5-turbo", }); const botMessage = response.data.choices[0].message; if (botMessage) { messages.push(botMessage); userInput = await readline.question("\n" + botMessage.content + "\n\n"); } else { userInput = await readline.question("\nNo response, try asking again\n"); } } catch (error) { console.log(error.message); userInput = await readline.question("\nSomething went wrong, try asking again\n"); } } readline.close(); When you run this code, it looks like this: Let’s dig into how it works and how you can build your own. Building a Chatbot You will need an OpenAI platform account to interact with the ChatGPT API. Once you have signed up, create an API key from your account dashboard. As long as you have Node.js installed, the only other thing you’ll need is the openai Node.js module. Let’s start a Node.js project and create this CLI application. First create a directory for the project, change into it, and initialize it with npm: mkdir chatgpt-cli cd chatgpt-cli npm init --yes Install the openai module as a dependency: npm install openai Open package.json and add the key "type": "module" to the configuration, so we can build this as an ES module which will allow us to use top level await. Create a file called index.js and open it in your editor. Interacting With the OpenAI API There are two parts to the code: dealing with input and output on the command line and dealing with the OpenAI API. Let’s start by looking at how the API works. First we import two objects from the openai module, the Configuration and OpenAIApi. The Configuration class will be used to create a configuration that holds the API key, you can then use that configuration to create an OpenAIApi client. 
import { env } from "node:process"; import { Configuration, OpenAIApi } from "openai"; const configuration = new Configuration({ apiKey: env.OPENAI_API_KEY }); const openai = new OpenAIApi(configuration); In this case, we’ll store the API key in the environment and read it with env.OPENAI_API_KEY. To interact with the API, we now use the OpenAI client to create chat completions for us. OpenAI’s text-generating models don’t actually converse with you, but are built to take input and come up with plausible-sounding text that would follow that input, a completion. With ChatGPT, the model is configured to receive a list of messages and then come up with a completion for the conversation. Messages in this system can come from one of 3 different entities, the “system”, “user,” and “assistant.” The “assistant” is ChatGPT itself, the “user” is the person interacting, and the system allows the program (or the user, as we’ll see in this example) to provide instructions that define how the assistant behaves. Changing the system prompts for how the assistant behaves is one of the most interesting things to play around with and allows you to create different types of assistants. With our openai object configured as above, we can create messages to send to an assistant and request a response like this: const messages = [ { role: "system", content: "You are a helpful assistant" }, { role: "user", content: "Can you suggest somewhere to eat in the centre of London?" } ]; const response = await openai.createChatCompletion({ messages, model: "gpt-3.5-turbo", }); console.log(response.data.choices[0].message); // => "Of course! London is known for its diverse and delicious food scene..." As the conversation goes on, we can add the user’s questions and assistant’s responses to the messages array, which we send with each request. That gives the bot history of the conversation, the context for which it can build further answers on. To create the CLI, we just need to hook this up to user input in the terminal. Interacting With the Terminal Node.js provides the Readline module which makes it easy to receive input and write output to streams. To work with the terminal, those streams will be stdin and stdout. We can import stdin and stdout from the node:process module, renaming them to input and output to make them easier to use with Readline. We also import the createInterface function from node:readline import { createInterface } from "node:readline/promises"; import { stdin as input, stdout as output } from "node:process"; We then pass the input and output streams to createInterface and that gives us an object we can use to write to the output and read from the input, all with the question function: const readline = createInterface({ input, output }); const chatbotType = await readline.question( "What type of chatbot would you like to create? " ); The above code hooks up the input and output stream. The readline object is then used to post the question to the output and return a promise. When the user replies by writing into the terminal and pressing return, the promise resolves with the text that the user wrote. Completing the CLI With both of those parts, we can write all of the code. Create a new file called index.js and enter the code below. 
We start with the imports we described above: import { createInterface } from "node:readline/promises"; import { stdin as input, stdout as output, env } from "node:process"; import { Configuration, OpenAIApi } from "openai"; Then we initialize the API client and the Readline module: const configuration = new Configuration({ apiKey: env.OPENAI_API_KEY }); const openai = new OpenAIApi(configuration); const readline = createInterface({ input, output }); Next, we ask the first question of the user: “What type of chatbot would you like to create?” We will use the answer to this to create a “system” message in a new array of messages that we will continue to add to as the conversation goes on. const chatbotType = await readline.question( "What type of chatbot would you like to create? " ); const messages = [{ role: "system", content: chatbotType }]; We then prompt the user to start interacting with the chatbot and start a loop: while the user input is not equal to the string “.exit”, keep sending that input to the API. If the user enters “.exit”, the program will end, like in the Node.js REPL. let userInput = await readline.question("Say hello to your new assistant.\n\n"); while (userInput !== ".exit") { // loop } readline.close(); Inside the loop, we add the userInput to the messages array as a “user” message. Then, within a try/catch block, we send it to the OpenAI API. We set the model as “gpt-3.5-turbo”, which is the underlying name for ChatGPT. When we get a response from the API, we get the message out of the response.data.choices array. If there is a message, we store it as an “assistant” message in the array of messages and output it to the user, waiting for their input again using readline. If there is no message in the response from the API, we alert the user and wait for further user input. Finally, if there is an error making a request to the API, we catch the error, log the message, and tell the user to try again. while (userInput !== ".exit") { messages.push({ role: "user", content: userInput }); try { const response = await openai.createChatCompletion({ messages, model: "gpt-3.5-turbo", }); const botMessage = response.data.choices[0].message; if (botMessage) { messages.push(botMessage); userInput = await readline.question("\n" + botMessage.content + "\n\n"); } else { userInput = await readline.question("\nNo response, try asking again\n"); } } catch (error) { console.log(error.message); userInput = await readline.question( "\nSomething went wrong, try asking again\n" ); } } Put that all together and you have your assistant. The full code is at the top of this post or on GitHub. You can now run the assistant by passing it your OpenAI API key as an environment variable on the command line: OPENAI_API_KEY=YOUR_API_KEY node index.js This will start your interaction with the assistant, starting with it asking what kind of assistant you want. Once you’ve declared that, you can start chatting with it. Experimenting Helps Us to Understand Personally, I’m not actually sure how useful ChatGPT is. It is clearly impressive; its ability to return text that reads as if it was written by a human is incredible. However, it returns content that is not necessarily correct, regardless of how confidently it presents that content. Experimenting with ChatGPT is the only way that we can try to understand what it is useful for, thus building a simple chatbot like this gives us grounds for that experiment.
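As a small illustration of that, the assistant's entire personality comes from the first "system" message. The sketch below is not from the original post; it reuses the openai client configured above to show how two different system messages turn the same API call into two very different bots:
JavaScript
// Two different system prompts, same createChatCompletion call.
const pirateMessages = [
  { role: "system", content: "You are a pirate. Answer every question in pirate speak." },
  { role: "user", content: "How do I reverse a string in JavaScript?" },
];
const reviewerMessages = [
  { role: "system", content: "You are a terse senior code reviewer. Reply in short bullet points." },
  { role: "user", content: "How do I reverse a string in JavaScript?" },
];

// Only the system message differs, but the tone of each reply will be completely different.
const pirateReply = await openai.createChatCompletion({ messages: pirateMessages, model: "gpt-3.5-turbo" });
const reviewerReply = await openai.createChatCompletion({ messages: reviewerMessages, model: "gpt-3.5-turbo" });
console.log(pirateReply.data.choices[0].message.content);
console.log(reviewerReply.data.choices[0].message.content);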
Learning that the system commands can give the bot different personalities and make it respond in different ways is very interesting. You might have heard, for example, that you can ask ChatGPT to help you with programming, but you could also specify a JSON structure and effectively use it as an API as well. But as you experiment with that, you will likely find that it should not be an information API, but more likely something you can use to understand your natural text and turn it into a JSON object. To me, this is exciting, as it means that ChatGPT could help create more natural voice assistants that can translate meaning from speech better than the existing crop that expects commands to be given in a more exact manner. I still have experimenting to do with this idea, and having this tool gives me that opportunity. This Is Just the Beginning If experimenting with this technology is the important thing for us to understand what we can build with it and what we should or should not build with it, then making it easier to experiment is the next goal. My next goal is to expand this tool so that it can save, interact with, and edit multiple assistants so that you can continue to work with them and improve them over time. In the meantime, you can check out the full code for this first assistant in GitHub, and follow the repo to keep up with improvements. More
The YAML Document From Hell — JavaScript Edition
By Phil Nash
Implementing PEG in JavaScript
By Vinod Pahuja
7 Ways of Containerizing Your Node.js Application
By Nikunj Shingala
File Uploads for the Web (2): Upload Files With JavaScript

Welcome back to this series about uploading files to the web. If you missed the first post, I recommend you check it out because it’s all about uploading files via HTML. The full series will look like this: Upload files With HTML Upload files With JavaScript Receiving File Uploads With Node.js (Nuxt.js) Optimizing Storage Costs With Object Storage Optimizing Delivery With a CDN Securing File Uploads With Malware Scans In this article, we’ll do the same thing using JavaScript. Previous Article Info We left the project off with the form that looks like this: <form action="/api" method="post" enctype="multipart/form-data"> <label for="file">File</label> <input id="file" name="file" type="file" /> <button>Upload</button> </form> In the previous article, we learned that in order to access a file on the user’s device, we had to use an <input> with the “file” type. To create the HTTP request to upload the file, we had to use a <form> element. When dealing with JavaScript, the first part is still true. We still need the file input to access the files on the device. However, browsers have a Fetch API we can use to make HTTP requests without forms. I still like to include a form because: Progressive enhancement: If JavaScript fails for whatever reason, the HTML form will still work. I’m lazy: The form will actually make my work easier later on, as we’ll see. With that in mind, for JavaScript to submit this form, I’ll set up a “submit” event handler: const form = document.querySelector('form'); form.addEventListener('submit', handleSubmit); /** @param {Event} event */ function handleSubmit(event) { // The rest of the logic will go here. } handleSubmit Function Throughout the rest of this article, we’ll only be looking at the logic within the event handler function, handleSubmit. The first thing I need to do in this submit handler is call the event’s preventDefault method to stop the browser from reloading the page to submit the form. I like to put this at the end of the event handler so if there is an exception thrown within the body of this function, preventDefault will not be called, and the browser will fall back to the default behavior: /** @param {Event} event */ function handleSubmit(event) { // Any JS that could fail goes here event.preventDefault(); } Next, we’ll want to construct the HTTP request using the Fetch API. The Fetch API expects the first argument to be a URL, and a second, optional argument as an Object. We can get the URL from the form’s action property. It’s available on any form DOM node, which we can access using the event’s currentTarget property. If the action is not defined in the HTML, it will default to the browser’s current URL: /** @param {Event} event */ function handleSubmit(event) { const form = event.currentTarget; const url = new URL(form.action); fetch(url); event.preventDefault(); } Relying on the HTML to define the URL makes it more declarative, keeps our event handler reusable, and our JavaScript bundles smaller. It also maintains functionality if the JavaScript fails. By default, Fetch sends HTTP requests using the GET method, but to upload a file, we need to use a POST method. We can change the method using fetch’s optional second argument. 
I’ll create a variable for that object and assign the method property, but once again, I’ll grab the value from the form’s method attribute in the HTML: const url = new URL(form.action); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, }; fetch(url, fetchOptions); Now the only missing piece is including the payload in the body of the request. If you’ve ever created a Fetch request in the past, you may have included the body as a JSON string or a URLSearchParams object. Unfortunately, neither of those will work to send a file, as they don’t have access to the binary file contents. Fortunately, there is the FormData browser API. We can use it to construct the request body from the form DOM node. And conveniently, when we do so, it even sets the request’s Content-Type header to multipart/form-data; also a necessary step to transmit the binary data: const url = new URL(form.action); const formData = new FormData(form); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, body: formData, }; fetch(url, fetchOptions); Recap That’s really the bare minimum needed to upload files with JavaScript. Let’s do a little recap: Access the file system using a file type input. Construct an HTTP request using the Fetch (or XMLHttpRequest) API. Set the request method to POST. Include the file in the request body. Set the HTTP Content-Type header to multipart/form-data. Today, we looked at a convenient way of doing that, using an HTML form element with a submit event handler, and using a FormData object in the body of the request. The current handleSubmit function should look like this: /** @param {Event} event */ function handleSubmit(event) { const form = event.currentTarget; const url = new URL(form.action); const formData = new FormData(form); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, body: formData, }; fetch(url, fetchOptions); event.preventDefault(); } GET and POST Requests Unfortunately, the current submit handler is not very reusable. Every request will include a body set to a FormData object and a “Content-Type” header set to multipart/form-data. This is too brittle. Bodies are not allowed in GET requests, and we may want to support different content types in other POST requests. We can make our code more robust to handle GET and POST requests, and send the appropriate Content-Type header. We’ll do so by creating a URLSearchParams object in addition to the FormData, and running some logic based on whether the request method should be POST or GET. I’ll try to lay out the logic below: Is the request using a POST method? Yes: Is the form’s enctype attribute multipart/form-data? Yes: set the body of the request to the FormData object. The browser will automatically set the “Content-Type” header to multipart/form-data. No: set the body of the request to the URLSearchParams object. The browser will automatically set the “Content-Type” header to application/x-www-form-urlencoded. No: We can assume it’s a GET request. Modify the URL to include the data as query string parameters.
The refactored solution looks like: /** @param {Event} event */ function handleSubmit(event) { /** @type {HTMLFormElement} */ const form = event.currentTarget; const url = new URL(form.action); const formData = new FormData(form); const searchParams = new URLSearchParams(formData); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, }; if (form.method.toLowerCase() === 'post') { if (form.enctype === 'multipart/form-data') { fetchOptions.body = formData; } else { fetchOptions.body = searchParams; } } else { url.search = searchParams; } fetch(url, fetchOptions); event.preventDefault(); } I really like this solution for a number of reasons: It can be used for any form. It relies on the underlying HTML as the declarative source of configuration. The HTTP request behaves the same as with an HTML form. This follows the principle of progressive enhancement, so file upload works the same when JavaScript is working properly or when it fails. Conclusion So, that’s it. That’s uploading files with JavaScript. I hope you found this useful and plan to stick around for the whole series. In the next article, we’ll move to the back end to see what we need to do to receive files. Thank you so much for reading. If you liked this article, please share it. It's one of the best ways to support me.
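One last practical note that is not in the article itself: because the handler reads the URL, method, and encoding from the form it is attached to, the same function can be wired up to every form on a page. A minimal sketch:
JavaScript
// Reuse the generic handler for all forms; each form's own action, method,
// and enctype attributes drive how its request is built.
document.querySelectorAll('form').forEach((form) => {
  form.addEventListener('submit', handleSubmit);
});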

By Austin Gil
What Is the Temporal Dead Zone In JavaScript?

In JavaScript, the Temporal Dead Zone (TDZ) is a behavior that occurs when trying to access a variable that has been declared but not yet initialized. This behavior can cause unexpected errors in your code if you’re not aware of it, so it’s important to understand how it works. In this blog post, we’ll explore what the Temporal Dead Zone is, why it happens, and how to avoid common pitfalls related to it. What Is the Temporal Dead Zone? The Temporal Dead Zone is a behavior that occurs when trying to access a variable before it has been initialized. When a variable is declared using the let or const keyword, it is hoisted to the top of its scope, but it is not initialized until the line where it was declared is executed. This means that any code that tries to access the variable before that line is executed will result in an error. For example, let’s declare a variable called logicSparkMessage using the let keyword: console.log(logicSparkMessage); // ReferenceError: Cannot access 'logicSparkMessage' before initialization let logicSparkMessage = "Welcome to LogicSpark!"; In this example, we’re trying to log the value of logicSparkMessage before it has been initialized, which results in a ReferenceError. This error occurs because we’re trying to access the variable within its Temporal Dead Zone, which is the time between the variable’s declaration and initialization. Why Does the Temporal Dead Zone Happen? The Temporal Dead Zone happens because of the way variables are hoisted in JavaScript. When a variable is declared using let or const, it is hoisted to the top of its scope, but it is not initialized until the line where it was declared is executed. This means that any code that tries to access the variable before that line is executed will result in an error. How to Avoid Common Pitfalls Related to the Temporal Dead Zone To avoid common pitfalls related to the Temporal Dead Zone, it’s important to always declare your variables before using them and to use the var keyword instead of let or const if you need to access the variable before it has been initialized. For example, let’s declare a variable called logicSparkGreeting using the var keyword: console.log(logicSparkGreeting); // undefined var logicSparkGreeting = "Hello, from LogicSpark!"; console.log(logicSparkGreeting); // "Hello, from LogicSpark!" In this example, we’re declaring a variable called logicSparkGreeting using the var keyword, which is hoisted to the top of its scope and initialized to undefined. When we try to log the value of logicSparkGreeting, it returns undefined. However, after we initialize the variable with a value, we can log its value without any errors. Conclusion The Temporal Dead Zone is a behavior that occurs when trying to access a variable before it has been initialized. It happens because of the way variables are hoisted in JavaScript and can cause unexpected errors if you’re not aware of it. To avoid common pitfalls related to the Temporal Dead Zone, it’s important to always declare your variables before using them and to use the var keyword instead of let or const if you need to access the variable before it has been initialized. By understanding this behavior and taking the necessary precautions, you can write cleaner and more reliable code in your LogicSpark projects.
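One more way to see the dead zone in action (this example is not from the post above, but it follows directly from the behavior it describes): even the normally safe typeof operator throws for a let or const binding that is still in its TDZ, whereas it quietly returns "undefined" for a name that was never declared at all:
JavaScript
console.log(typeof neverDeclared);     // "undefined": no error for a completely undeclared name
console.log(typeof logicSparkTagline); // ReferenceError: Cannot access 'logicSparkTagline' before initialization
let logicSparkTagline = "Still inside the dead zone above this line";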

By Kapil Upadhyay
Mocha JavaScript Tutorial With Examples for Selenium Testing

As per StackOverflow insights, JavaScript is the most popular programming language. As the power of web and mobile is increasing day by day, JavaScript and JavaScript frameworks are becoming more popular. It would not be surprising to hear that JavaScript has become a preference for test automation as well. Over the past few years, a lot of development has happened in the open-source JavaScript based test automation framework development and now we have multiple JavaScript testing frameworks that are robust enough to be used professionally. There are scalable frameworks that can be used by web developers and testers to automate even unit test cases and create complete end-to-end automation test suites. Mocha is one JavaScript testing framework that has been well renowned since 2016, as per StateofJS. With that said, when we talk about JavaScript automation testing, we can’t afford not to loop Selenium into the discussion. So I thought coming up with a step-by-step Mocha testing tutorial on the framework will be beneficial for you to kickstart your JavaScript automation testing with Mocha and Selenium. We will also be looking into how you can run it on the LambdaTest automation testing platform to get a better browser coverage and faster execution times. By the end of this Mocha testing tutorial, you will have a clear understanding of the setup, installation, and execution of your first automation script with Mocha for JavaScript testing. What Will You Learn From This Mocha Testing Tutorial? In this article, we are going to deep dive into Mocha JavaScript testing to perform automated browser testing with Selenium and JavaScript. We will: Start with the installation and prerequisites for the Mocha framework and explore its advantages. Execute our first Selenium JavaScript test through Mocha with examples. Execute group tests. Use the assertion library. Encounter possible issues along with their resolutions. Execute some Mocha test scripts on the Selenium cloud grid platform with minimal configuration changes and tests on various browsers and operating systems. What Makes Mocha Prevalent? Mochajs, or simply Mocha, is a feature-affluent JavaScript test framework that runs test cases on Node.js and in the browser, making testing simple and fun. By running serially, Mocha JavaScript testing warrants flexibility and precise reporting, while mapping uncaught exceptions to the correct test cases. Mocha provides a categorical way to write a structured code for testing the applications by thoroughly classifying them into test suites and test cases modules for execution and to produce a test report after the run by mapping errors to corresponding test cases. What Makes Mocha a Better Choice Compared To Other JavaScript Testing Frameworks? Range of installation methods: It can be installed globally or as a development dependency for the project. Also, it can be set up to run test cases directly on the web browser. Various browser support: Can be used to run test cases seamlessly on all major web browsers and provides many browser-specific methods and options. Each revision of Mocha provides upgraded JavaScript and CSS build for different web browsers. Number of ways to offer test reports: It provides users with a variety of reporting options, like list, progress and JSON, to choose from with a default reporter displaying the output based on the hierarchy of test cases. 
Support for several JavaScript assertion libraries: It helps users cut testing costs and speed up the process by being compatible with a set of JavaScript assertion libraries such as Expect.js, Should.js, and Chai. This multiple-library support makes it easier for testers to write lengthy, complex test cases and use them if everything works fine. Works in TDD and BDD environments: Mocha supports behavior driven development (BDD) and test driven development (TDD), allowing developers to write high quality test cases and enhance test coverage. Support for synchronous and asynchronous testing: Unlike other JavaScript testing frameworks, Mocha is designed with features to fortify asynchronous testing utilizing async/await by invoking the callback once the test is finished. It enables synchronous testing by omitting the callback. Setting Up Mocha and Initial Requirements Before we start our endeavor and explore more of Mocha testing, there are some important prerequisites we need to set up to get started with this Mocha testing tutorial for automation testing with Selenium and JavaScript: Node.js and npm: The Mocha module requires Node.js to be installed on the system. If it is not already present on the system, it can be installed using a version manager or by downloading the installer directly from the official Node.js website. Mocha package module: Once we have successfully installed Node.js on the system, we can make use of the node package manager, i.e., npm, to install the required package, which is Mocha. To install the latest version using the npm command line tool, we will first initialize npm using the below command: $ npm init Next, we will install the Mocha module using npm with the below command: $ npm install -g mocha Here, “-g” installs the module globally; it allows us to access and use the module like a command line tool and does not limit its use to the current project. The --save-dev flag below will place the Mocha executable in our ./node_modules/.bin folder: $ npm install --save-dev mocha We will now be able to run the commands in our command line using the mocha keyword: Java SDK: Since we will be running Selenium, whose server components are built upon Java, we will also install the Java Development Kit (preferably JDK 7.0 or above) on the system and configure the Java environment. Selenium WebDriver: We require a Selenium WebDriver, and that should already be present in our npm node modules. If it is not found in the module, we can install the latest version of Selenium WebDriver using the below command: $ npm install selenium-webdriver Browser driver: Lastly, we will be installing the driver of the specific browser we are going to use. This executable also needs to be placed inside the same bin folder: $ npm install -g chromedriver Writing Our First Mocha JavaScript Testing Script We will create a project directory named mocha_test and then we will create a subfolder named scripts with a test script named single_test.js inside it. Finally, we will initialize our project by hitting the command npm init. This will create a package.json file in an interactive way, which will contain all our required project configurations. It will be required to execute our test script single_test.js.
Finally, we will have a file structure that looks like the below: mocha_test | -- scripts | -- single_test.js | -- package.json { "name": "mocha selenium test sample", "version": "1.0.0", "description": " Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup ", "scripts": { "test": "npm run single", "single": "./node_modules/.bin/mocha scripts/single_test.js", }, "author": "rohit", "license": "" , "homepage": "https://mochajs.org", "keywords": [ "mocha", "bdd", "selenium", "examples", "test", "bdd", "tdd", "tap" "framework" ], "dependencies": { "bluebird": "^3.7.2", "mocha": "^6.2.2", "selenium-webdriver": "^3.6.0" } } You have successfully configured your project and are ready to execute your first Mocha JavaScript testing script.You can now write your first test script in the file single_test.js that was created earlier: var assert = require('assert'); describe('IndexArray', function() { describe('#checkIndex negative()', function() { it('the function should return -1 when the value is not present', function(){ assert.equal(-1, [4,5,6].indexOf(7)); }); }); }); Code Walkthrough of Our Mocha JavaScript Testing Script We will now walk through the test script and understand what exactly is happening in the script we just wrote. When writing any Mocha test case in JavaScript, there are two basic function calls we should remember that does the job for us under the hood. These functions are: describe() it() We have used both of them in the test script we wrote above. describe(): Is mainly used to define the creation of test groups in Mocha in a simple way. The describe() function takes in two arguments as the input. The first argument is the name of the test group, and the second argument is a callback function. We can also have a nested test group in our test as per the requirement of the test case. If we look at our test case now, we see that we have a test group named IndexArray, which has a callback function that has inside it a nested test group named #checkIndex negative() and inside of that, is another callback function that contains our actual test. it(): This function is used for writing individual Mocha JavaScript test cases. It should be written in a layman way conveying what the test does. The It() function also takes in two arguments as the input, the first argument is a string explaining what the test should do, and the second argument is a callback function, which contains our actual test. In the above Mocha JavaScript testing script, we see that we have the first argument of the it() function that is written as “the function should return -1 when the value is not present” and the second argument is a callback function that contains our test condition with the assertion. Assertion: The assertion libraries are used to verify whether the condition given to it is true or false. It verifies the test results with the assert.equal(actual, expected) method and makes the equality tests between our actual and expected parameters. It makes our testing easier by using the Node.js built-in assert module. In our Mocha JavaScript testing script, we are not using the entire assert library as we only require the assert module with one line of code for this Mocha testing tutorial. If the expected parameter equals our actual parameter, the test is passed, and the assert returns true. If it doesn’t equal, then the test fails, and the assert returns false. 
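The describe()/it() structure above also carries over to the asynchronous testing support mentioned earlier. As a short sketch that is not part of this tutorial's test suite, an it() block can either accept a done callback or return a promise from an async function, and Mocha will wait for it to finish:
JavaScript
var assert = require('assert');

describe('AsyncIndexArray', function() {
  // Callback style: Mocha waits until done() is called.
  it('should find the index after a simulated delay', function(done) {
    setTimeout(function() {
      assert.equal(1, [4, 5, 6].indexOf(5));
      done();
    }, 100);
  });

  // Async/await style: Mocha waits for the returned promise to resolve.
  it('should find the index using async/await', async function() {
    const index = await Promise.resolve([4, 5, 6].indexOf(6));
    assert.equal(2, index);
  });
});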
It is important to check whether the below section is present in our package.json file as this contains the configurations of our Mocha JavaScript testing script: "scripts": { "test": "npm run single", "single": "./node_modules/.bin/mocha scripts/single_test.js" }, Finally, we can run our test in the command line and execute from the base directory of the project using the below command: $ npm test or $ npm run single The output of the above test is : This indicates we have successfully passed our test and the assert condition is giving us the proper return value of the function based on our test input passed. Let us extend it further and add one more test case to our test suite and execute the test. Now, our Mocha JavaScript testing script: single_test.js will have one more test that will check the positive scenario of the test and give the corresponding output: var assert = require('assert'); describe('IndexArray', function() { describe('#checkIndex negative()', function() { it('the function should return -1 when the value is not present', function(){ assert.equal(-1, [4,5,6].indexOf(7)); }); }); describe('#checkIndex positive()', function() { it('the function should return 0 when the value is present', function(){ assert.equal(0, [8,9,10].indexOf(8)); }); }); }); The output of the above Mocha JavaScript testing script is : You have successfully executed your first Mocha JavaScript testing script in your local machine for Selenium and JavaScript execution. Note: If you have a larger test suite for cross browser testing with Selenium JavaScript, the execution on local infrastructure is not your best call. Drawbacks of Local Automated Testing Setup As you expand your web application, bring in new code changes, overnight hotfixes, and more. With these new changes, comes new testing requirements, so your Selenium automation testing scripts are bound to go bigger, you may need to test across more browsers, more browser versions, and more operating systems. This becomes a challenge when you perform JavaScript Selenium testing through the local setup. Some of the major pain points of performing Selenium JavaScript testing on the local setup are: There is a limitation that the testing can only be performed locally, i.e., on the browsers that are installed locally in the system. This is not beneficial when there is a requirement to execute cross browser testing and perform the test on all the major browsers available for successful results. The test team might not be aware of all the new browsers versions and the compatibility with them will be tested properly. There is a need to devise a proper cross browser testing strategy to ensure satisfactory test coverage. There arise certain scenarios when it is required to execute tests on some of the legacy browsers or browser versions for a specific set of users and operating systems. It might be necessary to test the application on various combinations of browsers and operating systems, and that is not easily available with the local inhouse system setup. Now, you may be wondering about a way to overcome these challenges. Well, don’t stress too much because an online Selenium Grid is there for your rescue. Executing Mocha Script Using Remote Selenium WebDriver on LambdaTest Selenium Grid Since we know that executing our test script on the cloud grid has great benefits to offer, let us get our hands dirty on the same. The process of executing a script on the LambdaTest Selenium Grid is fairly straightforward and exciting. 
We can execute our local test script by adding a few lines of code that is required to connect to the LambdaTest platform: It gives us the privilege to execute our test on different browsers seamlessly. It has all the popular operating systems and also provides us the flexibility to make various combinations of the operating system and browsers. We can pass on our environment and config details from within the script itself. The test scripts can be executed parallelly and save on executing time. It provides us with an interactive user interface and dashboard to view and analyze test logs. It also provides us the desired capabilities generator with an interactive user interface, which is used to select the environment specification details with various combinations to choose from. So, in our case, the multiCapabilities class in the single.conf.js and parallel.conf.js configuration file will look similar to the below: multiCapabilities: [ { // Desired Capabilities build: "Mocha Selenium Automation Parallel Test", name: "Mocha Selenium Test Firefox", platform: "Windows 10", browserName: "firefox", version: "71.0", visual: false, tunnel: false, network: false, console: false } Next, the most important thing is to generate our access key token, which is basically a secret key to connect to the platform and execute automated tests on LambdaTest. This access key is unique to every user and can be copied and regenerated from the profile section of the user account as shown below. The information regarding the access key, username, and hub can be alternatively fetched from the LambdaTest user profile page Automation dashboard, which looks like the one as mentioned in the screenshot below. Accelerating With Parallel Testing Using LambdaTest Selenium Grid In our demonstration, we will be creating a script that uses the Selenium WebDriver to make a search, open a website, and assert whether the correct website is open. If the assert returns true, it indicates that the test case passed successfully and will show up in the automation logs dashboard. If the assert returns false, the test case fails, and the errors will be displayed in the automation logs. Now, since we are using LambdaTest, we would like to leverage it and execute our tests on different browsers and operating systems. We will execute our test script as below: Single test: On a single environment (Windows 10) and single browser (Chrome). Parallel test: On a parallel environment, i.e., different operating system (Windows 10 and Mac OS Catalina) and different browsers (Chrome, Mozilla Firefox, and Safari). Here we will create a new subfolder in our project directory, i.e., conf. This folder will contain the configurations that are required to connect to the LambdaTest platform. We will create single.conf.js and parallel.conf.js where we need to declare the user configuration, i.e, username and access key along with the desired capabilities for both our single test and parallel test cases. 
Now, we will have a file structure that looks like below: LT_USERNAME = process.env.LT_USERNAME || "irohitgoyal"; // Lambda Test User name LT_ACCESS_KEY = process.env.LT_ACCESS_KEY || "1267367484683738"; // Lambda Test Access key //Configurations var config = { commanCapabilities: { build: "Mocha Selenium Automation Parallel Test", // Build Name to be displayed in the test logs tunnel: false // It is required if we need to run the localhost through the tunnel }, multiCapabilities: [ { // Desired Capabilities , this is very important to configure name: "Mocha Selenium Test Firefox", // Test name that to distinguish amongst test cases platform: "Windows 10", // Name of the Operating System browserName: "firefox", // Name of the browser version: "71.0", // browser version to be used visual: false, // whether to take step by step screenshot, we made it false for now network: false, // whether to capture network logs, we made it false for now console: false // whether to capture console logs, we made it false for now }, { name: "Mocha Selenium Test Chrome", // Test name that to distinguish amongst test cases platform: "Windows 10", // Name of the Operating System browserName: "chrome",// Name of the browser version: "79.0", // browser version to be used visual: false, // // whether to take step by step screenshot, we made it false for now network: false, // // whether to capture network logs, we made it false for now console: false // // whether to capture console logs, we made it false for now }, { name: "Mocha Selenium Test Safari", // Test name that to distinguish amongst test cases platform: "MacOS Catalina", // Name of the Operating System browserName: "safari",// Name of the browser version: "13.0", // browser version to be used visual: false, // // whether to take step by step screenshot, we made it false for now network: false, // // whether to capture network logs, we made it false for now console: false // // whether tocapture console logs., we made it false for now } ] }; exports.capabilities = []; // Code to integrate and support common capabilities config.multiCapabilities.forEach(function(caps) { var temp_caps = JSON.parse(JSON.stringify(config.commanCapabilities)); for (var i in caps) temp_caps[i] = caps[i]; exports.capabilities.push(temp_caps); }); var assert = require("assert"),// declaring assert webdriver = require("selenium-webdriver"), // declaring selenium web driver conf_file = process.argv[3] || "conf/single.conf.js"; // passing the configuration file var caps = require("../" + conf_file).capabilities; // Build the web driver that we will be using in Lambda Test var buildDriver = function(caps) { return new webdriver.Builder() .usingServer( "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub" ) .withCapabilities(caps) .build(); }; // declaring the test group Search Engine Functionality for Single Test Using Mocha in Browser describe("Search Engine Functionality for Single Test Using Mocha in Browser " + caps.browserName, function() { var driver; this.timeout(0); // adding the before an event that triggers before the rest execution beforeEach(function(done) { caps.name = this.currentTest.title; driver = buildDriver(caps); done(); }); // defining the test case to be executed it("should find the required search result in the browser ", function(done) { driver.get("https://www.mochajs.org").then(function() { driver.getTitle().then(function(title) { setTimeout(function() { console.log(title); assert( title.match( "Mocha | The fun simple flexible 
JavaScript test framework | JavaScript | Automated Browser Test" ) != null ); done(); }, 10000); }); }); }); // adding the after event that triggers to check if the test passed or failed afterEach(function(done) { if (this.currentTest.isPassed) { driver.executeScript("lambda-status=passed"); } else { driver.executeScript("lambda-status=failed"); } driver.quit().then(function() { done(); }); }); }); var assert = require("assert"), // declaring assert webdriver = require("selenium-webdriver"), // declaring selenium web driver conf_file = process.argv[3] || "conf/parallel.conf.js"; // passing the configuration file var capabilities = require("../" + conf_file).capabilities; // Build the web driver that we will be using in Lambda Test var buildDriver = function(caps) { return new webdriver.Builder() .usingServer( "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub" ) .withCapabilities(caps) .build(); }; capabilities.forEach(function(caps) { // declaring the test group Search Engine Functionality for Parallel Test Using Mocha in Browser describe("Search Engine Functionality for Parallel Test Using Mocha in Browser " + caps.browserName, function() { var driver; this.timeout(0); // adding the before event that triggers before the rest execution beforeEach(function(done) { caps.name = this.currentTest.title; driver = buildDriver(caps); done(); }); // defining the test case to be executed it("should find the required search result in the browser " + caps.browserName, function(done) { driver.get("https://www.mochajs.org").then(function() { driver.getTitle().then(function(title) { setTimeout(function() { console.log(title); assert( title.match( "Mocha | The fun simple flexible JavaScript test framework | JavaScript | Automated Browser Test" ) != null ); done(); }, 10000); }); }); }); // adding the after event that triggers to check if the test passed or failed afterEach(function(done) { if (this.currentTest.isPassed) { driver.executeScript("lambda-status=passed"); } else { driver.executeScript("lambda-status=failed"); } driver.quit().then(function() { done(); }); }); }); }); Finally, we have our package.json that has an additional added configuration for parallel testing and required files: "scripts": { "test": "npm run single && npm run parallel", "single": "./node_modules/.bin/mocha specs/single_test.js conf/single.conf.js", "parallel": "./node_modules/.bin/mocha specs/parallel_test.js conf/parallel.conf.js --timeout=50000" }, { "name": "mocha selenium automation test sample", "version": "1.0.0", "description": " Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup", "scripts": { "test": "npm run single && npm run parallel", "single": "./node_modules/.bin/mocha scripts/single_test.js conf/single.conf.js", "parallel": "./node_modules/.bin/mocha scripts/parallel_test.js conf/parallel.conf.js --timeout=50000" }, "author": "rohit", "license": "" , "homepage": "https://mochajs.org", "keywords": [ "mocha", "bdd", "selenium", "examples", "test", "bdd", "tdd", "tap" ], "dependencies": { "bluebird": "^3.7.2", "mocha": "^6.2.2", "selenium-webdriver": "^3.6.0" } } The final thing we should do is execute our tests from the base project directory by using the below command: $ npm test This command will validate the test cases and execute our test suite, i.e., the single test and parallel test cases. Below is the output from the command line. 
Now, if we open the LambdaTest platform and check the user interface, we can see the test runs on the Chrome, Firefox, and Safari browsers in the environments specified (Windows 10 and macOS), and that the tests passed successfully. Below is a screenshot showing our Mocha code running on Chrome, Firefox, and Safari on the LambdaTest Selenium Grid. The results of the test script execution, along with the logs, can be accessed from the LambdaTest Automation dashboard. Alternatively, if we want to execute only the single test, we can run: $ npm run single To execute the test cases in parallel across the different environments, run: $ npm run parallel Wrap Up! This concludes our Mocha testing tutorial; we now have a clear idea of what Mocha is and how to set it up. It lets us automate an entire test suite and get started quickly with minimal configuration, while keeping tests readable and easy to update. We can now perform end-to-end tests using grouped tests and the assertion library, and the test results can be fetched directly from the command-line terminal.

By Aditya Dwivedi
What Is JavaScript Slice? Practical Examples and Guide

If you're new to coding, the term 'slice method' may be daunting. Put simply, the slice method is a powerful JavaScript tool that lets you extract sections of an array or string. It's one of those methods that, once you understand and can use, can make your developer life much easier! To start off with the basics, imagine a JavaScript array as a bunch of books on a shelf. The JavaScript slice method allows you to take out one or more books from the shelf without rearranging the remaining ones. It takes two arguments, a start index and an end index, which determine which part of the array will be returned in the new one. Both these indexes are completely optional, so if you leave them blank, they will default to 0 (the first element) and length (the last element). By using this powerful method, you can easily retrieve part of an array or string, create substrings and arrays from existing ones, and even remove elements from a given array without mutating it. As an example, let's say we have an array called sentenceArray containing a sentence broken down into individual words: const sentenceArray = ['The', 'slice', 'method', 'is', 'super', 'useful'] Using the JavaScript slice method with only one argument--sentenceArray.slice(2)--we can create a new array containing all elements starting from index 2: result = ['method', 'is', 'super', 'useful']. Pretty neat, huh? Stay tuned for more practical examples! Syntax and Parameters for the JavaScript Slice Method The JavaScript slice() method is used to select a portion of an array. It copies the selected elements from the original array and returns them as a new array. This makes it easy to pull out only the data you need without having to iterate through the entire array and select the elements by hand. The syntax for using this method looks like this: array.slice(start, end) where start is the index which specifies where to start slicing (default is 0) and end is the index at which to end slicing (defaults to array length). You can also leave out one of either parameter, in which case start or end will default as described above. For example, let's say you have an array of pets, and you want to select only dogs from it. Using the JavaScript slice() method, you could write something like this: const pets = ["dog", "cat", "hamster", "gerbil", "parakeet"]; const dogs = pets.slice(0, 1); // returns ["dog"] Using the JS slice() in this way, you can quickly and easily organize your data however you need it! 3 Practical Use Cases for the Javascript Slice Method Your JavaScript journey isn't complete until you understand the JS slice method. It's a powerful tool that can do lots of amazing things, so let's jump into three practical use cases for the JS slice method. Extracting a Subarray From an Array const arr = [1, 2, 3, 4, 5]; const subArr = arr.slice(2, 4); // [3, 4] In this example, the JS slice() method is used to extract a subarray of arr starting from index 2 and ending at index 4 (exclusive), which gives us the values [3, 4]. Removing Elements From an Array const arr = [1, 2, 3, 4, 5]; const newArr = arr.slice(0, 2).concat(arr.slice(3)); // [1, 2, 4, 5] In this example, the JS slice() method is used to remove the element at index 2 from arr. We first extract a subarray of arr containing the elements before the one we want to remove (that is, [1, 2]) and another subarray containing the elements after the one we want to remove (that is, [4, 5]). We then concatenate these two subarrays using the concat() method to get the new array [1, 2, 4, 5]. 
Extracting a Substring From a String const str = "Hello, world!"; const subStr = str.slice(0, 5); // "Hello" In this example, the JS slice() method is used to extract a substring of str starting from index 0 and ending at index 5 (exclusive), which gives us the string "Hello". Difference Between the slice(), splice() and Substring Methods Have you ever wondered what the difference is between the JS slice(), splice() and substring methods? If so, you're not alone. Let's look at a quick comparison of the three methods to help you understand how they differ. The JavaScript slice() method extracts a part of an array from the starting index to the end index but does not change the existing array, while splice() changes the original array by adding/removing elements from it and substring() extracts characters from a string and does not change the original string. slice(): This method takes two arguments; startIndex and endIndex. It returns a shallow copy of an array that starts at startIndex and ends before endIndex. It copies up to but not including endIndex. If startIndex is undefined, this method will copy all elements from beginning to endIndex; if no arguments are provided, then it will return a shallow copy of the entire array. splice(): This method takes two arguments; startIndex and deleteCount (optional). It removes elements from an array from startIndex up to deleteCount items. It returns an array containing deleted elements or an empty array if no elements were deleted. This method changes the original array as it mutates it by adding/removing specified elements. substring(): This method takes two arguments; startIndex (optional) and endIndex (optional). It returns characters in a string starting at startIndex until before endIndex without altering or changing the original string. If no arguments are provided, then this method returns a copy of the entire string. Best Practices for slice() Method The JavaScript slice() method is a powerful tool for manipulating arrays, but there are some best practices you should know about if you want to get the most out of it. Let’s take a look at a few: Use positive numbers when referring to the index If you need to refer to elements in an array, it's always best to use positive numbers rather than negative numbers (which start from the end of the array). This is because if you later change the size of your array, it might throw off your code if you use negative numbers. Use If Statement for modifying data in an array If you’re looking to modify data within an array, use If Statements instead of slice(). Say If you want to delete an element on index two and update all other elements in the array by subtracting one from their index number; use an if statement combined with splice(). This will give you more control over what happens with each element of your array. Convert strings into arrays before using slice(). If your data is stored as a string rather than an array, convert it into an array before using slice(). This will make it easier and faster for browsers to process the data and give you more precise results when performing slices on strings. Conclusion All in all, the JavaScript Slice method is a great way to quickly and efficiently manipulate and extract data from JavaScript arrays. Not only is it relatively straightforward to use, but it also has some great features, like the ability to work with negative values and the “start” and “end” parameters, making it a very powerful tool. 
It’s important to remember the differences between “slice” and “splice” and to use the right tool for the right job. But with a bit of practice, JavaScript Slice will become an integral part of your web development toolkit, making it easy to control and manipulate data arrays.
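Since the conclusion mentions slice()'s ability to work with negative values but the examples above do not show it, here is a short illustrative sketch (the variable names are just examples): negative indices simply count back from the end of the array or string.
JavaScript
const letters = ["a", "b", "c", "d", "e"];

// A negative start counts from the end of the array
const lastTwo = letters.slice(-2);   // ["d", "e"]

// Negative start and end values can be combined
const middle = letters.slice(1, -1); // ["b", "c", "d"]

// The same rules apply to strings
const fileName = "report.final.pdf";
const extension = fileName.slice(-3); // "pdf"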

By Rahul .
Utilizing Database Hooks Like a Pro in Node.js

In this article, I’ll explain how to use database hooks in your Node.js applications to solve specific problems that might arise in your development journey. Many applications require little more than establishing a connection pool between a server, database, and executing queries. However, depending on your application and database deployments, additional configurations might be necessary. For example, multi-region distributed SQL databases can be deployed with different topologies depending on the application use case. Some topologies require setting properties on the database on a per-session basis. Let’s explore some of the hooks made available by some of the most popular database clients and ORMs in the Node.js ecosystem. Laying the Foundation The Node.js community has many drivers to choose from when working with the most popular relational databases. Here, I’m going to focus on PostgreSQL-compatible database clients, which can be used to connect to YugabyteDB or another PostgreSQL database. Sequelize, Prisma, Knex and node-postgres are popular clients with varying feature sets depending on your needs. I encourage you to read through their documentation to determine which best suits your needs. These clients come with hooks for different use cases. For instance: Connection hooks: Execute a function immediately before or after connecting and disconnecting from your database. Logging hooks: Log messages to stdout at various log levels. Lifecycle hooks: Execute a function immediately before or after making calls to the database. In this article, I’ll cover some of the hooks made available by these clients and how you can benefit from using them in your distributed SQL applications. I’ll also demonstrate how to use hooks to hash a user's password before creation and how to set runtime configuration parameters after connecting to a multi-region database with read replicas. Sequelize The Sequelize ORM has a number of hooks for managing the entire lifecycle of your database transactions. The beforeCreate lifecycle hook can be used to hash a password before creating a new user: JavaScript User.beforeCreate(async (user, options) => { const hashedPassword = await hashPassword(user.password); user.password = hashedPassword; }); Next, I’m using the afterConnect connection hook to set session parameters. With this YugabyteDB deployment, you can execute reads from followers to reduce latencies, and eliminate the need to read from the primary cluster nodes: JavaScript const config = { host: process.env.DB_HOST, port: 5433, dialect: "postgres", dialectOptions: { ssl: { require: true, rejectUnauthorized: true, ca: [CERTIFICATE], }, }, pool: { max: 5, min: 1, acquire: 30000, idle: 10000, }, hooks: { async afterConnect(connection) { if (process.env.DB_DEPLOYMENT_TYPE === "multi_region_with_read_replicas") { await connection.query("set yb_read_from_followers = true; set session characteristics as transaction read only;"); } }, }, }; const connection = new Sequelize( process.env.DATABASE_NAME, process.env.DATABASE_USER, process.env.DATABASE_PASSWORD, config ); By using this hook, each database session in the connection pool will set these parameters upon establishing a new connection: set yb_read_from_followers = true;: This parameter controls whether or not reading from followers is enabled. set session characteristics as transaction read only;: This parameter applies the read-only setting to all statements and transaction blocks that follow. 
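One note on the snippets above and below: they call a hashPassword() helper that is not defined in the article and is not part of Sequelize or Prisma. A minimal sketch of such a helper, assuming the commonly used bcrypt package, might look like this:
JavaScript
const bcrypt = require("bcrypt");

// Hash a plain-text password before it is stored.
// A cost factor of 10 is a common default; tune it for your hardware.
async function hashPassword(plainTextPassword) {
  const saltRounds = 10;
  return bcrypt.hash(plainTextPassword, saltRounds);
}

// At login time, compare a candidate password with the stored hash.
async function verifyPassword(plainTextPassword, storedHash) {
  return bcrypt.compare(plainTextPassword, storedHash);
}

module.exports = { hashPassword, verifyPassword };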
Prisma Despite being the ORM of choice for many in the Node.js community, at the time of writing, Prisma doesn’t contain many of the built-in hooks found in Sequelize. Currently, the library contains hooks to handle the query lifecycle, logging, and disconnecting, but offers no help before or after establishing connections. Here’s how you can use Prisma’s lifecycle middleware to hash a password before creating a user: JavaScript prisma.$use(async (params, next) => { if (params.model == 'User' && params.action == 'create') { params.args.data.password = await hashPassword(params.args.data.password); } return next(params) }) const create = await prisma.user.create({ data: { username: 'bhoyer', password: 'abc123' }, }) To set session parameters to make use of our read replicas, we’ll have to execute a statement before querying our database: JavaScript await prisma.$executeRaw(`set yb_read_from_followers = true; set session characteristics as transaction read only;`); const users = await prisma.user.findMany(); If you need to immediately establish a connection in your connection pool to set a parameter, you can connect explicitly with Prisma to forgo the lazy connection typical of connection pooling. Prisma has the log levels of query , error, info, and warn. Queries can be handled as events using event-based logging: JavaScript const prisma = new PrismaClient({ log: [ { emit: 'event', level: 'query', }, { emit: 'stdout', level: 'error', }, { emit: 'stdout', level: 'info', }, { emit: 'stdout', level: 'warn', }, ], }); prisma.$on('query', (e) => { console.log('Query: ' + e.query); console.log('Params: ' + e.params); console.log('Duration: ' + e.duration + 'ms'); }); This can be helpful in development when working on query tuning in a distributed system. Here’s how you can make use of the beforeExit hook to access the database before exiting: JavaScript const prisma = new PrismaClient(); prisma.$on('beforeExit', async () => { // PrismaClient still available await prisma.issue.create({ data: { message: 'Connection exiting.' }, }) }); Knex Knex is a lightweight query builder, but it does not have the query middleware found in more full-featured ORMs. To hash a password, you can process this manually using a custom function: JavaScript async function handlePassword(password) { const hashedPassword = await hashPassword(password); return hashedPassword; } const password = await handlePassword(params.password); knex('users').insert({...params, password}); The syntax required to achieve a connection hook in the Knex.js query builder is similar to that of Sequelize. Here’s how we can set our session parameters to read from YugabyteDB’s replica nodes: JavaScript const knex = require('knex')({ client: 'pg', connection: {/*...*/}, pool: { afterCreate: function (connection, done) { connection.query('set yb_read_from_followers = true; set session characteristics as transaction read only;', function (err) { if (err) { //Query failed done(err, conn); } else { console.log("Reading from replicas."); done(); } }); } } }); node-postgres The node-postgres library is the most low-level of all of the libraries discussed. Under the hood, the Node.js EventEmitter is used to emit connection events. A connect event is triggered when a new connection is established in the connection pool. Let’s use it to set our session parameters. 
I’ve also added an error hook to catch and log all error messages: JavaScript const config = { user: process.env.DB_USER, host: process.env.DB_HOST, password: process.env.DB_PASSWORD, port: 5433, database: process.env.DB_NAME, min: 1, max: 10, idleTimeoutMillis: 5000, connectionTimeoutMillis: 5000, ssl: { rejectUnauthorized: true, ca: [CERTIFICATE], servername: process.env.DB_HOST, } }; const pool = new Pool(config); pool.on("connect", (c) => { c.query("set yb_read_from_followers = true; set session characteristics as transaction read only;"); }); pool.on("error", (e) => { console.log("Connection error: ", e); }); There aren’t any lifecycle hooks at our disposal with node-postgres, so hashing our password will have to be done manually, like with Prisma: JavaScript async function handlePassword(password) { const hashedPassword = await hashPassword(password); return hashedPassword; } const password = await handlePassword(params.password); const user = await pool.query('INSERT INTO user(username, password) VALUES ($1, $2) RETURNING *', [params.username, password]); Wrapping Up As you can see, hooks can solve a lot of the problems previously addressed by complicated and error-prone application code. Each application has a different set of requirements and brings new challenges. You might go years before you need to utilize a particular hook in your development process, but now, you’ll be ready when that day comes. Look out for more from me on Node.js and distributed application development. Until then, keep on coding!

By Brett Hoyer
Deploy a Node.js App to AWS in an EC2 Server

There are multiple ways you can deploy your Nodejs app, be it On-Cloud or On-Premises. However, it is not just about deploying your application, but deploying it correctly. Security is also an important aspect that must not be ignored, and if you do so, the application won’t stand long, meaning there is a high chance of it getting compromised. Hence, here we are to help you with the steps to deploy a Nodejs app to AWS. We will show you exactly how to deploy a Nodejs app to the server using Docker containers, RDS Amazon Aurora, Nginx with HTTPS, and access it using the Domain Name. Tool Stack To Deploy a Node.js App to AWS Nodejs sample app: A sample Nodejs app with three APIs viz, status, insert, and list. These APIs will be used to check the status of the app, insert data in the database and fetch and display the data from the database. AWS EC2 instance: An Ubuntu 20.04 LTS Amazon Elastic Compute Cloud (Amazon EC2) instance will be used to deploy the containerized Nodejs App. We will install Docker in this instance on top of which the containers will be created. We will also install a MySQL Client on the instance. A MySQL client is required to connect to the Aurora instance to create a required table. AWS RDS Amazon Aurora: Our data will be stored in AWS RDS Amazon Aurora. We will store simple fields like username, email-id, and age will be stored in the AWS RDS Amazon Aurora instance.Amazon Aurora is a MySQL and PostgreSQL-compatible relational database available on AWS. Docker: Docker is a containerization platform to build Docker Images and deploy them using containers. We will deploy a Nodejs app to the server, Nginx, and Certbot as Docker containers. Docker-Compose: To spin up the Nodejs, Nginx, and Certbot containers, we will use Docker-Compose. Docker-Compose helps reduce container deployment and management time. Nginx: This will be used to enable HTTPS for the sample Nodejs app and redirect all user requests to the Nodejs app. It will act as a reverse proxy to redirect user requests to the application and help secure the connection by providing the configuration to enable SSL/HTTPS. Certbot: This will enable us to automatically use “Let’s Encrypt” for Domain Validation and issuing SSL certificates. Domain: At the end of the doc, you will be able to access the sample Nodejs Application using your domain name over HTTPS, i.e., your sample Nodejs will be secured over the internet. PostMan: We will use PostMan to test our APIs, i.e., to check status, insert data, and list data from the database. As I said, we will “deploy a Nodejs app to the server using Docker containers, RDS Amazon Aurora, Nginx with HTTPS, and access it using the Domain Name.” Let’s first understand the architecture before we get our hands dirty. Architecture Deploying a Nodejs app to an EC2 instance using Docker will be available on port 3000. This sample Nodejs app fetches data from the RDS Amazon Aurora instance created in the same VPC as that of the EC2 instance. An Amazon Aurora DB instance will be private and, hence, accessible within the same VPC. The Nodejs application deployed on the EC2 instance can be accessed using its public IP on port 3000, but we won’t. Accessing applications on non-standard ports is not recommended, so we will have Nginx that will act as a Reverse Proxy and enable SSL Termination. Users will try to access the application using the Domain Name and these requests will be forwarded to Nginx. 
Nginx will check the request, and, based on the API, it will redirect that request to the Nodejs app. The application will also be terminated with the SSL. As a result, the communication between the client and server will be secured and protected. Here is the architecture diagram that gives the clarity of deploying a Nodejs app to AWS: Prerequisites Before we proceed to deploying a Nodejs app to AWS, it is assumed that you already have the following prerequisites: AWS account PostMan or any other alternative on your machine to test APIs. A registered Domain in your AWS account. Create an Ubuntu 20.04 LTS EC2 Instance on AWS Go to AWS’ management console sign-in page and log into your account. After you log in successfully, go to the search bar and search for “EC2.” Next, click on the result to visit the EC2 dashboard to create an EC2 instance: Here, click on “Launch instances” to configure and create an EC2 instance: Select the “Ubuntu Server 20.04 LTS” AMI: I would recommend you select t3.small only for test purposes. This will have two CPUs and 2GB RAM. You can choose the instance type as per your need and choice: You can keep the default settings and proceed ahead. Here, I have selected the default VPC. If you want, you can select your VPC. Note: Here, I will be creating an instance in the public subnet: It’s better to put a larger disk space at 30GB. The rest can be the default: Assign a “Name” and “Environment” tag to any values of your choice. You may even skip this step: Allow the connection to port 22 only from your IP. If you allow it from 0.0.0.0/0, your instance will allow anyone on port 22: Review the configuration once, and click on “Launch” if everything looks fine to create an instance: Before the instance gets created, it needs a key-pair. You can create a new key-pair or use the existing one. Click on the “Launch instances” button that will initiate the instance creation: To go to the console and check your instance, click on the “View instances” button: Here, you can see that the instance has been created and is in the “Initiating” phase. Within a minute or two, you can see your instance up and running. Meanwhile, let’s create an RDS instance: Create an RDS Aurora With a MySQL Instance on AWS Go to the search bar at the top of the page and search for “RDS.” Click on the result to visit the “RDS Dashboard.” On the RDS Dashboard, click on the “Create database” button to configure and create the RDS instance: Choose the “Easy create” method, “Amazon Aurora” engine type, and the “Dev/Test” DB instance size as follows: Scroll down a bit and specify the “DB cluster identifier” as “my-Nodejs-database.” You can specify any name of your choice as it is just a name given to the RDS instance; however, I would suggest using the same name so you do not get confused while following the next steps. Also, specify a master username as “admin,” its password, and then click on “Create database.” This will initiate the RDS Amazon Aurora instance creation. Note: For production or live environments, you must not set simple usernames and passwords: Here, you can see that the instance is in the “Creating” state. In around 5-10 minutes, you should have the instance up and running: Make a few notes here: The RDS Amazon Aurora instance will be private by default, which means the RDS Amazon Aurora instance will not be reachable from the outside world and will only be available within the VPC. The EC2 instance and RDS instance belong to the same VPC. 
The RDS instance is reachable from the EC2 instance. Install Dependencies on the EC2 Instance Now, you can connect to the instance we created. I will not go into the details of how to connect to the instance; I believe you already know how. MySQL Client We will need a MySQL client to connect to the RDS Amazon Aurora instance and create a database in it. Connect to the EC2 instance and execute the following commands from it: sudo apt update sudo apt install mysql-client Create a Table We will need a table in our RDS Amazon Aurora instance to store our application data. To create a table, connect to the Amazon RDS Aurora instance using the MySQL client we installed on the EC2 instance in the previous step. Copy the Database Endpoint from the Amazon Aurora Instance: Execute the following command with the correct values: mysql -u <user-name> -p<password> -h <host-endpoint> Here, my command looks as follows: mysql -u admin -padmin1234 -h (here). Once you are connected to the Amazon RDS Aurora instance, execute the following commands to create a table named "users:" show databases; use main; CREATE TABLE IF NOT EXISTS users(id int NOT NULL AUTO_INCREMENT, username varchar(30), email varchar(255), age int, PRIMARY KEY(id)); select * from users; Refer to the following screenshot to understand the command executions: Create an Application Directory Now, let's create a directory where we will store all our codebase and configuration files: pwd cd /home/ubuntu/ mkdir Nodejs-docker cd Nodejs-docker Clone the Code Repository on the EC2 Instance Clone my Github repository containing all the code. This is an optional step; I have included all the code in this document: pwd cd /home/ubuntu/ git clone cp /home/ubuntu/DevOps/AWS/Nodejs-docker/* /home/ubuntu/Nodejs-docker Note: This is an optional step. If you copy all the files from the repository to the application directory, you do not need to create files in the upcoming steps; however, you will still need to make the necessary changes. Deploying Why Should You Use Docker in Your EC2 Instance? Docker is a containerization tool used to package our software application into an image that can be used to create Docker containers. Docker helps to build, share, and deploy our applications easily. The first step of Dockerization is installing Docker: Install Docker Check the Linux version: cat /etc/issue Update the apt package index: sudo apt-get update Install packages to allow apt to use a repository over HTTPS: sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release Add Docker's official GPG key: curl -fsSL (here) | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg Set up the stable repository: echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] (here) $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null Update the apt package index: sudo apt-get update Install the latest version of Docker Engine and containerd: sudo apt-get install docker-ce docker-ce-cli containerd.io Check the Docker version: docker --version Manage Docker as a non-root user: Create the 'docker' group: sudo groupadd docker Add your user to the docker group: sudo usermod -aG docker <your-user-name> Exit: exit Log back in to the terminal. Verify that you can run Docker commands without sudo: docker run hello-world Upon executing the above run command, you should see output like the following:
Refer to the following screenshot to see the command that I have executed: Dockerize Your Node.js Application in the EC2 Instance Once you have Docker installed, the next step is to Dockerize the app. Dockerizing a Nodejs app means writing a Dockerfile with a set of instructions to create a Docker Image. Let’s create Dockerfile and a sample Nodejs app: pwd cd /home/ubuntu/Nodejs-docker Create Dockerfile and paste the following it it; alternatively, you can copy the content from my GitHub repository here: vim Dockerfile: #Base Image node:12.18.4-alpine FROM node:12.18.4-alpine #Set working directory to /app WORKDIR /app #Set PATH /app/node_modules/.bin ENV PATH /app/node_modules/.bin:$PATH #Copy package.json in the image COPY package.json ./ #Install Packages RUN npm install express --save RUN npm install mysql --save #Copy the app COPY . ./ #Expose application port EXPOSE 3000 #Start the app CMD ["node", "index.js"] Create index.js and paste the following in it; alternatively, you can copy the content from my GitHub repository here. This will be our sample Nodejs app: vim index.js: const express = require('express'); const app = express(); const port = 3000; const mysql = require('mysql'); const con = mysql.createConnection({ host: "my-Nodejs-database.cluster-cxxjkzcl1hwb.eu-west3.rds.amazonAWS.com", user: "admin", password: "admin1234" }); app.get('/status', (req, res) => res.send({status: "I'm up and running"})); app.listen(port, () => console.log(`Dockerized Nodejs Applications is listening on port ${port}!`)); app.post('/insert', (req, res) => { if (req.query.username && req.query.email && req.query.age) { console.log('Received an insert call'); con.connect(function(err) { con.query(`INSERT INTO main.users (username, email, age) VALUES ('${req.query.username}', '${req.query.email}', '${req.query.age}')`, function(err, result, fields) { if (err) res.send(err); if (result) res.send({username: req.query.username, email: req.query.email, age: req.query.age}); if (fields) console.log(fields); }); }); } else { console.log('Something went wrong, Missing a parameter'); } }); app.get('/list', (req, res) => { console.log('Received a list call'); con.connect(function(err) { con.query(`SELECT * FROM main.users`, function(err, result, fields) { if (err) res.send(err); if (result) res.send(result); }); }); }); In the above file, change the values of the following variables with the one applicable to your RDS Amazon Aurora instance: host: (here) user: “admin” password: “admin1234” Create package.json and paste the following in it; alternatively, you can copy the content from my GitHub repository here: vim package.json: { “name”: “Nodejs-docker”, “version”: “12.18.4”, “description”: “Nodejs on ec2 using docker container”, “main”: “index.js”, “scripts”: { “test”: “echo \”Error: no test specified\” && exit 1″ }, “author”: “Rahul Shivalkar”, “license”: “ISC” } Update the AWS Security Group To access the application, we need to add a rule in the “Security Group” to allow connections on port 3000. As I said earlier, we can access the application on port 3000, but it is not recommended. Keep reading to understand our recommendations: 1. Go to the “EC2 dashboard,” select the instance, switch to the “Security” tab, and then click on the “Security groups link:” 2. Select the “Inbound rules” tab and click on the “Edit inbound rules” button: 3. 
Add a new rule that will allow external connection from “MyIp” on the “3000” port: Deploy the Node.js Server on the EC2 Server (Instance) Let’s build a Docker image from the code that we have: cd /home/ubuntu/Nodejs-docker docker build -t Nodejs: 2. Start a container using the image we just build and expose it on port 3000: docker run –name Nodejs -d -p 3000:3000 Nodejs 3. You can see the container is running: docker ps 4. You can even check the logs of the container: docker logs Nodejs Now we have our Nodejs App Docker Container running. 5. Now, you can access the application from your browser on port 3000: Check the status of the application on /status api using the browser: You can insert some data in the application on /insert API using the Postman app using POST request: You can list the data from your application by using /list API from the browser: Alternatively, you can use the curl command from within the EC2 instance to check status, insert data, list data: curl -XGET “here” curl -XPOST “here” Stop and remove the container: docker stop Nodejs docker rm Nodejs In this section, we tried to access APIs available for the application directly using the Public IP:Port of the EC2 instance. However, exposing non-standard ports to the external world in the Security Group is not at all recommended. Also, we tried to access the application over the HTTP protocol, which means the communication that took place from the “Browser” to the “Application” was not secure and an attacker can read the network packets. To overcome this scenario, it is recommended to use Nginx. Nginx Setup Let’s create an Nginx conf that will be used within the Nginx container through a Docker Volume. Create a file and copy the following content in the file; alternatively, you can copy the content from here as well: cd /home/ubuntu/Nodejs-docker mkdir nginx-conf vim nginx-conf/nginx.conf server { listen 80; listen [::]:80; location ~ /.well-known/acme-challenge { allow all; root /var/www/html; } location / { rewrite ^ https://$host$request_uri? permanent; } } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name Nodejs.devopslee.com www.Nodejs.devopslee.com; server_tokens off; ssl_certificate /etc/letsencrypt/live/Nodejs.devopslee.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/Nodejs.devopslee.com/privkey.pem; ssl_buffer_size 8k; ssl_dhparam /etc/ssl/certs/dhparam-2048.pem; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; ssl_prefer_server_ciphers on; ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5; ssl_ecdh_curve secp384r1; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; resolver 8.8.8.8; location / { try_files $uri @Nodejs; } location @Nodejs { proxy_pass http://Nodejs:3000; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-XSS-Protection "1; mode=block" always; add_header X-Content-Type-Options "nosniff" always; add_header Referrer-Policy "no-referrer-when-downgrade" always; add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always; } root /var/www/html; index index.html index.htm index.nginx-debian.html; } In the above file, make changes in the three lines mentioned below. Replace my subdomain.domain, i.e., Nodejs.devopslee, with the one that you want and have: server_name: (here) ssl_certificate: /etc/letsencrypt/live/Nodejs.devopslee.com/fullchain.pem; ssl_certificate_key: /etc/letsencrypt/live/Nodejs.devopslee.com/privkey.pem; Why do you need Nginx in front of the node.js service? 
Our Nodejs application runs on a non-standard port 3000. Nodejs provides a way to use HTTPS; however, configuring the protocol and managing SSL certificates that expire periodically within the application code base, is something we should not be concerned about. To overcome these scenarios, we need to have Nginx in front of it with an SSL termination and forward user requests to Nodejs. Nginx is a special type of web server that can act as a reverse proxy, load balancer, mail proxy, and HTTP cache. Here, we will be using Nginx as a reverse proxy to redirect requests to our Nodejs application and have SSL termination. Why not Apache? Apache is also a web server and can act as a reverse proxy. It also supports SSL termination; however, there are a few things that differentiate Nginx from Apache. Due to the following reasons, mostly Nginx is preferred over Apache. Let’s see them in short: Nginx has a single or a low number of processes, is asynchronous and event-based, whereas Apache tries to make new processes and new threads for every request in every connection. Nginx is lightweight, scalable, and easy to configure. On the other hand, Apache is great but has a higher barrier to learning. Docker-Compose Let’s install docker-compose as we will need it: Download the current stable release of Docker Compose: sudo curl -L “(uname -s)-$(uname -m)” -o /usr/local/bin/docker-compose Apply executable permissions to the docker-composebinary we just downloaded in the above step: sudo chmod +x /usr/local/bin/docker-compose Test to see if the installation was successful by checking the docker-composeversion: docker-compose –version Create a docker-compose.yaml file; alternatively, you can copy the content from my GitHub repository here. This will be used to spin the Docker containers of our application tech stack we have: cd /home/ubuntu/Nodejs-docker vim docker-compose.yml version: '3' services: Nodejs: build: context: . dockerfile: Dockerfile image: Nodejs container_name: Nodejs restart: unless-stopped networks: - app-network webserver: image: nginx:mainline-alpine container_name: webserver restart: unless-stopped ports: - "80:80" - "443:443" volumes: - web-root:/var/www/html - ./nginx-conf:/etc/nginx/conf.d - certbot-etc:/etc/letsencrypt - certbot-var:/var/lib/letsencrypt - dhparam:/etc/ssl/certs depends_on: - Nodejs networks: - app-network certbot: image: certbot/certbot container_name: certbot volumes: - certbot-etc:/etc/letsencrypt - certbot-var:/var/lib/letsencrypt - web-root:/var/www/html depends_on: - webserver command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --staging -d Nodejs.devopslee.com -d www.Nodejs.devopslee.com #command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --force-renewal -d Nodejs.devopslee.com -d www.Nodejs.devopslee.com volumes: certbot-etc: certbot-var: web-root: driver: local driver_opts: type: none device: /home/ubuntu/Nodejs-docker/views/ o: bind dhparam: driver: local driver_opts: type: none device: /home/ubuntu/Nodejs-docker/dhparam/ o: bind networks: app-network: driver: bridge In the above file, make changes in the line mentioned below. Replace my subdomain.domain, i.e., Nodejs.devopslee, with the one you want and have. Change IP for your personal email: –email EMAIL: Email used for registration and recovery contact. 
command: certonly –webroot –webroot-path=/var/www/html –email my@email.com –agree-tos –no-eff-email –staging -d Nodejs.devopslee.com -d www.Nodejs.devopslee.com Update the AWS Security Groups This time, expose ports 80 and 443 in the security group attached to the EC2 instance. Also, remove 3000 since it is not necessary because the application works through port 443: Include the DNS change Here, I have created a sub-domain “here” that will be used to access the sample Nodejs application using the domain name rather than accessing using an IP. You can create your sub-domain on AWS if you already have your domain: Create 2 “Type A Recordsets” in the hosted zone with a value as EC2 instances’ public IP. One Recordset will be “subdomain.domain.com” and the other will be “www.subdomain.domain.com.” Here, I have created “Nodejs.devopslee.com” and “www.Nodejs.devopslee.com,” both pointing to the Public IP of the EC2 instance. Note: I have not assigned any Elastic IP to the EC2 instance. It is recommended to assign an Elastic IP and then use it in the Recordset so that when you restart your EC2 instance, you don’t need to update the IP in the Recordset because public IPs change after the EC2 instance is restarted. Now, copy values of the “Type NS Recordset” we will need these in the next steps: Go to the “Hosted zone” of your domain and create a new “Record” with your “subdomain.domain.com” adding the NS values you copied in the previous step: Now, you have a sub-domain that you can use to access your application. In my case, I can use “Nodejs.devopslee.com” to access the Nodejs application. We are not done yet. Now, the next step is to secure our Nodejs web application. Include the SSL Certificate Let’s generate our key that will be used in Nginx: cd /home/ubuntu/Nodejs-docker mkdir views mkdir dhparam sudo openssl dhparam -out /home/ubuntu/Nodejs-docker/dhparam/dhparam-2048.pem 2048 Deploy Nodejs App to EC2 Instance We are all set to start our Nodejs app using docker-compose. This will start our Nodejs app on port 3000, Nginx with SSL on port 80 and 443. Nginx will redirect requests to the Nodejs app when accessed using the domain. It will also have a Certbot client that will enable us to obtain our certificates. docker-compose up After you hit the above command, you will see some output as follows. You must see a message as “Successfully received certificates.” Note: The above docker-compose command will start containers and will stay attached to the terminal. We have not used the -d option to detach it from the terminal: You are all set, now hit the URL in the browser and you should have your Nodejs application available on HTTPS: You can also try to hit the application using the curl command: List the data from the application: curl (here) Insert an entry in the application: curl -XPOST (here) Again list the data to verify if the data has been inserted or not: curl (here) Check the status of the application: (Here) Hit the URL in the browser to get a list of entries in the database: (Here) Auto-Renewal of SSL Certificates Certificates we generate using “Let’s Encrypt” are valid for 90 days, so we need to have a way to renew our certificates automatically so that we don’t end up with expired certificates. To automate this process, let’s create a script that will renew certificates for us and a cronjob to schedule the execution of this script. 
Create a script with the --dry-run flag to test it: vim renew-cert.sh #!/bin/bash COMPOSE="/usr/local/bin/docker-compose --no-ansi" DOCKER="/usr/bin/docker" cd /home/ubuntu/Nodejs-docker/ $COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver $DOCKER system prune -af Change the permissions of the script to make it executable: chmod 774 renew-cert.sh Create a cronjob: sudo crontab -e */5 * * * * /home/ubuntu/Nodejs-docker/renew-cert.sh >> /var/log/cron.log 2>&1 List the cronjobs: sudo crontab -l Check the logs of the cronjob after five minutes, as we have set the cronjob to be executed every fifth minute: tail -f /var/log/cron.log In the above screenshot, you can see a "Simulating renewal of an existing certificate…" message. This is because we have specified the "--dry-run" option in the script. Let's remove the "--dry-run" option from the script: vim renew-cert.sh #!/bin/bash COMPOSE="/usr/local/bin/docker-compose --no-ansi" DOCKER="/usr/bin/docker" cd /home/ubuntu/Nodejs-docker/ $COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver $DOCKER system prune -af This time you won't see the "Simulating renewal of an existing certificate…" message. Instead, the script will check whether the certificates need to be renewed; if so, it will renew them, otherwise it will report "Certificates not yet due for renewal." What Is Next on How To Deploy the Nodejs App to AWS? We are done with setting up our Nodejs application using Docker on an AWS EC2 instance; however, there are other things that come into the picture when you want to deploy a highly available application for production and other environments. The next step is to use an orchestrator, like ECS or EKS, to manage our Nodejs application at the production level. Replication, auto-scaling, load balancing, traffic routing, and monitoring container health do not come out of the box with Docker and Docker-Compose. For managing containers and a microservices architecture at scale, you need a container orchestration tool like ECS or EKS. Also, we did not use any Docker repository to store our Nodejs app Docker image. You can use AWS ECR, a fully managed AWS container registry offering high-performance hosting. Conclusion Deploying a Nodejs app to AWS does not just mean creating a Nodejs application and deploying it on an AWS EC2 instance with a self-managed database. There are various aspects, like containerizing the Nodejs app, SSL termination, and a domain for the app, that come into the picture when you want to speed up your software development, deployment, security, reliability, and data redundancy. In this article, we saw the steps to dockerize the sample Nodejs application, use AWS RDS Amazon Aurora, and deploy the Nodejs app to an EC2 instance using Docker and Docker-Compose. We enabled SSL termination on the sub-domain used to access the Nodejs application. We saw the steps to automate domain validation and SSL certificate creation using Certbot, along with a way to automate the renewal of certificates that are valid for 90 days. This is enough to get started with a sample Nodejs application; however, when it comes to managing your real-world applications, with hundreds of microservices, thousands of containers, volumes, networking, secrets, and egress-ingress, you need a container orchestration tool. There are various tools, like self-hosted Kubernetes, AWS ECS, and AWS EKS, that you can leverage to manage the container life cycle in your real-world applications.
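One last practical note on the index.js shown earlier in this article: it hardcodes the Aurora host, user, and password. In line with the article's own advice against simple credentials in production, a hedged sketch of reading those connection settings from environment variables instead (the variable names are illustrative, not taken from the original repository) could look like this:
JavaScript
const mysql = require('mysql');

// Read connection settings from the environment instead of hardcoding them.
// These variable names are only examples; pass them to `docker run` with -e,
// or define them under `environment:` in docker-compose.yml.
const con = mysql.createConnection({
  host: process.env.DB_HOST,       // e.g. the Aurora cluster endpoint
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME    // e.g. "main"
});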

By Rahul Shivalkar
32 Best JavaScript Snippets

Hi there, my name is Rahul, and I am 18 years old, learning development and designing sometimes. Today, I'd like to share some useful JavaScript code snippets I have saved that I think can help make your life as a developer easier. Let's get started! Generate a random number between two values: const randomNumber = Math.random() * (max - min) + min Check if a number is an integer: const isInteger = (num) => num % 1 === 0 Check if a value is null or undefined: const isNil = (value) => value === null || value === undefined Check if a value is a truthy value: const isTruthy = (value) => !!value Check if a value is a falsy value: const isFalsy = (value) => !value Check if a value is a valid credit card number: JavaScript const isCreditCard = (cc) => { const regex = /(?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})/; return regex.test(cc); } Check if a value is an object: const isObject = (obj) => obj === Object(obj) Check if a value is a function: const isFunction = (fn) => typeof fn === 'function' Remove Duplicated from Array const removeDuplicates = (arr) => [...new Set(arr)]; Check if a value is a promise: const isPromise = (promise) => promise instanceof Promise Check if a value is a valid email address: JavaScript const isEmail = (email) => { const regex = /(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))/; return regex.test(email); } Check if a string ends with a given suffix: const endsWith = (str, suffix) => str.endsWith(suffix) Check if a string starts with a given prefix: const startsWith = (str, prefix) => str.startsWith(prefix) Check if a value is a valid URL: JavaScript const isURL = (url) => { const regex = /(?:http(s)?:\/\/)?[\w.-]+(?:\.[\w\.-]+)+[\w\-\._~:/?#[\]@!\$&'\(\)\*\+,;=.]+/; return regex.test(url); } Check if a value is a valid hexadecimal color code: JavaScript const isHexColor = (hex) => { const regex = /#?([0-9A-Fa-f]{6}|[0-9A-Fa-f]{3})/; return regex.test(hex); } Check if a value is a valid postal code: JavaScript const isPostalCode = (postalCode, countryCode) => { if (countryCode === 'US') { const regex = /[0-9]{5}(?:-[0-9]{4})?/; return regex.test(postalCode); } else if (countryCode === 'CA') { const regex = /[ABCEGHJKLMNPRSTVXY][0-9][ABCEGHJKLMNPRSTVWXYZ] [0-9][ABCEGHJKLMNPRSTVWXYZ][0-9]/; return regex.test(postalCode.toUpperCase()); } else { // Add regex for other country codes as needed return false; } } Check if a value is a DOM element: JavaScript const isDOMElement = (value) => typeof value === 'object' && value.nodeType === 1 && typeof value.style === 'object' && typeof value.ownerDocument === 'object'; Check if a value is a valid CSS length (e.g. 10px, 1em, 50%): JavaScript const isCSSLength = (value) => /([-+]?[\d.]+)(%|[a-z]{1,2})/.test(String(value)); Check if a value is a valid date string (e.g. 
2022-09-01, September 1, 2022, 9/1/2022): JavaScript const isDateString = (value) => !isNaN(Date.parse(value)); Check if a value is a number representing a safe integer (those integers that can be accurately represented in JavaScript): const isSafeInteger = (num) => Number.isSafeInteger(num) Check if a value is a valid Crypto address: JavaScript //Ethereum const isEthereumAddress = (address) => { const regex = /0x[a-fA-F0-9]{40}/; return regex.test(address); } //bitcoin const isBitcoinAddress = (address) => { const regex = /[13][a-km-zA-HJ-NP-Z0-9]{25,34}/; return regex.test(address); } // ripple const isRippleAddress = (address) => { const regex = /r[0-9a-zA-Z]{33}/; return regex.test(address); } Check if a value is a valid RGB color code: JavaScript const isRGBColor = (rgb) => { const regex = /rgb\(\s*([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\s*,\s*([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\s*,\s*([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\s*\)/; return regex.test(rgb); } Quickly create an array of characters from a string: JavaScript const string = "abcdefg"; const array = [...string]; Quickly create an object with all the properties and values of another object but with a different key for each property: JavaScript const original = {a: 1, b: 2, c: 3}; const mapped = {...original, ...Object.keys(original).reduce((obj, key) => ({...obj, [key.toUpperCase()]: original[key]}), {})}; Quickly create an array of numbers from 1 to 10: JavaScript const array = [...Array(10).keys()].map(i => i + 1); Quickly shuffle an array: JavaScript const shuffle = (array) => array.sort(() => Math.random() - 0.5); Convert an array-like object (such as a NodeList) to an array: JavaScript const toArray = (arrayLike) => Array.prototype.slice.call(arrayLike); Sort Arrays: JavaScript //Ascending const sortAscending = (array) => array.sort((a, b) => a - b); //Descending const sortDescending = (array) => array.sort((a, b) => b - a); Debounce a function: JavaScript const debounce = (fn, time) => { let timeout; return function(...args) { clearTimeout(timeout); timeout = setTimeout(() => fn.apply(this, args), time); }; }; Open a new tab with a given URL: JavaScript const openTab = (url) => { window.open(url, "_blank"); }; Get the difference between two dates: JavaScript const dateDiff = (date1, date2) => Math.abs(new Date(date1) - new Date(date2)); Generate a random string of a given length: JavaScript const randomString = (length) => { let result = ""; const characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; for (let i = 0; i < length; i++) { result += characters.charAt(Math.floor(Math.random() * characters.length)); } return result; }; Get value of cookie: JavaScript const getCookie = (name) => { const value = `; ${document.cookie}`; const parts = value.split(`; ${name}=`); if (parts.length === 2) return parts.pop().split(";").shift(); }; Thank you for Reading. It is important to note that simply copying and pasting code without understanding how it works can lead to problems down the line. It is always a good idea to test the code and ensure that it functions properly in the context of your project. Also, don't be afraid to customize the code to fit your needs. As a helpful tip, consider saving a collection of useful code snippets for quick reference in the future. I am also learning if I am going wrong somewhere, let me know.
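To put that advice into practice, here is a quick, hedged usage sketch of the debounce helper from the list above (the #search element and the 300 ms delay are just assumed examples):
JavaScript
// The debounce helper from the list, repeated here so the sketch is self-contained.
const debounce = (fn, time) => {
  let timeout;
  return function (...args) {
    clearTimeout(timeout);
    timeout = setTimeout(() => fn.apply(this, args), time);
  };
};

// Only run the search once the user has stopped typing for 300 ms.
const onInput = debounce((event) => {
  console.log("Searching for:", event.target.value);
}, 300);

document.querySelector("#search").addEventListener("input", onInput);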

By Rahul .
Why and How To Create an Event Bus in Vue.js 3

Since I’m working on the 2.0 version of my product’s UI (to be released in May), I’m publishing some technical tricks I learned from migrating my application from Vue.js 2 to Vue.js 3. The current version of the Inspector frontend application uses a global event bus. It’s needed to make the root component of the application (App.vue) aware of certain events fired inside the component chain. Before going into the details of this specific implementation, let me describe the principles of component communication in Vue.js and why we decided to use a global event bus even though it is not a recommended practice. Vue.js Components Communication In Vue.js, each component is responsible for rendering a piece of the UI users interact with. Sometimes, components are self-sufficient and can retrieve data directly from the backend APIs. More often, a component is a child of a more structured view that needs to pass data down to child components to render a specific part of the UI. Components can expose props to their parents. Props are the data a component needs from a parent to do its work. In the example below, the component needs the user object to render its name with different styles based on its role: JavaScript <template> <span :class="{'text-danger': user.is_admin}"> {{ user.first_name }} {{ user.last_name }} </span> </template> <script> export default { props: { user: { type: Object, required: true } } } </script> This strategy enforces reusability, promotes maintainability, and is the best practice. But it works only for components with a direct relation in the tree, like a Menu.vue component that uses the User.vue component to show the user name: JavaScript <template> <div class="menu"> <a href="#" class="menu-item"> Home </a> <a href="/user" class="menu-item"> <User :user="user"/> </a> </div> </template> <script> export default { data() { return { user: {} }; }, created() { axios.get('/api/user').then(res => this.user = res.data); } } </script> How Can Unrelated Components Communicate? Vue.js Global State There are two ways of making unrelated components communicate with each other: Vuex Event Bus Vuex is a state management library. At first glance, it seems complicated, and in fact, it is a bit. You can use Vuex to store data that should be available globally in your app. Vuex provides a solid API to apply changes to this data and reflect them in all child components that use the Vuex data store. An event bus follows the classic events/listeners architecture: you fire an event that is handled by a listener function, if one is registered for that event. In a typical app, 98% of what you need to share across components is state, not a function call, so you should use Vuex by default and fall back to an event bus only when Vuex becomes a hurdle. Event Bus Use Case: Our Scenario I came across a scenario where I need to run a function, not manage a state, so Vuex doesn’t provide the right solution. In Inspector, the link to access the form for creating a new project is spread over several parts of the application. When the project is successfully created, the application should immediately redirect to the installation instructions page. No matter where you are in the application, we must push the new route into the router to force navigation: JavaScript (project) => { this.$router.push({ name: 'projects.monitoring', params: { id: project.id } }); }; How To Create an Event Bus in Vue3 Vue 2 has an internal event bus by default.
It exposes the $emit() and $on() methods to fire and listen for events. So, you could use an instance of Vue as an event bus: JavaScript export const bus = new Vue(); In Vue 3, Vue is not a constructor anymore, and Vue.createApp({}); returns an object that has no $on and $emit methods. As suggested in official docs, you could use the mitt library to dispatch events across components. First, install mitt: Shell npm install --save mitt Next, I created a Vue plugin to make a mitt instance available inside the app: JavaScript import mitt from 'mitt'; export default { install: (app, options) => { app.config.globalProperties.$eventBus = mitt(); } } This plugin simply adds the $eventBus global property to the Vue instance so we can use it in every component calling this.$eventBus. Use the plugin in your app instance: JavaScript import { createApp } from 'vue'; const app = createApp({}); import eventBus from './Plugins/event-bus'; app.use(eventBus); Now, we can fire the event “project-created” from the project form to fire the function defined in the App.vue component: JavaScript this.$eventBus.emit('project-created', project); The global listener is placed in the root component of the app (App.vue): JavaScript export default { created() { this.$eventBus.on('project-created', (project) => { this.$router.push({ name: 'projects.monitoring', params: { id: project.id } }); }); } } Conclusion By now, you should have a better understanding of migrating from Vue.js 2 to Vue.js 3, Vue.js component communications, and the Vue.js global state. I hope this article will help you make better decisions for the design of your application.
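One detail worth adding to the pattern above: mitt also exposes an off() method, so components that don't live for the whole application lifetime (unlike App.vue) should unregister their listeners when they are unmounted. A hedged sketch of a child component doing this, with an illustrative handler name:
JavaScript
export default {
  created() {
    // Keep a reference to the handler so the exact same function can be removed later
    this.onProjectCreated = (project) => {
      console.log('Project created:', project.id);
    };
    this.$eventBus.on('project-created', this.onProjectCreated);
  },
  unmounted() {
    // Remove the listener to avoid leaks once the component is destroyed
    this.$eventBus.off('project-created', this.onProjectCreated);
  }
}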

By Valerio Barbera
Shallow and Deep Copies in JavaScript: What’s the Difference?

Have you ever tried to create a duplicate of an object in JavaScript, but the output wasn’t what you expected? If so, this article will discuss the different techniques for cloning and how to correctly utilize them. This knowledge will help guarantee that you get the correct results whenever you clone an object. In JavaScript, we can create both shallow copies and deep copies of objects. Let’s dive into what each of these concepts means in JavaScript. Shallow Copy A shallow copy in JavaScript creates a new object whose top-level properties are copied, but whose nested objects are not duplicated: the copy holds references to the same nested objects as the original. This means top-level primitive values become independent, while changes made to nested data through either object are visible in both. In JavaScript, there are several ways to create a shallow copy: Object.assign() The Object.assign() method copies all enumerable own properties of one or more source objects to a target object, which is modified and returned. Here’s an example: let originalObj = { name: "John", age: 30, address: { city: "New York" } }; let copiedObj = Object.assign({}, originalObj); copiedObj.age = 40; copiedObj.address.city = "Los Angeles"; console.log(originalObj.age); // Output: 30 console.log(originalObj.address.city); // Output: Los Angeles As you can see, changing the top-level age property of the copy does not touch the original, but changing the nested address object does, because both objects share the same address reference. Spread Operator The spread operator (...) can also be used to create a shallow copy of an object. This operator spreads the properties of an object into a new object. Here’s an example: let originalObj = { name: "John", age: 30, address: { city: "New York" } }; let copiedObj = { ...originalObj }; copiedObj.address.city = "Los Angeles"; console.log(originalObj.address.city); // Output: Los Angeles Again, the nested address object is shared, so modifying it through the copy also affects the original. Deep Copy A deep copy creates a new object with all the properties and sub-properties of the original object duplicated. This means that any changes made to the copied object will not affect the original object. In JavaScript, there are several ways to create a deep copy: JSON.parse() and JSON.stringify() The easiest way to create a deep copy of an object is to use JSON.parse() and JSON.stringify(). Note that this only works for JSON-serializable data: functions and undefined values are dropped, and Dates become strings. Here’s an example: let originalObj = { name: "John", age: 30, address: { city: "New York", state: "NY" } }; let copiedObj = JSON.parse(JSON.stringify(originalObj)); copiedObj.address.city = "Los Angeles"; console.log(originalObj.address.city); // Output: New York As you can see, changing the city property of the copied object does not affect the city property of the original object. Recursion Another way to create a deep copy is to use recursion to copy all the properties of the original object and any sub-objects. Here’s an example: function deepCopy(obj) { let copiedObj = Array.isArray(obj) ? [] : {}; for (let key in obj) { if (typeof obj[key] === "object" && obj[key] !== null) { copiedObj[key] = deepCopy(obj[key]); } else { copiedObj[key] = obj[key]; } } return copiedObj; } let originalObj = { name: "John", age: 30, address: { city: "New York", state: "NY" } }; let copiedObj = deepCopy(originalObj); copiedObj.address.city = "Los Angeles"; console.log(originalObj.address.city); // Output: New York In this example, the deepCopy() function recursively copies all the properties and sub-properties of the original object. Any changes made to the copied object do not affect the original object.
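As a brief side note, modern JavaScript runtimes (recent browsers and Node.js 17+) also provide a built-in structuredClone() function that deep-copies most plain data, including nested objects, arrays, Dates, Maps, and Sets, without the limitations of the JSON round-trip (it still cannot copy functions). A quick sketch:
JavaScript
let originalObj = { name: "John", age: 30, address: { city: "New York", state: "NY" } };

// structuredClone() creates a fully independent deep copy
let copiedObj = structuredClone(originalObj);
copiedObj.address.city = "Los Angeles";

console.log(originalObj.address.city); // Output: New York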
A shallow copy creates a new object whose nested values still reference the same memory as the original, so a change to those shared, nested values is visible through both objects. Shallow copies are cheap and useful when you only need a new top-level container without duplicating everything beneath it. A deep copy, on the other hand, creates an object that is entirely independent of the original: changes made to either object never affect the other. Deep copies are useful when you need a genuinely separate object, at the cost of additional memory and copying time. The choice between the two depends on the requirements of your code and the desired outcome: prefer a shallow copy when a large object only needs to be referenced in a new structure, and a deep copy when you need a new object that is fully independent of the original.
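To make the contrast concrete, here is a small illustrative snippet that combines the techniques shown above (the spread operator and the deepCopy() helper from the Recursion section) on the same nested object:

JavaScript

// Contrast: the same nested mutation through a shallow copy vs. a deep copy.
let original = { name: "John", address: { city: "New York", state: "NY" } };

let shallow = { ...original };          // shallow copy (spread operator)
shallow.address.city = "Los Angeles";
console.log(original.address.city);     // "Los Angeles": the nested object is shared

let deep = deepCopy(original);          // deep copy (deepCopy() defined in the Recursion section)
deep.address.city = "Chicago";
console.log(original.address.city);     // still "Los Angeles": the deep copy is independent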

By Janki Mehta
Migrating From MySQL to YugabyteDB Using YugabyteDB Voyager

In this article, I'm going to demonstrate how you can migrate a comprehensive web application from MySQL to YugabyteDB using the open-source data migration engine YugabyteDB Voyager. Many teams are moving their applications from traditional, single-server relational databases to distributed database clusters to improve availability, scalability, and performance. Migrating to YugabyteDB lets engineers keep a familiar SQL interface while benefiting from the data resiliency and performance characteristics of distributed databases.

YugaSocial Application

I've developed an application called YugaSocial, built to run on MySQL. YugaSocial is a Facebook clone with the ability to make posts, follow users, comment on posts, and more! Let's start by deploying and connecting to a Google Cloud SQL database running MySQL. Later, we'll migrate our data to a multi-node YugabyteDB Managed cluster.

Getting Started With MySQL

We could run MySQL on our machines using a local installation or Docker, but I'm going to demonstrate how to migrate a database hosted on the Google Cloud Platform (GCP) to YugabyteDB Managed.

Setting Up Google Cloud SQL

I've deployed a MySQL instance on Google Cloud SQL named yugasocial and added my public IP address to the authorized networks list so I can connect directly from my machine. While convenient for demonstration purposes, I'd recommend connecting securely from inside a VPC, with SSL certificates, to properly secure your data transfers.

Connecting YugaSocial to MySQL in Node.js

Connecting to our MySQL instance in the cloud is easy with the MySQL driver for Node.js. This application code snippet connects to the MySQL instance:

JavaScript

// connect.js
...
import mysql from "mysql";

if (process.env.DB_TYPE === "mysql") {
  const pool = mysql.createPool({
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    connectionLimit: 100
  });
}

I've created a connection pool with up to 100 established connections. By setting environment variables with our Google Cloud SQL instance configuration and running the application, we can confirm that our database has been configured properly:

Shell

> DB_TYPE=mysql DB_USER=admin DB_HOST=[HOST] DB_PASSWORD=[PASSWORD] node index.js
Connection to MySQL verified.
Server running on port 8800.

After verifying that our MySQL database is running in the cloud, we can start migrating to YugabyteDB Managed.

Setting Up YugabyteDB Managed

It takes less than five minutes to get started with YugabyteDB Managed. First, create an account, then follow the steps to create a YugabyteDB cluster. I've chosen to deploy a three-node cluster to GCP, in the us-west1 region. This configuration provides fault tolerance across availability zones. Add your IP address to the cluster allow list so you can connect from your machine to the remote database, and download the database credentials before creating your cluster. Once the cluster has been deployed, we're ready to begin migrating with YugabyteDB Voyager.

Migrating to YugabyteDB

Having verified our MySQL deployment, it's time to migrate from Cloud SQL to YugabyteDB using the YugabyteDB Voyager CLI. YugabyteDB Voyager is a powerful, open-source data migration engine that manages the entire lifecycle of a migration. After installing YugabyteDB Voyager, we'll begin by creating users in our source and target databases and granting them roles.
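The "Connection to MySQL verified." message above comes from application code that isn't shown in the snippet. A minimal sketch of such a startup check, assuming the pool created in connect.js (the helper below is illustrative and not part of the original YugaSocial code), could look like this:

JavaScript

// Illustrative only: a simple startup check using the pool created in connect.js above.
function verifyMysqlConnection(pool) {
  pool.query("SELECT 1", (err) => {
    if (err) {
      console.error("Could not reach MySQL:", err.message);
      process.exit(1);
    }
    console.log("Connection to MySQL verified.");
  });
}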
I've chosen to use the mysqlsh command-line utility to connect to my cloud instance, but Google provides multiple connection options.

1. Create the ybvoyager user in Cloud SQL and grant it the required permissions:

SQL

> mysqlsh root@CLOUD_SQL_HOST --password='CLOUD_SQL_PASSWORD'
> \sql
SQL=> \use social
SQL=> CREATE USER 'ybvoyager'@'%' IDENTIFIED WITH mysql_native_password BY 'Password#123';
SQL=> GRANT PROCESS ON *.* TO 'ybvoyager'@'%';
SQL=> GRANT SELECT ON social.* TO 'ybvoyager'@'%';
SQL=> GRANT SHOW VIEW ON social.* TO 'ybvoyager'@'%';
SQL=> GRANT TRIGGER ON social.* TO 'ybvoyager'@'%';
SQL=> GRANT SHOW_ROUTINE ON *.* TO 'ybvoyager'@'%';

2. Repeat this process using the YugabyteDB Managed Cloud Shell:

SQL

-- Optionally, you can create a database for the import. Otherwise, the target database will default to 'yugabyte'.
yugabyte=> CREATE DATABASE social;
yugabyte=> CREATE USER ybvoyager PASSWORD 'password';
yugabyte=> GRANT yb_superuser TO ybvoyager;

Now our source and target databases are equipped to use Voyager. To export from Cloud SQL, we first need to create an export directory and an associated environment variable:

Shell

> mkdir ~/export-dir
> export EXPORT_DIR=$HOME/export-dir

This directory acts as an intermediary between our source and target databases. It will house schema and data files, as well as logs, metadata, and schema analysis reports. Let's begin migrating our database.

1. Export the schema from Google Cloud SQL:

Shell

> yb-voyager export schema --export-dir ~/export-dir \
        --source-db-type mysql \
        --source-db-host CLOUD_SQL_HOST \
        --source-db-user ybvoyager \
        --source-db-password 'Password#123' \
        --source-db-name social

export of schema for source type as 'mysql'
mysql version: 8.0.26-google
exporting TABLE done
exporting PARTITION done
exporting VIEW done
exporting TRIGGER done
exporting FUNCTION done
exporting PROCEDURE done

Exported schema files created under directory: /export-dir/schema

2. Analyze the exported schema:

Shell

> yb-voyager analyze-schema --export-dir ~/export-dir --output-format txt

find schema analysis report at: /export-dir/reports/report.txt

By analyzing our schema before exporting the data, we have the option to make any necessary changes to our DDL statements. The schema analysis report will flag any statements that require manual intervention. In the case of YugaSocial, Voyager migrated the MySQL schema to PostgreSQL DDL without needing any manual changes.

3. Finally, export the data from Google Cloud SQL:

Shell

> yb-voyager export data --export-dir ~/export-dir \
        --source-db-type mysql \
        --source-db-host CLOUD_SQL_HOST \
        --source-db-user ybvoyager \
        --source-db-password 'Password#123' \
        --source-db-name social

export of data for source type as 'mysql'
Num tables to export: 6
table list for data export: [comments likes posts relationships stories users]
calculating approx num of rows to export for each table...
Initiating data export.
Data export started.
Exported tables:- {comments, likes, posts, relationships, stories, users}

TABLE            ROW COUNT
comments         1000
likes            502
posts            1000
relationships    1002
stories          1000
users            1004

Export of data complete ✅

After successfully exporting our schema and data, we're ready to move our database to YugabyteDB Managed.
1. Import the schema to YugabyteDB Managed:

Shell

> yb-voyager import schema --export-dir ~/export-dir \
        --target-db-host YUGABYTEDB_MANAGED_HOST \
        --target-db-user ybvoyager \
        --target-db-password 'password' \
        --target-db-name yugabyte \
        --target-db-schema social \
        --target-ssl-mode require \
        --start-clean

schemas to be present in target database "yugabyte": [social]
creating schema 'social' in target database...
table.sql: CREATE TABLE comments ( id bigserial, description varchar(200) NOT NULL, crea ...
table.sql: ALTER SEQUENCE comments_id_seq RESTART WITH 1;
table.sql: ALTER TABLE comments ADD UNIQUE (id);
table.sql: CREATE TABLE likes ( id bigserial, userid bigint NOT NULL, postid bigint NOT ...
table.sql: ALTER SEQUENCE likes_id_seq RESTART WITH 1;
table.sql: ALTER TABLE likes ADD UNIQUE (id);
table.sql: CREATE TABLE posts ( id bigserial, description varchar(200), img varchar(200) ...
...

As you can see from the terminal output, I've chosen to import into the social schema. If you'd like to use a different schema, you can do so with the --target-db-schema option.

2. Import the data to YugabyteDB Managed:

Shell

> yb-voyager import data --export-dir ~/export-dir \
        --target-db-host YUGABYTEDB_MANAGED_HOST \
        --target-db-user ybvoyager \
        --target-db-password 'password' \
        --target-db-name yugabyte \
        --target-db-schema social \
        --target-ssl-mode require \
        --start-clean

import of data in "yugabyte" database started
Using 2 parallel jobs by default. Use --parallel-jobs to specify a custom value
skipping already imported tables: []
Preparing to import the tables: [comments likes posts relationships stories users]
All the tables are imported
setting resume value for sequences

YugabyteDB Voyager handles this data import with parallelism, making quick work of it.

3. To wrap things up, import indexes and triggers:

Shell

> yb-voyager import schema --export-dir ~/export-dir \
        --target-db-host YUGABYTEDB_MANAGED_HOST \
        --target-db-user ybvoyager \
        --target-db-password 'password' \
        --target-db-name yugabyte \
        --target-db-schema social \
        --target-ssl-mode require \
        --start-clean \
        --post-import-data

INDEXES_table.sql: CREATE INDEX comments_postid ON comments (postid);
INDEXES_table.sql: CREATE INDEX comments_userid ON comments (userid);
INDEXES_table.sql: CREATE INDEX likes_postid ON likes (postid);
...

We no longer need the ybvoyager user in YugabyteDB Managed. To change ownership of the imported objects to another user in the YugabyteDB Managed Cloud Shell, run:

SQL

> REASSIGN OWNED BY ybvoyager TO admin;
> DROP OWNED BY ybvoyager;
> DROP USER ybvoyager;

It's time to verify that our database was successfully migrated to YugabyteDB Managed by reconfiguring our YugaSocial application.

Connecting YugaSocial to YugabyteDB Managed in Node.js

As mentioned, YugaSocial was developed to run on MySQL, but I also added support for PostgreSQL. Since YugabyteDB is PostgreSQL-compatible, we can use the node-postgres driver for Node.js to connect to our YugabyteDB Managed cluster. In fact, Yugabyte has developed its own smart drivers, which add load-balancing capabilities to the native drivers. This can drastically improve performance by avoiding excessive load on any single cluster node. After installing Yugabyte's fork of node-postgres, we're ready to connect to our database:
JavaScript

// connect.js
...
const { Pool } = require("@yugabytedb/pg");

if (process.env.DB_TYPE === "yugabyte") {
  const pool = new Pool({
    user: process.env.DB_USER,
    host: process.env.DB_HOST,
    password: process.env.DB_PASSWORD,
    port: 5433,
    database: process.env.DB_NAME,
    min: 5,
    max: 100,
    ssl: {
      rejectUnauthorized: false
    }
  });
}

This configuration is very similar to the MySQL driver's. By restarting our application with the proper environment variables for our connection details, we're able to confirm that our data was migrated successfully:

Shell

> DB_TYPE=yugabyte DB_USER=admin DB_HOST=[HOST] DB_PASSWORD=[PASSWORD] node index.js

Our application functions just the same as before. This time, I replied to Yana to let her know that YugaSocial had officially been migrated to YugabyteDB Managed! A short, illustrative verification query is also sketched after the conclusion below.

Conclusion

As you can see, YugabyteDB Voyager simplifies migration from MySQL to YugabyteDB. I encourage you to give it a try in your next coding adventure, whether you're migrating from MySQL or from other relational databases like PostgreSQL or Oracle. Look out for more articles on distributed SQL and Node.js from me in the near future. Until then, don't hesitate to reach out and keep on coding!
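For a quick, scriptable sanity check on top of the application itself, a row count can be compared against the export report. The sketch below is illustrative only (it is not part of the original YugaSocial code) and assumes the pg-compatible Pool configured above plus the social schema created during import:

JavaScript

// Illustrative only: count rows in one migrated table using the Pool from connect.js above.
// Assumes the tables were imported into the 'social' schema, as in the import commands.
async function verifyMigration(pool) {
  const { rows } = await pool.query("SELECT COUNT(*) AS count FROM social.users");
  console.log(`users rows in YugabyteDB: ${rows[0].count}`); // the export reported 1004 rows
}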

By Brett Hoyer

Top JavaScript Experts


Anthony Gore

Founder,
Vue.js Developers

I'm Anthony Gore and I'm here to teach you Vue.js! Through my books, online courses, and social media, my aim is to turn you into a Vue.js expert. I'm a Vue Community Partner, curator of the weekly Vue.js Developers Newsletter, and the founder of vuejsdevelopers.com, an online community for web professionals who love Vue.js. Curious about Vue? Take my free 30-minute "Vue.js Crash Course" to learn what Vue is, what kind of apps you can build with it, how it compares to React & Angular, and more. Enroll for free! https://courses.vuejsdevelopers.com/p/vue-js-crash-course?utm_source=dzone&utm_medium=bio

John Vester

Lead Software Engineer,
Marqeta @JohnJVester

Information Technology professional with 30+ years of expertise in application design and architecture, feature development, project management, system administration, and team supervision. Currently focusing on enterprise architecture and application design using object-oriented programming languages and frameworks. Prior experience building (Spring Boot) Java-based APIs against React and Angular client frameworks, plus CRM design, customization, and integration with Salesforce. Additional experience using both C# (.NET Framework) and J2EE (including Spring MVC, JBoss Seam, Struts Tiles, JBoss Hibernate, Spring JDBC).

Justin Albano

Software Engineer,
IBM

I am devoted to continuously learning and improving as a software developer and sharing my experience with others in order to improve their expertise. I am also dedicated to personal and professional growth through diligent studying, discipline, and meaningful professional relationships. When not writing, I can be found playing hockey, practicing Brazilian Jiu-jitsu, watching the NJ Devils, reading, writing, or drawing. ~II Timothy 1:7~ Twitter: @justinmalbano

Swizec Teller

CEO,
preona

I'm a writer, programmer, web developer, and entrepreneur. Preona is my current startup, which began its life as the team developing Twitulater. Our goal is to create a set of applications for the emerging Synaptic Web that rank real-time information streams in near real time, continuously reading user behaviour and learning how to react to it intelligently. twitter: @Swizec

The Latest JavaScript Topics

Understanding Angular Route Resolvers
Better manage your components in a few simple steps.
April 20, 2023
by Anastasios Theodosiou
· 112,967 Views · 7 Likes
Benefits of React V18: A Comprehensive Guide
This article will cover three key features of React v18: automatic batching, transition, and suspense on the server.
April 20, 2023
by Beste Bayhan
· 2,095 Views · 1 Like
Stream File Uploads to S3 Object Storage and Save Money
Learn how to upload files directly to S3-compatible Object Storage from your Node application to improve availability and reduce costs.
April 19, 2023
by Austin Gil CORE
· 2,141 Views · 1 Like
Start Playwright for Component Testing
This approach helps identify and fix issues early in the development process, leading to a more stable and reliable final product.
April 17, 2023
by Kailash Pathak [Cypress Ambassador]
· 2,764 Views · 2 Likes
Best Mobile App Frameworks That Use JavaScript, HTML, and CSS
This collection of mobile frameworks is guaranteed to benefit your development.
April 14, 2023
by Nilanchala Panigrahy
· 8,392 Views · 4 Likes
Angular Component Tree With Tables in the Leaves and a Flexible JPA Criteria Backend
Quickly build a tree with tables in its leaves, along with a flexible backend for it.
April 14, 2023
by Sven Loesekann
· 4,751 Views · 6 Likes
Bridging WebAssembly Gaps With Components and Wasifills
How wasifills — a component adapter pattern like polyfills, but for components — can help bridge the gap between the rapidly changing standards landscape and the future of interoperable components.
April 13, 2023
by Kevin Hoffman
· 3,484 Views · 1 Like
Building a Rest API With AWS Gateway and Node.js
With AWS Gateway, you can create RESTful APIs that expose your data and business logic to developers, who can then build great applications that consume your API.
April 11, 2023
by Derric Gilling CORE
· 5,540 Views · 1 Like
Streaming in Mule 4: Processing Large Data Sets
The fundamental task of MuleSoft is to integrate different systems. We will explore leveraging Streaming in Mule 4 to process large datasets.
April 11, 2023
by Ishalveer Singh Randhawa
· 2,895 Views · 1 Like
What Is a Streaming Database?
A streaming database can help make better decisions, identify opportunities, and respond to real-time threats.
April 11, 2023
by Yingjun Wu
· 2,929 Views · 4 Likes
The Holy Grail of Agile-DevOps Value Stream Hunting: Actualizing DevOps Transition Purpose
In modern product development, understanding value streams is crucial to optimizing our ways of working and delivering value to customers.
April 10, 2023
by Priya Kumari
· 5,205 Views · 1 Like
Embracing Asynchrony in Java, Python, JavaScript, and Go
The article discusses asynchrony in four languages, emphasizing its role in creating efficient, responsive applications.
April 7, 2023
by Andrei Tetka
· 6,775 Views · 2 Likes
How to Perform Component Testing Using Cypress
This article explains how to set up and test React components with the help of Cypress.
April 7, 2023
by Kailash Pathak [Cypress Ambassador]
· 5,519 Views · 3 Likes
My First Firefox Extension
Follow the journey of creating a CFP submission helper in the form of a Firefox extension. It was not a walk in the park.
April 6, 2023
by Nicolas Fränkel CORE
· 4,697 Views · 6 Likes
The Kappa Architecture: A Cutting-Edge Approach for Data Engineering
In this article, we will explore the Kappa Architecture and its key features that make it a cutting-edge approach for data engineering.
April 6, 2023
by Amlan Patnaik
· 4,768 Views · 1 Like
Using Environment Variable With Angular
This article will help you use environment variables to replace generic variables you want to use inside any Angular project.
April 5, 2023
by Ömer Faruk Kırlı
· 5,661 Views · 5 Likes
Monitoring Data Stream Applications in Enterprises To Meet SLAs
In this article, we will discuss in detail the importance of monitoring data stream applications and why it is critical for enterprises.
April 4, 2023
by Kiran Peddireddy
· 4,886 Views · 4 Likes
Reasons to Use Tailwind CSS in React Native Development Projects
This article explores the reasons to use Tailwind CSS in React Native app development projects.
April 3, 2023
by Parija Rangnekar
· 3,189 Views · 1 Like
A Developer's Dilemma: Flutter vs. React Native
This article discusses in detail React Native and Flutter, which are the two most popular cross-platform mobile app development frameworks at the moment.
April 3, 2023
by Uday Pitambare
· 5,868 Views · 2 Likes
How to Identify Locators in Appium (With Examples)
This tutorial focuses on using the Appium automation tool to automate Android and iOS applications with different locators.
March 31, 2023
by Wasiq Bhamla
· 3,720 Views · 2 Likes
