ChatGPT is a cutting-edge natural language processing model created by OpenAI, an artificial intelligence research organization co-founded by Elon Musk, Sam Altman, and Greg Brockman. The underlying GPT family of models was first introduced in June 2018 and has been continuously refined since, with GPT-3 released in 2020. ChatGPT, which is built on the GPT-3.5 series of models, was released in November 2022. The "GPT" in ChatGPT stands for "Generative Pre-trained Transformer," a reference to the model's deep learning architecture. ChatGPT essentially uses massive amounts of text data to "learn" the intricacies of human language, allowing it to create replies that are frequently difficult to distinguish from those written by humans. The ChatGPT technology is based on a type of neural network known as a transformer, which is well-suited to natural language processing applications. ChatGPT's capacity to create high-quality output for a wide range of prompts you give in the chat box is one of its key features: the response you get feels like it has been generated by a human, and you can also specify the conditions of your request. The model was trained on a vast corpus of text, so it can rely on a wide range of data to answer questions, provide information, or participate in casual conversation. It is crucial to remember, however, that ChatGPT is not flawless. When faced with prompts that are beyond the model's area of knowledge, it may provide illogical or unsuitable replies. Demand for ChatGPT is sometimes high, and, because of this, there can be a limit to how many queries you can send. Now, since the main purpose of this article is to give you a comprehensive and hands-on example of how ChatGPT can be used for generating code for a REST API, let's first understand what a REST API is. A REST API is a type of web service that allows two computer systems to communicate with one another over the internet by following a set of standardized rules. It follows the REST architecture, which is known for its stateless and client-server nature. A REST API works by the client sending a request to the server using a specific URL and HTTP method, such as GET, POST, PUT, or DELETE, and the server responding with a message containing the requested data or indicating whether the requested operation was successful. REST APIs are widely used in modern web and mobile applications because they provide a standardized and efficient method of exchanging data between systems. You can usually learn more about a specific API by reading through its documentation. Some APIs are commercial and usually have different pricing plans, which also means that making a request requires authentication, typically a bearer token. Now comes the interesting part: creating a fully functional REST API in C# for managing students with the help of ChatGPT, so keep on reading.
Creating the API
In this section, we'll let ChatGPT code a fully functional REST API for managing students, including creation, deletion, and modification of student records. To do this, we should have a "students controller," which would expose the API endpoints, and a "student class," which would hold the needed fields such as "Id," "Name," and "Age." The only job we have here is to write logical, well-structured prompts, which would become gradually more complex as we build the API part by part. Let's begin.
Write a REST API for Managing Students in C#
Here's an example of a RESTful API for managing students in C#.
This API will use the HTTP methods GET, POST, PUT, and DELETE to perform CRUD (Create, Read, Update, Delete) operations on a list of student resources. First, let’s define a “student class” to represent a student resource: C# public class Student { public int Id { get; set; } public string Name { get; set; } public int Age { get; set; } public string Email { get; set; } } Next, let’s create a StudentsController class to handle the HTTP requests: C# [Route("api/students")] public class StudentsController : Controller { private static List<Student> students = new List<Student> { new Student { Id = 1, Name = "John Smith", Age = 20, Email = "john.smith@example.com" }, new Student { Id = 2, Name = "Jane Doe", Age = 19, Email = "jane.doe@example.com" }, new Student { Id = 3, Name = "Bob Johnson", Age = 21, Email = "bob.johnson@example.com" } }; [HttpGet] public IActionResult GetAllStudents() { return Ok(students); } [HttpGet("{id}")] public IActionResult GetStudent(int id) { var student = students.FirstOrDefault(s => s.Id == id); if (student == null) { return NotFound(); } return Ok(student); } [HttpPost] public IActionResult CreateStudent([FromBody] Student student) { student.Id = students.Max(s => s.Id) + 1; students.Add(student); return CreatedAtAction(nameof(GetStudent), new { id = student.Id }, student); } [HttpPut("{id}")] public IActionResult UpdateStudent(int id, [FromBody] Student student) { var existingStudent = students.FirstOrDefault(s => s.Id == id); if (existingStudent == null) { return NotFound(); } existingStudent.Name = student.Name; existingStudent.Age = student.Age; existingStudent.Email = student.Email; return NoContent(); } [HttpDelete("{id}")] public IActionResult DeleteStudent(int id) { var student = students.FirstOrDefault(s => s.Id == id); if (student == null) { return NotFound(); } students.Remove(student); return NoContent(); } This StudentsController class defines the following endpoints: GET /api/students: Returns a list of all students. GET /api/students/{id}: Returns a single student resource by ID. POST /api/students: Creates a new student resource. PUT /api/students/{id}: Updates an existing student resource by ID. DELETE /api/students/{id}: Deletes an existing student resource by ID. Note: we are using the [FromBody] attribute to deserialize the request body into a “student object” for the CreateStudent and UpdateStudent methods. Also note that the students list is static and initialized with some sample data. In a real-world scenario, this data would likely be stored in a database or some other persistent storage. Finally, we’re using the HTTP status codes 200 OK, 201 Created, 204 No Content, and 404 Not Found to indicate the outcome of each operation. Add More Properties to the Student Class Here’s an updated version of the “student class” with some additional properties: C# public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } } In this updated version of the class, we’ve added the following properties: FirstName: The first name of the student. LastName: The last name of the student. Address: The street address of the student. City: The city where the student lives. State: The state where the student lives. ZipCode: The ZIP code of the student’s address. 
These additional properties can provide more information about the student and can be useful for filtering, sorting, and searching operations in the API. Add Another Property to the Student Class Regarding the Subjects the Student Has Taken Here’s an updated version of the “student class” with an additional property for the subjects the student has taken: C# public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } public List<string> SubjectsTaken { get; set; } } In this updated version of the class, we’ve added a property called SubjectsTaken, which is a list of strings representing the names of the subjects the student has taken. This property can be useful for querying and filtering students by the subjects they have studied. You can add subjects to the list like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = "CA", ZipCode = "12345", SubjectsTaken = new List<string> { "Math", "Science", "History" } }; And, you can retrieve the list of subjects for a given student like this: C# var student = GetStudent(1); var subjects = student.SubjectsTaken; Change SubjectTaken to an Object Named Subject Containing the Properties Id, Name, Teacher, and Credits Here’s an updated version of the “student class” where the SubjectsTaken property is replaced with a new object subject containing the Id, Name, Teacher, and Credits properties: C# public class Subject { public int Id { get; set; } public string Name { get; set; } public string Teacher { get; set; } public int Credits { get; set; } } public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve replaced the SubjectsTaken property with a new object subject containing the Id, Name, Teacher, and Credits properties. This new structure can provide more detailed information about the subjects the student has taken, and it can be useful for filtering and querying students based on specific subject properties. You can add a subject to the SubjectsTaken list like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = "CA", ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. 
Jones", Credits = 3 } } }; And you can retrieve the list of subjects for a given student like this: C# var student = GetStudent(1); var subjects = student.SubjectsTaken; Change the State Property in the Student Class to Enum C# public enum State { AL, AK, AZ, AR, CA, CO, CT, DE, FL, GA, HI, ID, IL, IN, IA, KS, KY, LA, ME, MD, MA, MI, MN, MS, MO, MT, NE, NV, NH, NJ, NM, NY, NC, ND, OH, OK, OR, PA, RI, SC, SD, TN, TX, UT, VT, VA, WA, WV, WI, WY } public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public State State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve replaced the State property with an enum called State. The State enum contains a list of state abbreviations and provides type safety for the State property. You can set the State property for a student like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. Jones", Credits = 3 } } }; And you can retrieve the State property for a given student like this: C# var student = GetStudent(1); var state = student.State; Add Validation for Making the ID, FirstName, and Email Mandatory Here’s an updated version of the student class that includes validation for the Id, FirstName, and Email properties: C# public class Student { private int _id; private string _firstName; private string _email; public int Id { get { return _id; } set { if (value <= 0) { throw new ArgumentException("Id must be greater than zero."); } _id = value; } } public string FirstName { get { return _firstName; } set { if (string.IsNullOrEmpty(value)) { throw new ArgumentException("First name cannot be null or empty."); } _firstName = value; } } public string LastName { get; set; } public int Age { get; set; } public string Email { get { return _email; } set { if (string.IsNullOrEmpty(value)) { throw new ArgumentException("Email cannot be null or empty."); } _email = value; } } public string Address { get; set; } public string City { get; set; } public State State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve added private fields for the Id, FirstName, and Email properties and added validation logic to their corresponding setter methods. The Id property must be greater than zero, and the FirstName and Email properties cannot be null or empty. If any of these validation checks fail, an ArgumentException will be thrown. You can create a new student object like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. 
Jones", Credits = 3 } } }; If you attempt to create a new student object with invalid data, an ArgumentException will be thrown: C# var invalidStudent = new Student { Id = 0, // throws ArgumentException FirstName = "", // throws ArgumentException LastName = "Doe", Age = 20, Email = "", // throws ArgumentException Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject>() }; Make the Students Controller Asynchronous Here’s an updated version of the StudentsController class that makes use of asynchronous methods: C# [ApiController] [Route("api/[controller]")] public class StudentsController : ControllerBase { private readonly IStudentRepository _studentRepository; public StudentsController(IStudentRepository studentRepository) { _studentRepository = studentRepository; } [HttpGet] public async Task<ActionResult<IEnumerable<Student>>> GetAllStudents() { var students = await _studentRepository.GetAllStudentsAsync(); return Ok(students); } [HttpGet("{id}")] public async Task<ActionResult<Student>> GetStudentById(int id) { var student = await _studentRepository.GetStudentByIdAsync(id); if (student == null) { return NotFound(); } return Ok(student); } [HttpPost] public async Task<ActionResult<Student>> CreateStudent(Student student) { if (!ModelState.IsValid) { return BadRequest(ModelState); } await _studentRepository.AddStudentAsync(student); return CreatedAtAction(nameof(GetStudentById), new { id = student.Id }, student); } [HttpPut("{id}")] public async Task<IActionResult> UpdateStudent(int id, Student student) { if (id != student.Id) { return BadRequest(); } if (!ModelState.IsValid) { return BadRequest(ModelState); } var existingStudent = await _studentRepository.GetStudentByIdAsync(id); if (existingStudent == null) { return NotFound(); } await _studentRepository.UpdateStudentAsync(student); return NoContent(); } [HttpDelete("{id}")] public async Task<IActionResult> DeleteStudent(int id) { var existingStudent = await _studentRepository.GetStudentByIdAsync(id); if (existingStudent == null) { return NotFound(); } await _studentRepository.DeleteStudentAsync(existingStudent); return NoContent(); } } In this updated version of the class, we’ve added the async keyword to the action methods that interact with the IStudentRepository. We’ve also added the await keyword before calling the asynchronous methods to wait for the result to complete before continuing. If the action method returns a value, we’ve wrapped it in the ActionResult<T> type, where “T” is the type of the returned value. If an error occurs, we return an appropriate ActionResult, such as BadRequest() or NotFound(). Note: the IStudentRepository interface must also be updated to include asynchronous versions of its methods. Here’s an example of what the updated IStudentRepository might look like: C# public interface IStudentRepository { Task<IEnumerable<Student>> GetAllStudentsAsync(); Task<Student> GetStudentByIdAsync(int id); Task AddStudentAsync(Student student); Task UpdateStudentAsync(Student student); Task DeleteStudentAsync(Student student); } In this version of the interface, we’ve added the async keyword to each method and changed the return types to Task<T>, where “T” is the type of the returned value. This allows the calling code to use the await keyword to wait for the method to complete. Conclusion In this article, we used ChatGPT to build a REST API for managing students using ASP.NET Core and C# programming language. 
This API is a basic web service that provides CRUD (Create, Read, Update, Delete) operations for managing a list of students through several distinct endpoints. ChatGPT was able to add basic validation to the student class, as well as make the code asynchronous. Apart from that, this AI tool easily changed the type of the State property to an enum and created a whole new class when we needed a new object for the subject property. All in all, it did a great job and created fully functional classes, a controller, and a repository. ChatGPT, in my opinion, should be taken seriously as a programming aid, especially for simple and repetitive tasks, since it can automate them and recommend more efficient and effective programming techniques. As a result, it can enable programmers to save time that they can then use to solve more challenging and creative programming tasks. However, I also believe it's important to use it as a complement to human programming skills rather than a replacement, since it is oftentimes incapable of providing optimal solutions in complex situations. I am confident this article was enough to demonstrate the power of ChatGPT even to the biggest skeptics out there. If you are a developer, you can use ChatGPT for various other use cases, such as validating or testing code. You can even use it for code generation in Python. Many companies are already looking for ways to integrate ChatGPT into their platforms. In summary, ChatGPT is an impressive AI language model that has the potential to revolutionize the way we approach programming, and while it may not be able to completely replace human programmers, it can certainly be considered a valuable resource.
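As a closing aside, and to connect back to the REST basics covered at the start of this article, here is a small, hypothetical client sketch (in Python rather than C#) that exercises the generated endpoints. The host, port, and JSON field casing are placeholders that depend on how you run and configure the ASP.NET Core project:
Python
import requests

BASE_URL = "http://localhost:5000/api/students"  # placeholder host/port for the API above

# GET /api/students - list all students (200 OK)
response = requests.get(BASE_URL)
print(response.status_code, response.json())

# POST /api/students - create a new student (201 Created)
new_student = {
    "id": 4,  # the first controller version assigns the Id server-side; the validated model expects one
    "firstName": "Alice",
    "lastName": "Brown",
    "age": 22,
    "email": "alice.brown@example.com",
}
created = requests.post(BASE_URL, json=new_student)
print(created.status_code, created.json())

# DELETE /api/students/{id} - remove a student (204 No Content)
deleted = requests.delete(f"{BASE_URL}/1")
print(deleted.status_code)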
Since the first introduction of the term DevOps, it seems that new "Ops"-related terms pop up as quickly as technology trends. For example:
AIOps: Enhance and automate various IT processes with AI.
MLOps: Develop, deploy, and manage machine learning.
FinOps: Optimize and manage cloud costs.
DevSecOps: Integrate security into the software development lifecycle (SDLC).
GitOps: Manage and deploy infrastructure and applications (code and configuration) using Git.
I bet the next Ops-related term will be ChatGPT-Ops ;-). Finally, an Ops term that has popped up in recent months is APIOps, but what does it mean, especially as APIs are not new and come in many different styles? APIOps is an approach that applies the principles of GitOps and DevOps to deploying APIs. Similar to DevOps, APIOps facilitates streamlined modification and automated deployment of API changes into production. Just like DevOps, automation is a key pillar of APIOps, but to be successful at APIOps, you must consider more than just your automation pipelines. You need to adopt the principles associated with CALMS (Culture, Automation, Lean, Measure, Share) to be successful.
Culture
You should treat your API as a product. This means you need to move beyond a purely technical view: an API should not just be seen as a Jira task, and it should not be the sole responsibility of software engineers. Your API should have a product manager assigned to help make adoption successful. Your product will have a roadmap, a lifecycle, and business success criteria.
Automation
DevOps teams use DORA (DevOps Research and Assessment) metrics to gauge their performance level and determine whether they fall into the category of "low performers" or "elite performers." Adopting DORA metrics will give you insight into the delivery performance of the CI/CD pipelines that get your API into production. The DORA metrics are:
Deployment Frequency: How often an organization successfully releases to production.
Lead Time for Changes: The amount of time it takes a commit to get into production.
Change Failure Rate: The percentage of deployments causing a failure in production.
Time to Restore Service: How long it takes an organization to recover from a failure in production.
Make sure that when you're looking at the DORA metrics, you include items from an API-centric perspective. For example, introducing a breaking change to your API contract in production should be counted in the Change Failure Rate metric, especially if the change is unmanaged. (A short sketch of how these four metrics can be computed from deployment records appears at the end of this piece.)
Lean
Success means adopting a lean approach to eliminate waste and focusing on delivering value to customers quickly and continuously. If a tree falls in the forest, does it make a sound? Similarly, if an API is not used in production, does anyone care? Don't rush into implementing the service behind your API; first, make sure the success criteria are known. Implementation should wait until the API has been reviewed and approved by its potential consumers, for example, through early feedback from consumers showing that the API will address their use case or pain point.
Measure
Technical KPIs are table stakes for any API program; these would include transactions per second, error rate, latency, and tracking of the SLA of the API that you're providing to your consumers. In addition, you need to include more business-level goals to move to the next level of measuring what really matters.
Below are some examples of what can be tracked:
RoI (return on investment) KPIs: For example, is your API helping to drive direct or indirect revenue growth, or cost reductions if you hit the nirvana of API reuse?
Consumption KPIs: What is the growth trend in your API traffic month over month, and does your API help grow the ecosystem of partners onboarded to your organization?
Engagement KPIs: Track the NPS (Net Promoter Score) of your API, or, since your API is your product, track retention and churn.
Share
Regardless of whether your API is targeted as private (consumed within an organization), partner (consumed by partners of your organization), or public (consumed by anybody interested in the API), you must have a vehicle to share your APIs and to receive feedback from your API consumers. This vehicle would be an internal API developer portal or a public marketplace where consumers can discover and onboard/register to your APIs in a self-service fashion. Just as importantly, for your API to evolve in the appropriate direction, your API consumers need to be able to provide feedback on it. By applying the above DevOps principles to the API lifecycle, APIOps can help organizations improve collaboration, reduce time to market, deliver better customer experiences, and ultimately achieve better business outcomes.
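As promised in the Automation section above, here is a minimal sketch of how the four DORA metrics could be computed from a list of deployment records; the record structure is made up for illustration and is not tied to any particular CI/CD tool:
Python
from datetime import datetime

# Hypothetical deployment records: when the change was committed, when it was
# deployed, whether the deployment caused a production failure, and how long
# recovery took if it did.
deployments = [
    {"committed_at": datetime(2023, 3, 1, 9), "deployed_at": datetime(2023, 3, 1, 15), "failed": False, "restore_minutes": 0},
    {"committed_at": datetime(2023, 3, 2, 10), "deployed_at": datetime(2023, 3, 3, 11), "failed": True, "restore_minutes": 45},
    {"committed_at": datetime(2023, 3, 6, 8), "deployed_at": datetime(2023, 3, 6, 12), "failed": False, "restore_minutes": 0},
]
period_days = 7  # window covered by the records above

# Deployment Frequency: successful releases per day over the period.
deployment_frequency = len(deployments) / period_days

# Lead Time for Changes: average time from commit to production, in hours.
lead_time_hours = sum(
    (d["deployed_at"] - d["committed_at"]).total_seconds() for d in deployments
) / len(deployments) / 3600

# Change Failure Rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to Restore Service: average recovery time for failed deployments, in minutes.
failures = [d["restore_minutes"] for d in deployments if d["failed"]]
time_to_restore_minutes = sum(failures) / len(failures) if failures else 0.0

print(deployment_frequency, lead_time_hours, change_failure_rate, time_to_restore_minutes)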
Table of Contents
Introduction
Installation of the LazyPredict Module
Implementing LazyPredict in a Classification Model
Implementing LazyPredict in a Regression Model
Conclusion
Introduction
The development of machine learning models is being revolutionized by the state-of-the-art Python package known as LazyPredict. By using LazyPredict, we can quickly create a variety of baseline models with little to no code, freeing up our time to choose the model that would work best with our data. One of the library's main benefits is that it makes model selection easier without requiring considerable parameter tuning. LazyPredict offers a quick and effective way to find and fit the best models to our data. Let's explore and learn more about the usage of this library in this article.
Installation of the LazyPredict Module
Installing the LazyPredict library is a pretty easy task. We need only one line of code, as we usually do for installing any other Python library:
Python
!pip install lazypredict
Implementing LazyPredict in a Classification Model
We'll utilize the breast cancer dataset from the sklearn package in this example. Now, let's load the data:
Python
from sklearn.datasets import load_breast_cancer
from lazypredict.Supervised import LazyClassifier

data = load_breast_cancer()
X = data.data
y = data.target
To choose the best classifier model, let's now deploy the "LazyClassifier" algorithm. These are the input parameters that apply to the class:
Python
LazyClassifier(
    verbose=0,
    ignore_warnings=True,
    custom_metric=None,
    predictions=False,
    random_state=42,
    classifiers='all',
)
Let's now apply it to our data and fit it:
Python
from lazypredict.Supervised import LazyClassifier
from sklearn.model_selection import train_test_split

# split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# build the LazyClassifier
clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)

# fit it
models, predictions = clf.fit(X_train, X_test, y_train, y_test)

# print the best models
print(models)
The code above outputs a table ranking all of the fitted models by their evaluation metrics. Then, we may conduct the following to examine these models' individual details:
Python
model_dictionary = clf.provide_models(X_train, X_test, y_train, y_test)
Next, use the name of the model that interests us (let's choose the best model) to determine precisely which steps were used:
Python
model_dictionary['LGBMClassifier']
Here, we can see that a SimpleImputer was used on the entire set of data, followed by a StandardScaler on the numerical features. There are no categorical or ordinal features in this dataset, but if there were, OneHotEncoder and OrdinalEncoder would have been used, respectively. The LGBMClassifier model receives the data after transformation and imputation. LazyClassifier's internal machine learning models are fitted and evaluated using the scikit-learn toolkit. When called, the LazyClassifier function automatically builds and fits a variety of models, including decision trees, random forests, support vector machines, and others, on our data. The models are then evaluated against a set of performance criteria, such as accuracy, recall, and F1 score. The training set is used for fitting, while the test set is used for evaluation. Following the models' fitting and evaluation, LazyClassifier offers a summary of the findings (the table described above), along with a list of the top models and performance metrics for each model.
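Assuming, as in recent LazyPredict versions, that the returned models summary is a pandas DataFrame indexed by model name, a short sketch like the following picks out the top-ranked model programmatically instead of by eye (the "F1 Score" column name is taken from LazyPredict's default output and may differ between versions):
Python
# "models" is the summary DataFrame returned by clf.fit above.
best_by_f1 = models.sort_values("F1 Score", ascending=False)
best_model_name = best_by_f1.index[0]
print("Best model by F1 Score:", best_model_name)

# The fitted pipeline for that model can then be looked up in the dictionary
# returned by clf.provide_models, exactly as shown above.
best_pipeline = model_dictionary[best_model_name]
print(best_pipeline)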
With no need for manual tweaking or model selection, you can quickly and simply evaluate the performance of many models and choose the best one for your data.
Implementing LazyPredict in a Regression Model
Using the "LazyRegressor" function, we can, once again, accomplish the same for regression models. Let's import a dataset that is suitable for a regression task (here, we use the Boston housing dataset; note that load_boston has been removed from recent scikit-learn releases, and a sketch using an alternative dataset appears at the end of this article). Let's now fit our data to the LazyRegressor:
Python
from lazypredict.Supervised import LazyRegressor
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import numpy as np

# load the data
boston = datasets.load_boston()
X, y = shuffle(boston.data, boston.target, random_state=0)
X = X.astype(np.float32)

# split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# fit the lazy object
reg = LazyRegressor(verbose=0, ignore_warnings=False, custom_metric=None)
models, predictions = reg.fit(X_train, X_test, y_train, y_test)

# print the results in a table
print(models)
The code above outputs a similar table ranking the fitted regression models. The following is a detailed examination of the best regression model:
Python
model_dictionary = reg.provide_models(X_train, X_test, y_train, y_test)
model_dictionary['ExtraTreesRegressor']
Here, we can see that a SimpleImputer was used on the entire set of data, followed by a StandardScaler on the numerical features. There are no categorical or ordinal features in this dataset, but if there were, OneHotEncoder and OrdinalEncoder would have been used, respectively. The ExtraTreesRegressor model receives the data after transformation and imputation.
Conclusion
The LazyPredict library is a useful resource for anybody involved in the machine learning industry. LazyPredict saves time and effort by automating the process of creating and assessing models, which greatly improves the effectiveness of the model selection process. Due to its capacity to fit and assess numerous models at once, LazyPredict offers a quick and simple method for comparing the effectiveness of several models and determining which model family is best for our data and problem. The following link has all the programming examples uploaded to the GitHub repository so that you can play around with the code according to your requirements. I hope you now have an intuitive understanding of the LazyPredict library and that these concepts will help you build some really valuable projects.
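As noted above, load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so the regression example will not run on recent versions. Here is a sketch of the same workflow using the California housing dataset instead:
Python
from lazypredict.Supervised import LazyRegressor
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
import numpy as np

# load an alternative regression dataset (downloaded on first use)
housing = fetch_california_housing()
X = housing.data.astype(np.float32)
y = housing.target

# split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# fit the lazy regressor and print the comparison table
reg = LazyRegressor(verbose=0, ignore_warnings=False, custom_metric=None)
models, predictions = reg.fit(X_train, X_test, y_train, y_test)
print(models)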
ChatGPT is an advanced language model created by OpenAI that uses deep learning algorithms to generate human-like responses to a wide range of natural language queries. It is capable of understanding and processing large amounts of text data, including news articles, books, and web pages. The model can learn from this data to generate responses to a wide range of questions and requests. One of the primary uses of ChatGPT is in the field of natural language processing (NLP). NLP involves the use of computer algorithms to understand and process human language. ChatGPT is one of the most advanced NLP models available today, and it has the ability to understand complex language patterns and generate highly accurate responses to a wide range of queries. One of the key advantages of ChatGPT is its ability to learn from large amounts of data. The model is trained on massive amounts of text data, which allows it to develop a deep understanding of language patterns and syntax. This training process enables ChatGPT to generate highly accurate and contextually relevant responses to user queries. Another advantage of ChatGPT is its ability to generate responses in a natural-sounding language. The model is capable of understanding the nuances of language, including idioms, colloquialisms, and cultural references. This means that the responses generated by ChatGPT are often indistinguishable from those generated by a human. ChatGPT can be used for a wide range of applications, including customer service, language translation, and content generation. For example, ChatGPT can be used to develop chatbots that can handle customer inquiries and provide support 24/7. ChatGPT can also be used to generate content for websites and social media platforms. This content can be tailored to the needs and preferences of the target audience, making it highly effective in driving engagement and conversions. In addition to its many advantages, ChatGPT also has some limitations. One of the main challenges of using ChatGPT is the need for large amounts of training data. The model requires a significant amount of data to develop a deep understanding of language patterns, which can make it difficult to train on smaller datasets. Another challenge of using ChatGPT is the potential for bias. Like all machine learning models, ChatGPT is only as good as the data it is trained on. If the training data contains biases, those biases can be reflected in the model's responses. This can lead to unintended consequences, such as perpetuating stereotypes or discrimination. To mitigate these challenges, it is important to carefully curate the training data used to train ChatGPT. This may involve using diverse datasets that are representative of a wide range of perspectives and experiences. It may also involve implementing techniques such as data augmentation, which involves artificially increasing the size of the training data by applying various transformations to the existing data. Here are the pros and cons of ChatGPT: Pros: ChatGPT is highly advanced and can generate highly accurate and contextually relevant responses to a wide range of queries. The model is capable of understanding the nuances of language, including idioms, colloquialisms, and cultural references. ChatGPT can learn from large amounts of data, enabling it to develop a deep understanding of language patterns and syntax. The model can be used for a wide range of applications, including customer service, language translation, and content generation. 
ChatGPT can generate responses in a natural-sounding language, making it highly effective in driving engagement and conversions. Cons: ChatGPT requires large amounts of training data to develop a deep understanding of language patterns, which can make it difficult to train on smaller datasets. Like all machine learning models, ChatGPT is only as good as the data it is trained on, and if the training data contains biases, those biases can be reflected in the model's responses. ChatGPT can be expensive to use, as it requires significant computing power to generate responses. The model may not be suitable for all applications, as it may not be able to handle highly specialized or technical language. There may be ethical considerations when using ChatGPT, particularly around issues such as bias and privacy. In conclusion, ChatGPT is an advanced language model that has the ability to generate highly accurate and contextually relevant responses to a wide range of queries. Its ability to learn from large amounts of data and generate responses in a natural-sounding language makes it a powerful tool for a wide range of applications, including customer service, language translation, and content generation. However, like all machine learning models, ChatGPT has some limitations, including the need for large amounts of training data and the potential for bias. To effectively leverage ChatGPT, it is important to carefully curate the training data and implement appropriate techniques to mitigate bias.
Let's start with an example: suppose your employer uses an AI system to analyze employee data and make decisions about hiring and promotion. In that case, there's a chance it could use your race or gender as one of its criteria for making those decisions. If this happens without your knowledge or consent, especially if you don't agree with how those attributes are being used, then there could be legal implications for both the company and the employees. This problem has been at least partially addressed by restricting access to certain types of sensitive information, like faces and gender, while still allowing access through other channels, such as text search terms or GPS coordinates. However, these solutions still don't fully solve all privacy concerns, since there will always be ways around them; for example, it is not possible to guarantee that pictures are taken without capturing any faces at all.
Why It Matters
Now imagine if Facebook had been able to use facial recognition software on all those photos uploaded over the years; it could easily build up an extensive database containing thousands upon thousands of faces without ever asking permission from any users. Likewise, a government agency or corporation could use an AI system like this one at airports or checkpoints without asking permission from anyone who might pass through those checkpoints, or consider how many times we've seen companies like Facebook ask us for access so that "our friends" can see what we're doing (and vice versa). This is particularly concerning because many applications let users share their data with friends and family members. In these cases, sharing personal information may be voluntary but can still allow outsiders access, because default privacy settings in most apps make sensitive data publicly available or require affirmative action from users before anything changes (such as turning off location services).
What Can Be Done?
AI solutions should be designed in such a way that they can learn without being able to identify individuals by name or other direct identifiers. This is important because it ensures that your data remains anonymous and private while still allowing the machine learning algorithm to make accurate predictions. An example of this kind of AI system is one that predicts whether someone will be diagnosed with a specific disease in the future based on their genome sequence data. In this case, before making any predictions about an individual's likelihood of developing cancer or Alzheimer's disease based on their genetic makeup, all personal information should be removed from the dataset (e.g., names, religion, nationality, and so on). This allows researchers to study trends across large populations without compromising anyone's privacy. These types of datasets are called "de-identified," meaning any directly identifying information about the individuals in the dataset is removed before it is used. De-identification is a privacy protection measure that allows researchers and other individuals with access to de-identified datasets to analyze the information without compromising the privacy of those represented within it.
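To make the de-identification step described above concrete, here is a minimal sketch with made-up column names, showing direct identifiers being stripped from a dataset before it is passed on to analysis or model training:
Python
import pandas as pd

# Hypothetical patient records containing both direct identifiers and study data.
records = pd.DataFrame([
    {"name": "Jane Doe", "phone": "555-0100", "nationality": "US", "age": 54, "gene_marker": 0.82, "diagnosis": 1},
    {"name": "John Roe", "phone": "555-0199", "nationality": "CA", "age": 61, "gene_marker": 0.35, "diagnosis": 0},
])

# Direct identifiers to drop before analysis; indirect identifiers (age, zip code,
# birthdate, etc.) may still need further treatment, as discussed below.
direct_identifiers = ["name", "phone", "nationality"]
de_identified = records.drop(columns=direct_identifiers)
print(de_identified)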
Now, de-identifying is not the same as anonymizing data: de-identified datasets may still contain indirect identifiers like zip codes, birthdates, or phone numbers, which could be used to identify individuals if they were linked together with other sources (such as census records). For example, your 10-digit mobile number may be replaced with a different set of digits, but the replacement would still be consistent across the dataset.
Approaches for Making AI More Private and Secure
Differential Privacy
Differential privacy is a widely used technique for privacy-preserving ML. The goal of differential privacy is to protect the privacy of individual data points in a dataset by adding noise to the data. The noise is carefully calibrated so that the ML model's output is not affected significantly while it becomes difficult to identify any individual's data. The amount of noise added is controlled by a parameter called epsilon: the lower the epsilon value, the more noise is added and the greater the privacy protection. However, as the amount of noise increases, the accuracy of the ML model decreases. Therefore, finding an optimal value for epsilon is crucial to achieving a balance between privacy and accuracy.
Federated Learning
Federated learning trains a shared model across many devices or sites: each participant trains locally on its own data, and only model updates, not the raw data, are sent back to a central aggregator. Various applications have used federated learning, including natural language processing, image classification, and recommendation systems. Federated learning is useful in situations where data privacy is crucial, as well as in situations where data is not easily transferable to a central location. For example, it can be used for training ML models on mobile devices, where users may not want to share their data with a central server.
Homomorphic Encryption
Homomorphic encryption allows the system to perform operations on encrypted data without decrypting it. This technique can be used to train ML models on encrypted data, ensuring that the data remains private, and it can be applied to various ML models, including linear regression, decision trees, and neural networks. It is particularly useful when working with sensitive data that cannot be shared, such as medical records or financial data. However, homomorphic encryption is computationally expensive, which makes it less practical for some applications.
Libraries to Build Privacy-Preserving Models
Differential privacy in TensorFlow
Federated Learning in PyTorch
MPC in TensorFlow
Conclusion
Privacy is becoming an increasingly important concern in the field of AI. Fortunately, there are several techniques available for developing privacy-preserving models. Differential privacy, federated learning, and homomorphic encryption are just a few examples of these techniques. With the right approach, we can continue to make progress in the field of machine learning while also addressing privacy concerns. I hope this article gives you a better understanding of why it is essential to have privacy-preserving systems and how best we can protect individual privacy while developing more robust and advanced artificial intelligence systems.
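As a concrete illustration of the differential privacy technique described above, here is a minimal sketch of the Laplace mechanism applied to a simple count query; it is a toy example, and production systems should rely on a vetted library such as those listed above:
Python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Return a differentially private count of records matching predicate.

    The true count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices. Smaller
    epsilon means more noise and stronger privacy protection.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy dataset: ages of individuals.
ages = [23, 35, 41, 52, 29, 63, 47]

print(laplace_count(ages, lambda age: age > 40, epsilon=0.5))  # noisier, more private
print(laplace_count(ages, lambda age: age > 40, epsilon=5.0))  # less noisy, less private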
Last month I shared my first experience using ChatGPT and GitHub Copilot. I was impressed, but not overwhelmed. AI suggestions can be accurate, but only to the extent that your question is specific and to the point. It can’t read your mind, but it can find solutions to well-defined tasks that have been solved before – and for which there may already exist a perfectly good library solution. Knowing what to build remains the hardest part of all and we are nowhere near automating the entire journey from idea to code. I have two more points to make on the topic. First, since we as programmers are required to stay in the driver’s seat for a while, we had better stay sharp. Secondly, I believe the focus should be less on solutions that convert ideas to code. I personally can’t wait for useable AI that can do the reverse: scan an undocumented tangle of eighty microservices and explain to me in clear English what it is all about, because nobody bothered to keep documentation up to date. The Holy Grail: From Idea to Code New programming languages move ever further away from the hard math of moving bytes. They free up head space to deal with the challenge of modelling the real world in executable code at an ever-higher level of abstraction. But even no-code solutions or graphical workflow engines are still programming languages. They only take a different form of coding skills, which is bad news for math talents. Eventually, you start a dialogue with the AI, explain the task you want to be automated, and it will do it, without you the ‘programmer’ seeing (or caring about) how it is implemented. That perfect stage is the equivalent of a fully autonomous self-driving car. You tell it where you want to go and it takes you there. It’s so reliable you don’t need a steering wheel or accelerator. In fact, it’s only right you can no longer interfere. You wouldn’t know how to anymore. We’re not there yet. The legal minefield surrounding liability is a bigger hurdle than the state-of-the-art, which does a better job than many drivers. Until we can command our own robot chauffeur, we will get increasingly sophisticated assistance that yet requires a fully competent driver. Paradoxically, this doesn’t make driving safer. The more trustworthy the AI, the more tempted you are to disconnect from surrounding traffic. The rare moment that you need to spring into action, you’ll find your skills have atrophied (if you were paying attention at all). The analogy with AI-assisted software writing is obvious. As humans, we can only confidently surrender control if the process is flawless and predictable, like a compilation. The idea as expressed in human language and translated into executable software should be like Java code as it is converted into byte code by the compiler. Few programmers know in detail how that works. Most don’t have a clue. They can afford not to know because it’s a robust, predictable process. Neither Co-Pilot nor Autopilot Not to be pedantic, but GitHub Copilot is really an autopilot. In aviation, a co-pilot is a fully capable human being. The plane’s autopilot is a collection of advanced tools that help but don’t replace the pilot. But they do have the power to make you lazy. Like the driver and pilot, the programmer cannot afford to lose her sharpness. She must have good judgment over the suggestions offered. Traditional autocomplete has made us lazy. New AI flavours will make us even more so. 
If you are not aware of thread safety and don’t know what a re-entrant lock, concurrent collection or atomic integer is, you can’t instruct the AI to create one or recognize that you need one. I’m still preparing for the OCP-17 exam. I’m thrown between my conviction that as a self-respecting senior developer, you should know this stuff, and my annoyance over the subject matter and line of questioning. What do I learn by poring over convoluted snippets to spot a missing throws declaration? Why should I know the exact signatures of the major static methods in the Files class? You use it, or you lose it, and I know I’ll quickly forget much of it. My brain has a weird efficiency. What really sticks are movie quotes and song lyrics. Maybe I should set the NIO API to music. But the folks at Oracle do have a point with their carefully confusing little train wrecks. It hones your powers of perception, and it’s a recognition of the sobering fact that you spend many hours of your career reading terrible code, some of it your own. Too many books and courses teach you how to write new code. Not many appreciate the value of learning good maintenance skills: making piecemeal improvements and bug fixes that don’t crash the system. Nobody likes doing it, so let the AI have a crack at it. If it’s so smart, let it plough through stack traces in server logs and come back with the root cause for that outage. That would be impressive. From Implementation Back to Intent How many rewrites of legacy systems could have been prevented if we hadn’t lost the overview of how the parts fit together? But nobody bothered to keep the documentation up to scratch. (If you think the documentation for large enterprise systems is expensive, try a full rewrite). The same applies to the regular chore of updating libraries and migrating sources to newer runtime versions. Don’t do it by hand, use an initiative like OpenRewrite. Getting from implementation back to intent (i.e., the documentation) is the next big win. We don’t need more code. We need to better understand what’s already there, with the help of AI. I’m not talking about auto-generated API docs for methods that are already self-documenting. That never conveys the intent of the big picture. I’m talking about user-friendly manuals for large enterprise systems at multiple levels of complexity. Let AI distil the intent of such a web of interconnected services from the code of its constituent parts. Turn it into understandable diagrams and explain it in plain English. Don’t cram everything into one sequence or activity diagram. Use a divide-and-conquer fashion, branching out from the big picture to individual details. I feed it a data model like this: ... and it returns something like this: This data model represents a lending library for books. It stores book titles identified by their ISBN and keeps biographical data on each author. A book title can have zero or more physical copies. A book loan is registered with the id of the member, the book copy, the current date, and the date returned. Trivial, of course. But now imagine it a thousand times more complicated and the human brain is no longer equipped to create a bird's eye view. They say developers love to code and hate writing documentation. Maybe Microsoft should realign priorities with future versions of GitHub Copilot.
ChatGPT has taken the world by storm, and this week, OpenAI released the ChatGPT API. I’ve spent some time playing with ChatGPT in the browser, but the best way to really get on board with these new capabilities is to try building something with it. With the API available, now is that time. This was inspired by Greg Baugues’s implementation of a chatbot command line interface (CLI) in 16 lines of Python. I thought I’d start by trying to build the same chatbot but using JavaScript. (It turns out that Ricky Robinett also had this idea and published his bot code here. It’s pleasing to see how similar the implementations are!) The Code It turns out that Node.js requires a bit more code to deal with command line input than Python, so where Greg’s version was 16 lines, mine takes 31. Having built this little bot, I’m no less excited about the potential for building with this API though. Here’s the full code. I’ll explain what it is doing further down. import { createInterface } from "node:readline/promises"; import { stdin as input, stdout as output, env } from "node:process"; import { Configuration, OpenAIApi } from "openai"; const configuration = new Configuration({ apiKey: env.OPENAI_API_KEY }); const openai = new OpenAIApi(configuration); const readline = createInterface({ input, output }); const chatbotType = await readline.question( "What type of chatbot would you like to create? " ); const messages = [{ role: "system", content: chatbotType }]; let userInput = await readline.question("Say hello to your new assistant.\n\n"); while (userInput !== ".exit") { messages.push({ role: "user", content: userInput }); try { const response = await openai.createChatCompletion({ messages, model: "gpt-3.5-turbo", }); const botMessage = response.data.choices[0].message; if (botMessage) { messages.push(botMessage); userInput = await readline.question("\n" + botMessage.content + "\n\n"); } else { userInput = await readline.question("\nNo response, try asking again\n"); } } catch (error) { console.log(error.message); userInput = await readline.question("\nSomething went wrong, try asking again\n"); } } readline.close(); When you run this code, it looks like this: Let’s dig into how it works and how you can build your own. Building a Chatbot You will need an OpenAI platform account to interact with the ChatGPT API. Once you have signed up, create an API key from your account dashboard. As long as you have Node.js installed, the only other thing you’ll need is the openai Node.js module. Let’s start a Node.js project and create this CLI application. First create a directory for the project, change into it, and initialize it with npm: mkdir chatgpt-cli cd chatgpt-cli npm init --yes Install the openai module as a dependency: npm install openai Open package.json and add the key "type": "module" to the configuration, so we can build this as an ES module which will allow us to use top level await. Create a file called index.js and open it in your editor. Interacting With the OpenAI API There are two parts to the code: dealing with input and output on the command line and dealing with the OpenAI API. Let’s start by looking at how the API works. First we import two objects from the openai module, the Configuration and OpenAIApi. The Configuration class will be used to create a configuration that holds the API key, you can then use that configuration to create an OpenAIApi client. 
import { env } from "node:process"; import { Configuration, OpenAIApi } from "openai"; const configuration = new Configuration({ apiKey: env.OPENAI_API_KEY }); const openai = new OpenAIApi(configuration); In this case, we’ll store the API key in the environment and read it with env.OPENAI_API_KEY. To interact with the API, we now use the OpenAI client to create chat completions for us. OpenAI’s text-generating models don’t actually converse with you, but are built to take input and come up with plausible-sounding text that would follow that input, a completion. With ChatGPT, the model is configured to receive a list of messages and then come up with a completion for the conversation. Messages in this system can come from one of 3 different entities, the “system”, “user,” and “assistant.” The “assistant” is ChatGPT itself, the “user” is the person interacting, and the system allows the program (or the user, as we’ll see in this example) to provide instructions that define how the assistant behaves. Changing the system prompts for how the assistant behaves is one of the most interesting things to play around with and allows you to create different types of assistants. With our openai object configured as above, we can create messages to send to an assistant and request a response like this: const messages = [ { role: "system", content: "You are a helpful assistant" }, { role: "user", content: "Can you suggest somewhere to eat in the centre of London?" } ]; const response = await openai.createChatCompletion({ messages, model: "gpt-3.5-turbo", }); console.log(response.data.choices[0].message); // => "Of course! London is known for its diverse and delicious food scene..." As the conversation goes on, we can add the user’s questions and assistant’s responses to the messages array, which we send with each request. That gives the bot history of the conversation, the context for which it can build further answers on. To create the CLI, we just need to hook this up to user input in the terminal. Interacting With the Terminal Node.js provides the Readline module which makes it easy to receive input and write output to streams. To work with the terminal, those streams will be stdin and stdout. We can import stdin and stdout from the node:process module, renaming them to input and output to make them easier to use with Readline. We also import the createInterface function from node:readline import { createInterface } from "node:readline/promises"; import { stdin as input, stdout as output } from "node:process"; We then pass the input and output streams to createInterface and that gives us an object we can use to write to the output and read from the input, all with the question function: const readline = createInterface({ input, output }); const chatbotType = await readline.question( "What type of chatbot would you like to create? " ); The above code hooks up the input and output stream. The readline object is then used to post the question to the output and return a promise. When the user replies by writing into the terminal and pressing return, the promise resolves with the text that the user wrote. Completing the CLI With both of those parts, we can write all of the code. Create a new file called index.js and enter the code below. 
We start with the imports we described above: import { createInterface } from "node:readline/promises"; import { stdin as input, stdout as output, env } from "node:process"; import { Configuration, OpenAIApi } from "openai"; Then we initialize the API client and the Readline module: const configuration = new Configuration({ apiKey: env.OPENAI_API_KEY }); const openai = new OpenAIApi(configuration); const readline = createInterface({ input, output }); Next, we ask the first question of the user: “What type of chatbot would you like to create?” We will use the answer to create a “system” message in a new array of messages that we will continue to add to as the conversation goes on. const chatbotType = await readline.question( "What type of chatbot would you like to create? " ); const messages = [{ role: "system", content: chatbotType }]; We then prompt the user to start interacting with the chatbot and start a loop: while the user input is not equal to the string “.exit”, we keep sending that input to the API. If the user enters “.exit”, the program ends, like in the Node.js REPL. let userInput = await readline.question("Say hello to your new assistant.\n\n"); while (userInput !== ".exit") { // loop } readline.close(); Inside the loop, we add the userInput to the messages array as a “user” message. Then, within a try/catch block, we send it to the OpenAI API. We set the model as “gpt-3.5-turbo”, which is the underlying name for ChatGPT. When we get a response from the API, we get the message out of the response.data.choices array. If there is a message, we store it as an “assistant” message in the array of messages and output it to the user, waiting for their input again using readline. If there is no message in the response from the API, we alert the user and wait for further user input. Finally, if there is an error making a request to the API, we catch the error, log the message, and tell the user to try again. while (userInput !== ".exit") { messages.push({ role: "user", content: userInput }); try { const response = await openai.createChatCompletion({ messages, model: "gpt-3.5-turbo", }); const botMessage = response.data.choices[0].message; if (botMessage) { messages.push(botMessage); userInput = await readline.question("\n" + botMessage.content + "\n\n"); } else { userInput = await readline.question("\nNo response, try asking again\n"); } } catch (error) { console.log(error.message); userInput = await readline.question( "\nSomething went wrong, try asking again\n" ); } } Put that all together and you have your assistant. The full code is at the top of this post or on GitHub. You can now run the assistant by passing it your OpenAI API key as an environment variable on the command line: OPENAI_API_KEY=YOUR_API_KEY node index.js This will start your interaction with the assistant, starting with it asking what kind of assistant you want. Once you’ve declared that, you can start chatting with it.
Experimenting Helps Us to Understand
Personally, I’m not actually sure how useful ChatGPT is. It is clearly impressive; its ability to return text that reads as if it was written by a human is incredible. However, it returns content that is not necessarily correct, regardless of how confidently it presents that content. Experimenting with ChatGPT is the only way that we can try to understand what it is useful for; thus, building a simple chatbot like this gives us grounds for that experiment.
Learning that the system prompt can give the bot different personalities and make it respond in different ways is very interesting. You might have heard, for example, that you can ask ChatGPT to help you with programming, but you could also specify a JSON structure and effectively use it as an API (see the sketch at the end of this article). As you experiment with that, you will likely find that it shouldn't be treated as an information API, but rather as something that can understand your natural language input and turn it into a structured JSON object. To me, this is exciting, as it means ChatGPT could help create more natural voice assistants that translate meaning from speech better than the existing crop, which expects commands to be given in a more exact manner. I still have experimenting to do with this idea, and having this tool gives me that opportunity.

This Is Just the Beginning

If experimenting with this technology is how we come to understand what we can build with it, and what we should or should not build with it, then making it easier to experiment is the next goal. My next goal is to expand this tool so that it can save, interact with, and edit multiple assistants, so that you can continue to work with them and improve them over time. In the meantime, you can check out the full code for this first assistant on GitHub, and follow the repo to keep up with improvements.
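As a footnote to the JSON idea mentioned above, here is a minimal sketch of what that kind of system prompt might look like, reusing the openai client from the assistant code. The prompt wording, the JSON shape, and the example request are illustrative assumptions, and the model's reply still needs to be validated because it is only plain text.

// A system prompt that turns the assistant into a structured-data extractor
const extractionMessages = [
  {
    role: "system",
    content:
      "Extract the user's request and reply ONLY with JSON in the form " +
      '{"place": string, "cuisine": string, "partySize": number}.',
  },
  {
    role: "user",
    content: "Find me a table for four at an Italian place near Soho",
  },
];

const extractionResponse = await openai.createChatCompletion({
  messages: extractionMessages,
  model: "gpt-3.5-turbo",
});

// The reply is plain text, so parse it defensively: the model may ignore the instruction
let parsed = null;
try {
  parsed = JSON.parse(extractionResponse.data.choices[0].message.content);
} catch (error) {
  console.log("The model did not return valid JSON; ask again or fall back");
}
console.log(parsed);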
This is the fifth piece in a series on developing XR applications and experiences with Oracle. It focuses on XR applications in healthcare, Vision AI, training and collaboration, and messaging, along with other topics such as multi-platform development. Find the links to the first four articles below:

Develop XR With Oracle, Ep 1: Spatial, AI/ML, Kubernetes, and OpenTelemetry
Develop XR With Oracle, Ep 2: Property Graphs and Data Visualization
Develop XR With Oracle, Ep 3: Computer Vision AI and ML
Develop XR With Oracle, Ep 4: Digital Twins and Observability

As with the previous posts, I will specifically show applications developed with Oracle database and cloud technologies using HoloLens 2, Oculus, iPhone, and PC, written using the Unity platform and OpenXR (for multi-platform support), Apple Swift, and WebXR. Throughout the blog, I will reference the corresponding demo video below.

Extended Reality (XR) and Healthcare

I will refer the reader to the first article in this series (again, the link is above) for an overview of XR. I will not go in-depth into the vast array of technology involved in the healthcare sector but will instead focus on the XR enablement of these topics and the use of Oracle tech to this end, particularly as Oracle has an increased focus in this area via the Cerner acquisition and other endeavors. It is well known that telehealth has grown tremendously since the pandemic, peaking at 78 times its pre-pandemic rate and even now leveling off at 38 times that rate. These and other numbers and their impact are well documented in numerous publications, such as this McKinsey report and the VR/AR Association's Healthcare Forum, which will be referenced in this article. One only needs to have heard of the use of XR for live surgeries at Johns Hopkins back in 2021 to understand the extent to which XR will aid this industry, so again, I will not delve too deeply into attempting to prove this, but will proceed to give some examples and ideas involving Oracle database and cloud technologies.

Vision AI and Contextual Intelligence: Real-Time Analysis and Diagnosis

The first example involves the use of the HoloLens mixed-reality headset (though other headsets/devices could be used) and Oracle's Vision AI service. It should not be much of a stretch of the imagination to envision healthcare workers wearing XR devices that provide additional functionality and information to them (and indeed, many already are). Today, doctors, dentists, etc., constantly cross-reference a computer screen with a patient to check information and orient what they see (be it an X-ray, MRI, etc.), and they often wear head-mounted lights, monocular magnifying glasses, and the like. The XR solutions discussed here are mere evolutions of that. More details on this application can be found in the research blog here, but the basic details of how it works are described here. The application running on the HoloLens takes pictures with its built-in camera at regular intervals, using the wearer's field of vision as a reference. This provides a hands-free experience that is, at the very least, more convenient; in many situations, such as the operating room, where the user's hands must be free to interact with the real (or XR) world, it is the more practical solution.
This also means the system can pick up contextual information that the practitioner may not be aware of or have access to, gathering and processing that information quickly without the practitioner having to explicitly request it, thus saving time as well. This leads to the optimization that mixed reality provides, an exciting byproduct of immersion. The HoloLens then sends these pictures to OCI Object Storage via secure REST calls, where they can be conveniently accessed by the OCI Vision AI service directly and also stored and/or accessed in the database. Several different approaches and architectures can be used from this point to conduct the logic and calls to the Oracle Vision AI APIs for processing the images sent by the XR device. As far as languages used, initial versions were written using the OCI CLI, Java, and Python, with a final Java GraalVM native image version being used. Also, because the Java GraalVM native image starts almost instantaneously and the service runs a short-lived routine, it is a good candidate for a serverless function. Optionally, the OCI Notification Service can listen for Object Storage changes and call serverless functions as well. The Java service receives a notification of the image upload to Object Storage and conducts a series of actions as follows:

Conducts an OCI Vision AI service API call backed by an X-ray object detection model and provides the location of the image sent by the HoloLens.
Receives a reply from the object detection model with the percentage chance of an X-ray being in the image and its bounding coordinates.
Crops the original image using the bounding coordinates.
Conducts another OCI Vision AI service API call backed by an X-ray classification model and provides it the cropped image of the X-ray.
Receives a reply from the image classification model with the percentage chance of the X-ray containing signs of abnormalities/pneumonia.

The HoloLens application receives this reply and notifies the wearer with an audible notification (this is configurable and can also be visual). In the case of this application, the information includes the picture of the cropped X-ray with its discovered details listed, stored in a virtual menu located on the wrist and viewable by the wearer only. This approach avoids interrupting the wearer, though it is also possible to overlay the results on the real-life X-ray from which they were derived. Vision AI and XR are a natural match for solutions in a number of areas, including both healthcare workers, as shown here, and people with conditions such as Parkinson's, autism, Alzheimer's, vision and hearing impairment, etc. (e.g., applications/solutions that can be used in this space are described in this blog).

Motion and Sentiment Study: Real-Time Face and Body Tracking Analytics

Today, 95% of healthcare facilities provide remote treatment and rehabilitation. XR technologies can be used to help patients better understand their conditions and treatment options, which can help them feel more informed and empowered in their healthcare decisions. The next example was implemented on the iPhone in Swift, using Apple's RealityKit for body tracking. The joint coordinates of a model movement are recorded and sent to the Oracle database via REST calls to ORDS. A person (whether a patient, an athlete, etc.)
using the application attempts to perform the same movement, and their joint coordinates are compared to those of the model movement stored in the database. If the allowable deviation/delta of the movements is exceeded, the corresponding joints and bones are shown in red rather than green. This feedback is given in real time to measure progress (e.g., after a rotator cuff or other surgery) and/or to allow the user to adjust their movement to match the control model, as well as to work on balance and coordination. These movements can, in turn, be analyzed (more on XR and Oracle Analytics in an upcoming blog), replayed, manipulated to carry out simulations, etc. These use cases, of course, extend into the sports, entertainment, and other sectors as well.

Digital Twins + Doubles and Multiplayer/Participant Messaging: Training and Collaboration

A recent study reported by Unity shows that approximately 94% of companies using real-time 3D technology find it valuable for staff training and are using it to create interactive guided learning experiences. This applies across industries, including manufacturing, transportation, healthcare, retail, and many more, and some trends in the area (many of which coincide with what is shown in this blog) are discussed here. XR technologies can be used to create realistic simulations of medical scenarios, allowing healthcare professionals to practice procedures and techniques in a safe and controlled environment. For example, medical students and surgeons can use XR simulations to practice surgeries with tactile sensations that give the feedback of real surgery, and nurses can use AR simulations to practice administering medications. This can help healthcare professionals gain valuable experience without risking patient safety. In addition to teaching, XR provides a unique ability for collaboration between individuals in different locations and specialties, creating a shared virtual space where multiple users can interact with each other and with virtual objects. There are several different techniques and software options available for creating XR multiplayer/participant training and collaboration, including XR conferencing software and metaverses, Photon and others like it, REST, WebSockets, and different types of messaging. XR conferencing software and platforms, such as Alakazam, are becoming increasingly popular and allow multiple users to participate in virtual meetings, events, training sessions, etc. Photon is a platform that allows developers to create multiplayer games and applications using Unity, Unreal Engine, and other game engines, and it can easily be installed on and take advantage of Oracle Cloud compute (including NVIDIA GPUs). It is perhaps the most famous of such platforms, but there are others. WebSockets are a protocol for real-time communication between web clients and servers and can be a faster and more efficient option for such use cases than REST, though REST calls are a simple, viable option in many cases as well, and more APIs are available via REST in general than via any of the other methods. Messaging is another technique that can be used to create XR training and collaboration experiences. Systems such as Kafka and JMS offer both pub/sub (multi-consumer topics) and producer/consumer (single-consumer topics) and are very flexible for different training and collaboration needs. In the training application shown in the video, I used a number of the techniques above but focused on Oracle's TxEventQ messaging engine (formerly known as AQ).
This is a very powerful and unique offering for a number of reasons, a key one being its ability to conduct database work and messaging work in the same local transaction. This is unique and perfectly suited to microservices, as it provides a transactional outbox pattern and exactly-once message delivery, so there is no message loss and no need for the developer to write deduplication logic. This may not be necessary for conventional gaming or movie streaming, but it is a must for mission-critical systems, and it offers some extremely interesting possibilities for XR, where the ability to reliably store a shared (3D) object (especially one that is dynamic or created via generative AI) along with the interactions/messages that various participants have made upon it is a very powerful tool. That is precisely what is done in the app shown in the video. The collaborative session is not only recorded in 3D for viewing as a video, but the actual objects and their interactions by participants are recorded in the database and available for playback, so they can be inspected and manipulated later on. This makes it possible to do deeper learning and run further models (AI or otherwise), simulations, scenarios, etc., by tweaking the objects and interactions, much like playbooks. With the development of more advanced technology and more powerful devices, XR training will become more prevalent in the near future, making the training experience more immersive and interactive. One other small note: the app in the video shows healthcare workers as full 3D volumetric video captures made professionally in a studio, but also one that is a simple 2D video with the green-screen alpha channel removed via a Unity shader, providing a similar holographic effect with minimal effort (the quality could be better with more effort regarding lighting, etc.). A Zoom virtual green screen could be used to this end, as could free assets with animation done in Blender. Unity, Zoom, and Blender are all free, and the app uses the Oracle always-free Autonomous Database, making the solution very accessible as far as software and cloud costs go.

Data Ecosystem: Telehealth and Virtual Health Centers

A virtual healthcare center or hospital is a healthcare facility that provides medical services through digital channels such as video conferencing, online chat, remote monitoring, and, increasingly, XR technologies. These digital tools allow patients to access healthcare services from the comfort of their homes or remote locations without the need to physically visit a hospital or clinic. The concept of a virtual healthcare center is becoming increasingly popular due to the growing demand for telemedicine services, especially in the wake of the COVID-19 pandemic. A virtual healthcare center can provide a wide range of medical services, such as primary care, specialty consultations, diagnostic tests, and prescription refills, and it offers convenience and flexibility to patients, who can access medical services from anywhere and at any time. This can be especially beneficial for patients with mobility issues, those living in rural areas, or patients who need to consult with a specialist who is not available in their local area. It can also reduce healthcare costs for both patients and healthcare providers. By using remote consultations and monitoring, healthcare providers can reduce the need for expensive in-person consultations and hospital stays.
Patients can also save money on transportation costs and time off from work. A virtual healthcare center can improve the quality of care for patients. With remote monitoring, patients can receive personalized and continuous care, with healthcare providers able to monitor their health in real time and intervene quickly if necessary. This can result in better health outcomes and reduced hospital readmissions. However, there are also some challenges to implementing virtual healthcare centers. These include issues around data privacy and security, as well as the need for adequate internet access and digital literacy among patients. In addition, some patients may still prefer traditional in-person care, and, though they are decreasing in number, there are limitations to what medical services can be provided remotely. The concept of a virtual healthcare center has the potential to revolutionize the way healthcare is delivered, with benefits for both patients and healthcare providers. However, careful consideration and planning are needed to ensure that the implementation of virtual healthcare centers is safe, effective, and equitable for all patients. Oracle, with its acquisition of Cerner and increased focus on the future of healthcare, is in a one-of-a-kind position to facilitate such XR solutions through its ability to "Deliver better health insights and human-centric experiences for patients, providers, payers, and the public. Oracle Health offers the most secure and reliable healthcare solutions, which connect clinical, operational, and financial data to improve care and advance decision-making around health and well-being." (Oracle Health page). Larry Ellison made this clear at his Oracle OpenWorld presentation, where healthcare was presented as the priority for Oracle's future, and earlier in the year when he stated, "Together, Cerner and Oracle have all the technology required to build a revolutionary new health management information system in the cloud."

Mental Health

Over 20% of American adults experience mental illness, over 2.5M youth struggle with severe depression, and 800,000 people worldwide die by suicide each year due to mental illness. Simultaneously, there is a growing deficit of mental health professionals, a nationwide issue that is particularly severe for minors and adolescents. According to the U.S. Department of Health and Human Services, the country is expected to have a shortage of 10,000 mental health professionals by 2025. Mental health is one of the areas where XR has been most widely researched and proven extremely effective as a way to deliver therapies and treatments for conditions such as anxiety, depression, and PTSD. For example, VR exposure therapy can be used to help patients confront and overcome their fears by exposing them to simulations of the things they are afraid of in a controlled and safe environment. Alongside Cognitive Behavioral Therapy (CBT), XR can be used to create immersive, meditative experiences that promote mindfulness, relaxation, and stress reduction. For example, VR environments can simulate peaceful and calming natural settings, such as beaches, forests, mountains, or space. Children, the elderly, and others who may spend prolonged periods in hospitals are using XR to explore the world and socialize.
Using XR and teletherapy, therapists can create virtual environments that simulate in-person therapy sessions, providing a more immersive and personalized experience while reducing the inhibitions of the patient and even allowing the use of AR lenses and avatars for privacy. NLP (natural language processing) sentiment analysis, such as that provided by the corresponding Oracle AI service, can interpret emotions and intent from verbal communication. With advances in facial recognition and facial sentiment analysis, it is now possible to detect, with great accuracy, the emotions a person is feeling and/or expressing non-verbally (the same goes for body/gesture tracking). Neuro-technology and neural interfaces, in coordination with XR, can now interpret human intent and measure emotion, providing even greater insight into both. And, of course, the world is by now quite familiar with the accessibility and simplicity of AI interfaces such as ChatGPT to facilitate this. As with the X-ray example given earlier, these can, in some situations, provide greater and more consistent accuracy and speed than a human, but they can also act as assisting technologies for healthcare workers. More examples of this will appear in upcoming blogs.

Additional Thoughts

I have given some ideas and examples of how healthcare and XR can be used together and facilitated by Oracle. I look forward to putting out more blogs on this topic and other areas of XR with Oracle Cloud and Database soon. Please see my other publications for more information on XR and Oracle cloud and database, as well as various topics around microservices, observability, transaction processing, etc., and this recent blog about what AR is. Also, please feel free to contact me with any questions or suggestions for new blogs and videos, as I am very open to suggestions. Thanks for reading and watching.
This is an article from DZone's 2023 DevOps Trend Report. For more: Read the Report

Monitoring and managing a DevOps environment is complex. The volume of data generated by new distributed architectures (such as Kubernetes) makes it difficult for DevOps teams to effectively respond to customer requests. The future of DevOps must therefore be based on intelligent management systems. Since humans are not equipped to handle the massive volumes of data and computing in daily operations, artificial intelligence (AI) will become the critical tool for computing, analyzing, and transforming how teams develop, deliver, deploy, and manage applications.

What Are Machine Learning Operations?

Machine learning operations (MLOps) refers to the lifecycle management of machine learning (ML) projects. It is a key concept of modern machine learning application development, and its purpose is to make the training, deployment, and maintenance of machine learning applications seamless and efficient. MLOps is not a set of specific technologies but rather an umbrella term for activities focused on building reliable and well-functioning machine learning models. It includes both development practices and ways of working as a project team, essentially functioning as a set of best practices for machine learning application development. With the application of MLOps principles, data scientists can focus on the core development of machine learning models while the MLOps practices take care of tasks such as data cleaning, quality control, and model versioning. Applying MLOps benefits business owners and clients equally. Automation increases the velocity of development, leading to faster results and more reliable machine learning models. This leads to shorter development times that, in turn, bring faster end-result delivery and cost-effectiveness. Finally, automated quality control produces more reliable solutions that are verified and tested to function as intended, reducing the risk of faulty deployments.

Lifecycle of a Machine Learning Model

The lifecycle of a machine learning project differs from that of a traditional application. The diagram in Figure 1 details the steps to deploy a machine learning project in production:

Figure 1: Machine learning model lifecycle

Data Extraction – ingesting data from various sources
Exploratory Data Analysis – understanding the data format
Data Preparation – cleaning and processing the data for easy processing
Model Training – creating and training a model to process the data
Model Validation and Evaluation – evaluating the model on test data to validate the performance
Model Versioning – releasing a version of the model
Model Deployment – deploying the model in production

Core Elements of MLOps

There are several machine learning frameworks that allow you to deploy, manage, and monitor models; for example, Kubeflow is a toolkit that simplifies model management on the Kubernetes platform. A toolkit should be composed of:

A version control system to keep track of any changes in the datasets or the models
A feature store to centralize data and frequently used features
A tracker to monitor the performance of models in training
A tool to automatically train models using a set of optimal hyperparameters
A platform to deploy models in production
A monitoring tool to track and govern machine learning models deployed in production

What Are Artificial Intelligence Operations?
Gartner, creator of the term "artificial intelligence operations" (AIOps), defines it as the utilization of big data and machine learning to automate IT operations tasks, such as event correlation, identifying unusual events, and determining cause and effect. AIOps uses big data, analytics, and AI algorithms to analyze vast amounts of data generated by IT systems and applications in real time. This data includes log files, performance metrics, and security events, among others. The AI algorithms process this data to identify patterns, detect anomalies, and generate insights that can help IT teams resolve incidents quickly and prevent potential problems before they occur. AIOps solutions can also automate manual tasks such as event correlation, root cause analysis, and incident resolution, freeing IT teams to focus on more strategic initiatives. AIOps can also help organizations achieve faster problem resolution, reduced downtime, and improved overall IT operations efficiency. It helps teams work faster and smarter by unleashing the power of AI. The core capabilities of AIOps that enable efficient digitization of workflows are:

Process optimization – Enhances efficiency throughout the enterprise by giving a comprehensive understanding of the connections and effects between systems. After identifying a problem, it facilitates refinement and ongoing monitoring of processes.
Performance analytics – Anticipates performance bottlenecks by examining trends and making necessary improvements as needed.
Predictive intelligence – Utilizes machine learning to categorize incidents, suggest solutions, and proactively alert on critical issues.
AI search – Offers precise, personalized answers through semantic search capabilities.
Configuration management database – Enhances decision-making with visibility into the IT environment by connecting products throughout the digital lifecycle, allowing teams to comprehend impact and risk.

Figure 2: AIOps lifecycle

Core Elements of AIOps

AIOps definitions vary among enterprises, as each enterprise has unique needs and approaches to implementing AI solutions in IT operations. The primary objective of AIOps is to identify and respond to real-time issues efficiently. Some core components of AIOps can assist in the implementation of AI in IT operations:

ML-based pattern discovery – AIOps, or IT analytics, involves identifying patterns. Machine learning leverages the computational capability of computers to identify these patterns in IT data.
Anomaly detection – Unusual system behavior, such as downtime or poor customer experience, can result from changes in normal behavior. AIOps enables the detection of any deviations from typical activities (a minimal sketch of such a check follows below).
Predictive insights – AIOps introduces predictability into IT operations, enabling IT staff to proactively address issues before they occur, ultimately reducing the number of service desk tickets.
Automated root cause analysis – Simply having insights isn't enough; it's important to take action. In traditional IT management, staff monitor systems and take action as needed. However, with the growing volume of IT infrastructure issues, it can be difficult for staff to manage and resolve issues in a timely manner, especially when multiple systems are involved and root cause analysis can be time-consuming. AIOps automates this process in the background.

AIOps Toolset

AIOps tools gather data from multiple sources to provide a comprehensive view of IT operations.
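Before looking at the toolset itself, here is a minimal sketch of the kind of check that underlies the anomaly detection component described above. It flags a metric sample whose z-score exceeds a threshold relative to a recent baseline; the metric names and numbers are purely illustrative, and real AIOps platforms use far more sophisticated models.

// Flag a sample as anomalous if it deviates from the recent baseline
// by more than `threshold` standard deviations (a simple z-score check).
function isAnomalous(history, latest, threshold = 3) {
  const mean = history.reduce((sum, value) => sum + value, 0) / history.length;
  const variance =
    history.reduce((sum, value) => sum + (value - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance) || 1; // guard against a flat baseline
  return Math.abs(latest - mean) / stdDev > threshold;
}

// Illustrative example: recent response times (ms), then a sudden spike
const recentLatencies = [120, 132, 125, 118, 130, 127, 122];
console.log(isAnomalous(recentLatencies, 128)); // false: within the normal range
console.log(isAnomalous(recentLatencies, 480)); // true: a candidate incident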
These tools collect data such as application logs and system performance measurements, breaking down silos of IT information and bridging the gap between software, hardware, and cloud issues. AIOps solutions aid IT operations by providing tools for root cause analysis, event correlation, and cloud mapping to support automation:

Intelligent observability – AIOps employs advanced monitoring techniques with the use of contextual information, AI, and automation to gain a complete understanding of IT issues. Precise root cause analysis with actionable insights is provided.
Continuous automation – Reduces manual effort in deployment, configuration, and management and automatically identifies and assesses the severity of issues in terms of user and business impact. Continuous discovery, effortless deployments, and automatic dependency mapping become possible.
AI assistance – Performs efficient and error-free root cause analysis. Precise and reproducible results are achieved with the AI engine integrated into every aspect.

What Is the Difference Between MLOps and AIOps?

Coupled with the increasing complexity of modern application architectures, the demands of the digital economy have made the role of IT operations much more complex. As a result, ML and AI have emerged to automate some manual business processes and increase efficiency. MLOps and AIOps both aim to serve the same end goal: business automation. While MLOps bridges the gap between model building and deployment, AIOps focuses on supporting and reacting to issues in real time and providing analytics to the operations team. AIOps combines big data and machine learning to automate performance monitoring, event analysis, correlation, and IT automation.

Figure 3: AIOps vs MLOps vs DevOps

MLOps, on the other hand, focuses on managing the training and testing data needed to create machine learning models effectively. It is all about monitoring and managing ML models. In other words, MLOps standardizes processes, whereas AIOps automates machine monitoring. There are parallels in the teams and abilities needed to properly execute AIOps and MLOps, despite the obvious distinctions. It is worthwhile to consider where they intersect to determine which resources can support both disciplines.

Conclusion

Organizations throughout the world are increasingly looking to automation technologies as a means of improving operational efficiency, and tech leaders are becoming more and more interested in MLOps and AIOps. Machine learning systems can simplify data collection from various parts of the DevOps system, like velocity, defects found, and burn rate. MLOps takes care of the continuous integration and deployment of models. It allows users to shed light on important patterns and exploit data to extract meaningful information. It also implies surveillance and continuous model training in production to ensure the reliability and stability of those models. AIOps can play a crucial role in accelerating DevOps efficiency. It is defined as the usage of big data and machine learning to automate operations such as event correlation, determining cause and effect, and identifying unusual events. In other words, MLOps and AIOps can work together. Artificial intelligence will help boost performance by enabling instant development and operations cycles and by delivering a compelling customer experience across these features.
Machine learning will enable companies to gather metrics such as the number of integrations, the time between them, their success rate, and defects per integration, which are only valuable when they are accurately evaluated and correlated.
The concept of generative AI describes machine learning algorithms that can create new content from minimal human input. The field has rapidly advanced in the past few years, with projects such as the text authorship tool ChatGPT and the realistic image creator DALL-E 2 attracting mainstream attention. Generative AI isn't just for content creators, though. It's also poised to transform technical work in the software engineering and DevOps fields. For example, GitHub Copilot, the controversial "AI pair programmer," is already prompting reconsideration of how code is written, but collaborative AI's potential remains relatively unexplored in the DevOps arena. In this article, we'll look toward a future where generative AI empowers DevOps teams to eliminate tedious repetition, strengthen their automation, and condense complex workflows into simple conversational actions. But before all that, let's dive into the DevOps issues that generative AI can improve.

What's Wrong With DevOps?

DevOps is far from being a solved problem. While the adoption of DevOps mentalities is growing rapidly year over year, the process remains dependent on many tools, a limited talent pool, and repetitive tasks that are only partially automated. DevOps engineers can spend too much time on menial work that doesn't contribute significant business value, such as approving deployments, checking the status of environments, and scaffolding basic config files. Although unavoidable, these jobs are chores that don't directly contribute to the final product. They're also great candidates for generative AI to handle, with ChatGPT and Copilot (or OpenAI Codex, upon which Copilot is built) all potentially able to alleviate some of the stress:

They can populate common config files and templates, so engineers don't have to.
They help team members gain new skills by suggesting contextually relevant snippets. This provides assistance when it's needed, lessening the learning curve during upskilling.
They reduce the time taken to scaffold new assets and improve their consistency, helping to improve maintainability.

However, existing systems are limited by their narrow focus on content generation. DevOps assistants are more powerful if they also offer intent- and action-based experiences to trigger workflow steps and apply state changes. For example, imagine the experience if you merged Copilot's code authorship with a bi-directional conversational interface:

You could ask the assistant to start processes on demand, then be prompted to supply inputs when required.
Developers would have self-service access to potentially sensitive tasks, such as requesting a deployment to production. AI would safely perform the action on their behalf, minimizing the risk of errors and establishing a safety barrier between the developer and the infrastructure. The AI assistant could also request a review from relevant team members before committing to the procedure to ensure everyone's informed of platform changes.
AI could alert you in real time as monitoring metrics change. For example, you'd receive a message with a choice of immediate actions when a deployment fails, a security breach is detected, or performance deviates from the baseline.

Importantly, these capabilities aren't replacing humans or fundamentally changing their role. This form of AI augments engineering abilities by handling the mundane and consistently enforcing safety mechanisms. It frees up DevOps teams to complete more meaningful work in less time.
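To make the config-scaffolding idea above concrete, here is a minimal sketch that asks a model to draft a Dockerfile. It reuses the openai Node.js client shown earlier in this document; the prompt wording and the choice of file are illustrative assumptions rather than a prescribed workflow, and the output should be reviewed before it is committed.

import { env } from "node:process";
import { Configuration, OpenAIApi } from "openai";

const openai = new OpenAIApi(
  new Configuration({ apiKey: env.OPENAI_API_KEY })
);

// Ask the model to scaffold a common config file so an engineer doesn't have to
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content: "You generate DevOps configuration files. Reply with the file content only.",
    },
    {
      role: "user",
      content: "Write a Dockerfile for a Node.js 18 app that installs dependencies and runs `npm start` on port 3000.",
    },
  ],
});

// Treat the result as a draft to review, not something to deploy blindly
console.log(response.data.choices[0].message.content);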
The Future of DevOps With Generative AI

There's huge potential for generative AI to redefine how DevOps works. Here are three specific areas where it will dominate.

1. Automatic Failure Detection, With Suggested Remedies

Failures are a constant problem for developers and operators alike. They're unpredictable interruptions that force an immediate context switch to prioritize a fix. Unfortunately, this hinders productivity, slows release schedules, and causes frustration when remedial work doesn't go as planned. AI agents can detect faults and investigate their causes. Moreover, they can combine their analysis with generative capabilities and knowledge of past failures to suggest immediate actions within the context where the alert is displayed. Consider a simple Kubernetes example: the assistant notices that production is down, realizes the Pod has been evicted due to resource constraints, and provides action buttons to restart the Pod, scale the cluster, or terminate other disused resources. The team can resolve the incident with a single click instead of spending several minutes manually troubleshooting.

2. On-Demand Code/Config Generation and Deployment

Generative AI's ability to author code provides incredible value. Layering in conversational intents makes it more accessible and convenient. For example, you can ask an AI agent to set up a new project, config file, or Terraform state definition by writing a brief message into a chat interface. The agent can prompt you to supply values for any template placeholders, then notify the appropriate stakeholders that the content is ready for review. After approval has been obtained, the AI can inform the original developer, launch the project into a live environment, and provide a link to view the deployment and start iterating on it. This condenses several distinct sequences into one self-service action for developers. Ops teams don't need to manually provision the project's resources beforehand, allowing them to stay focused on their own tasks.

3. Prompt-Driven On-Demand Workflow Management

The next generation of AI agents goes beyond simple text and photo creation to support fully automated prompt-driven workflows. For example, bi-directional AI lets you start processes using natural language, such as "restart the production cluster," to interact with your AWS ECS resources. The AI doesn't need to be told which platform you're using or the specific steps it should run. At Kubiya.ai, for example, we are already taking full advantage of this and now offer our customers the option to create any DevOps workflow via natural language prompts. These agents' language models are trained against the vocabularies of your cloud services. When you ask for a cluster to be restarted, the agent interprets your words using its domain knowledge. For example, it knows that your "production" cluster runs on AWS and that it must retrieve the cluster's details, then make the correct API calls to restart it, such as ecs.UpdateService. Your words are directly translated into fully functioning workflows. Furthermore, the bi-directional aspect means the AI agent becomes even more capable over time. Once you've started running your workflows, the agent trains against them, too, allowing it to suggest similar processes for future scenarios and describe what each workflow actually does. This approach lets devs do more without involving ops teams.
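How an agent resolves such a phrase into API calls will vary by platform and isn't something described in detail here, but the final step might look roughly like this minimal sketch. It assumes the AWS SDK for JavaScript v3, a hypothetical parsed intent, and made-up cluster and service names.

import { ECSClient, UpdateServiceCommand } from "@aws-sdk/client-ecs";

// Hypothetical output of the agent's language model after interpreting
// "restart the production cluster"
const intent = { action: "restart", cluster: "production", service: "web" };

const ecs = new ECSClient({ region: "us-east-1" });

if (intent.action === "restart") {
  // Forcing a new deployment replaces the running tasks, which effectively
  // restarts the service without changing its definition
  await ecs.send(
    new UpdateServiceCommand({
      cluster: intent.cluster,
      service: intent.service,
      forceNewDeployment: true,
    })
  );
  console.log(`Restart requested for ${intent.service} on ${intent.cluster}`);
}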
The AI agent mediates between humans and infrastructure platforms, allowing anyone to initiate workflows consistently and without compromising security. As part of the workflow, the agent can prompt for input at relevant points, such as asking you to select a cloud account, data center region, machine type, and pricing tier when you ask it to "add a new virtual machine."

The Takeaway: Generative AI Safely Accelerates Your Work

DevOps use cases for generative AI accelerate primary tasks while increasing accessibility, security, and reliability. In addition, they empower developers to focus on moving forward with new functionality instead of repeatedly running familiar processes and waiting for results. Agents that are intelligent enough to sustain a conversation act like another member of your team. They support developers who may be unfamiliar with certain tools while ensuring that the organization's security and compliance policies are fully adhered to. These safeguards protect the codebase and give developers the confidence that they can initiate any workflow. In addition, reducing the number of interactions with the DevOps team enhances efficiency, tightening the feedback loop. Generative AI isn't a static experience, either. It gets better over time as it analyzes interactions to more accurately establish user intent. For example, if recommendations aren't suitable the first time you type a query, you can expect them to improve as you and others repeat the request and take different courses of action. AI agents compensate for missing human knowledge, too. They let developers start processes even when they're unfamiliar with some of the steps, tools, or terms involved. AI can fill the gaps in questions such as "Which instances have failed?" to work out that you're referring to the Kubernetes Pods in your production cluster. These capabilities let AI effectively supplement human abilities, rendering it a source of supportive hints for the team.

ROI Is Critical With Generative AI

Organizations that use AI regularly will likely see the best results because their agents will become more adept at anticipating their requirements. However, it's also important not to overreach as you add AI to your workflows. The most successful adoptions will be focused on solving a genuine business need. First, assess your processes to identify bottlenecks between dev and ops teams, then target those repetitive use cases with AI. The solution you select should help you reach your KPIs, such as closing more issues or resolving incidents faster. Otherwise, the AI agent will be underused, hindering your natural operating procedures.

Summary

Generative AI is one of today's most quickly maturing technologies. As a result, ChatGPT has attained a degree of virality as more researchers, consumers, and organizations explore its capabilities. DALL-E 2 has delivered similarly spectacular results, while over 1.2 million developers used GitHub Copilot during its first 12 months. All three technologies demonstrate clear revolutionary potential, but it's the mixed and highly complex workflows of DevOps that could benefit the most in the long term. For example, DevOps combines the creation of new assets, such as code and configs, with sequential processes like deployment approvals and review requests. Contrary to some outsider projections, generative AI for DevOps will go beyond mere templating of common file snippets to offer full workflow automation.
Using simple conversational phrases, you can instruct your agent to take specific actions on your behalf, from provisioning new cloud resources to checking performance in production. As a result, the agent will provide a real-time bi-directional feedback loop that improves collaboration, boosts productivity, and reduces the everyday pressures faced by devs.
Tuhin Chattopadhyay, CEO and Professor, Tuhin AI Advisory
Thomas Jardinet, IT Architect, Rhapsodies Conseil
Sibanjan Das, Zone Leader, DZone
Tim Spann, Principal Developer Advocate, Cloudera