Unleashing the Power of Large Language Models: A Quick Overview of LLM Integration with Agent and REST API Magic!

Large Language Models (LLMs) play a pivotal role in modern business applications by significantly enhancing natural language understanding and generation capabilities. These models enable more sophisticated interactions, allowing applications to comprehend user inputs, generate contextually relevant responses, and automate complex language-related tasks. Integrating LLMs into Java applications empowers developers to create more intelligent, user-friendly systems and fosters seamless interaction between software and users, opening avenues for innovative solutions and improved user experiences across diverse domains. This post shows how a Large Language Model can be customized to leverage custom REST APIs to empower your AI assistant. LangChain for Java (Langchain4j) is a Java library designed to facilitate the seamless integration of LLMs into Java applications…
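To make the idea concrete, here is a minimal sketch of wiring a custom REST API into an assistant with Langchain4j. The `OrderStatusTool` class, its endpoint URL, and the `Assistant` interface are illustrative assumptions, not code from the post; the `@Tool` annotation and `AiServices` builder are Langchain4j's mechanism for exposing Java methods to the model.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

// A tool wrapping a custom REST API; the LLM can decide to call it.
class OrderStatusTool {

    private final HttpClient http = HttpClient.newHttpClient();

    @Tool("Returns the status of an order by its id")
    String orderStatus(String orderId) throws Exception {
        // Hypothetical internal REST endpoint
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/orders/" + orderId + "/status"))
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}

public class AssistantDemo {

    // The assistant contract; Langchain4j generates the implementation.
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .tools(new OrderStatusTool())
                .build();

        // The model may invoke orderStatus(...) behind the scenes
        // before composing its natural-language answer.
        System.out.println(assistant.chat("What is the status of order 42?"));
    }
}
```

The key design point is that the business logic stays in plain Java: the model only sees the tool's description and decides when to call it.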

A Real-World Kafka Client Implementation

This post is a continuation of my previous post on building a Kafka client. In case you have not yet read it, I would encourage you to take a look at it before proceeding with this post. In this post, I will share a simple real-life Kafka client implementation. In case you have not yet built a Kafka client of production quality, please read this article first: the client implementation used in this example is heavily inspired by the concepts discussed there. Some features that you may find interesting in the Kafka client example shared along with this post:

- Listening to Kafka rebalancing and handling it as appropriate
- An error handler for asynchronous Kafka message-processing tasks
- Rate limiting the incoming messages by waiting for the currently running Kafka record-processing threads to finish

About this example: here is a quick overview…
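The rate-limiting feature in the list above can be sketched with plain JDK concurrency: a `Semaphore` caps the number of in-flight record-processing tasks, so the poll loop blocks until workers catch up. This is a minimal illustration of the pattern, not the post's actual implementation; the class and method names are assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedProcessor {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    // Permits cap the number of in-flight record-processing tasks.
    private final Semaphore permits;

    public BoundedProcessor(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    // Called from the poll loop; blocks when too many records are in flight,
    // which naturally pauses consumption until workers catch up.
    public void submit(String record, Runnable handler) throws InterruptedException {
        permits.acquire();
        pool.execute(() -> {
            try {
                handler.run();
            } catch (RuntimeException e) {
                // Error handler for the async processing task:
                // log and continue (or route to a dead-letter topic).
                System.err.println("Failed to process " + record + ": " + e.getMessage());
            } finally {
                permits.release();
            }
        });
    }

    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        BoundedProcessor processor = new BoundedProcessor(2);
        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            processor.submit("record-" + i, processed::incrementAndGet);
        }
        processor.shutdown();
        System.out.println(processed.get()); // prints 10
    }
}
```

Blocking in `submit` (rather than dropping records) is deliberate: with manual offset commits, back-pressure on the poll loop keeps consumption and processing in step.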

How to get your Apache Kafka Client code right?

Recently I got an opportunity to work on a Kafka client implementation for an interesting use case. Until then, my assumption was that writing a Kafka client was as easy as the many examples we see on the net :) Although that is true for many use cases, the same simple client does not work for all of them. Depending on the complexity of the use case you are dealing with, the client implementation might change and its complexity may increase. Please note that this is not an introductory article on Kafka; you are also expected to have a basic understanding of the Kafka client. I found the following article very useful while learning the basics of the Kafka client. This post will refer to various topics from that article as we move forward. In this post, I am sharing three common scenarios that you may need to deal with while using a Kafka client in a message-heavy system, and possible solutions or patterns…
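As a baseline for the scenarios the post discusses, here is a sketch of a consumer loop that goes one step beyond the typical "hello world" example: manual offset commits and a rebalance listener that commits before partitions are revoked. The broker address, topic, and group id are placeholder assumptions; the API calls are from the standard `kafka-clients` library.

```java
import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SimpleConsumerLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "demo-group");                // placeholder
        props.put("enable.auto.commit", "false");           // commit only after processing
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Commit processed offsets before losing the partitions,
                    // so another consumer does not reprocess them.
                    consumer.commitSync();
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Restore any per-partition state here if needed.
                }
            });
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                consumer.commitSync(); // at-least-once delivery
            }
        }
    }
}
```

Even this small step up from auto-commit illustrates the post's point: correctness concerns (rebalancing, delivery semantics) are what make a "simple" client grow.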

A Simple Apache Spark Demo

Apache Spark is a data processing framework that can quickly perform processing tasks on very large data sets, and can also distribute data processing tasks across multiple computers, either on its own or in tandem with other distributed computing tools. About this example: in this post I am sharing a simple Apache Spark example project. The source code used for this example is available here: Here is a quick overview of the modules that you may find in this project:

- spark-job-common: all the common classes you need for building a Spark job are parked here. This approach may help you avoid boilerplate code in your Spark job implementations.
- spark-job-impl: a classic word count Spark example is available here. This class may help you understand the structuring of the source and the usage of common classes from the spark-job-common module.
- spark-job-launcher: the SparkLauncher helps you start Spark applications programmatically…
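For reference, the classic word count mentioned under spark-job-impl typically looks like the sketch below. This is the standard textbook form, not the project's actual class; the input path and app name are placeholders.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        // local[*] runs the job in-process, handy for demos and tests.
        SparkConf conf = new SparkConf().setAppName("word-count").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("input.txt"); // placeholder input path
            JavaPairRDD<String, Integer> counts = lines
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    .mapToPair(word -> new Tuple2<>(word, 1))
                    .reduceByKey(Integer::sum);
            counts.collect().forEach(t -> System.out.println(t._1 + ": " + t._2));
        }
    }
}
```

Factoring the `SparkConf`/context setup into a common module, as the project does, keeps job classes like this one focused on the transformation pipeline.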

Tracing the API calls in Your Helidon Application with Jaeger

While building applications that comprise multiple microservices, it is essential to have a mechanism in place to collect and analyze the details of API calls, the timing data needed to troubleshoot latency problems, and the errors generated by API calls. Jaeger is one such solution, used for monitoring and troubleshooting applications built following a microservice-based architecture, with the following capabilities:

- Distributed context propagation
- Distributed transaction monitoring
- Root cause analysis
- Service dependency analysis
- Performance / latency optimization

Jaeger is hosted by the Cloud Native Computing Foundation (CNCF) as its 7th top-level project (graduated in October 2019). As there are multiple tracing solutions similar to Jaeger (such as Zipkin), it is good to avoid vendor lock-in by having some standardization around APIs that work with different providers. The OpenTracing specification addresses this part of the problem…
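In a MicroProfile application such as one built on Helidon MP, the OpenTracing standardization mentioned above surfaces as the `@Traced` annotation: resource methods produce spans that the configured tracer (here, Jaeger) collects. The resource class, path, and operation name below are illustrative assumptions.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.opentracing.Traced;

@Path("/greet")
public class GreetResource {

    // Each invocation produces a span reported to the configured tracer
    // (e.g. Jaeger, set up via the application's tracing configuration).
    @GET
    @Traced(operationName = "greet")
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from a traced endpoint!";
    }
}
```

Because the annotation comes from MicroProfile OpenTracing rather than a Jaeger-specific API, swapping Jaeger for another compatible tracer is a configuration change, not a code change.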

Integrating Redis with a Helidon MicroProfile Application for a Publish-Subscribe Usecase

Redis is an open source, in-memory data structure store used as a database, cache, and message broker. In this short post I am sharing a simple application that showcases a classic integration of the Redis message broker (pub/sub) APIs with a Helidon MicroProfile application for a message publish-subscribe use case. You can check out the source from here: What is the use case exercised in this example? It is simple :) We use a simple greeting REST API to exercise the Redis publish-subscribe feature. When a client updates the greeting message, the Greeting resource implementation publishes the new greeting message to a Redis channel (topic) for use by interested parties (consumers). Who does what? Here is a quick summary of the classes that you will find in the source: Lettuce: this example uses the Lettuce client library to connect to the Redis server. If you are new to Lettuce, take a look at…
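The pub/sub round trip described above can be sketched with Lettuce alone, outside Helidon. The channel name and messages are placeholder assumptions; the publisher half mirrors what the Greeting resource would do when the greeting is updated.

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PubSubDemo {
    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Subscriber side: listen on a channel for greeting updates.
        StatefulRedisPubSubConnection<String, String> sub = client.connectPubSub();
        sub.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.println("Received on " + channel + ": " + message);
            }
        });
        sub.sync().subscribe("greeting-channel"); // placeholder channel name

        // Publisher side: what the Greeting resource would do on an update.
        StatefulRedisPubSubConnection<String, String> pub = client.connectPubSub();
        pub.sync().publish("greeting-channel", "Hola");

        Thread.sleep(500); // give the subscriber time to receive the message
        client.shutdown();
    }
}
```

Note that Redis pub/sub is fire-and-forget: consumers that are not subscribed at publish time miss the message, which is fine for a "current greeting changed" notification.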

A Simple gRPC CRUD Example Running on Helidon SE

This is a continuation of my previous posts on the gRPC CRUD example with Java. In case you have not read them yet, here are the links. Pause here, take a look at those posts, and then resume ;) This post shares a simple and complete gRPC CRUD example that runs as a Helidon SE microservice. The gRPC service is generated using the standard approach discussed in the previous posts; however, this example runs the gRPC APIs on a Helidon SE service ;) If you are interested in seeing how Helidon SE embraces the gRPC service implementation that we have, see this class in the example project. As a bonus, we can leverage the built-in Helidon SE offerings for monitoring and tracing the gRPC APIs. For instance, the following URL gives you health check info for our example: http://localhost:8080/health You can find the complete source here:
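As a rough sketch of how a generated gRPC service can be hosted on Helidon SE (the service implementation class, port, and names below are assumptions, not the example project's code):

```java
import io.helidon.grpc.server.GrpcRouting;
import io.helidon.grpc.server.GrpcServer;
import io.helidon.grpc.server.GrpcServerConfiguration;

public class GrpcMain {
    public static void main(String[] args) {
        // BookServiceImpl stands in for the CRUD service built from the
        // protobuf-generated stubs discussed in the earlier posts.
        GrpcRouting routing = GrpcRouting.builder()
                .register(new BookServiceImpl())
                .build();

        GrpcServer.create(GrpcServerConfiguration.builder().port(1408).build(), routing)
                .start()
                .thenAccept(server ->
                        System.out.println("gRPC server started on port " + server.port()));
    }
}
```

Registering the service through `GrpcRouting` is what lets Helidon SE's shared facilities, such as the health and tracing endpoints mentioned above, apply to the gRPC APIs as well.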