Don’t Build Your Lambda Function (FaaS) Application in Java
STOP. Don’t build your FaaS apps in Java.
Amazon championed the serverless concept. Their R&D team delivered another great innovation for cloud computing in the form of AWS Lambda. As per Amazon’s definition:
“AWS Lambda is a compute service that makes it easy for you to build applications that respond to new information. Information in this context represents events. An event can be anything, including HTTP request or change to data that you are monitoring.”

Event-driven processing is, by no means, the real benefit of compute services such as Lambda, Google Cloud Functions, or Azure Functions. Engineers have been developing event-driven applications for decades, so they are well-versed in the intricacies of the architecture. As we move to a commoditized environment such as cloud infrastructure, organizations are forever looking for ways to cut costs.
First it was the cost of the infrastructure, then came the operational cost. In a traditional approach, you write your code, set up the required cloud resources, deploy, and configure security and high-availability features. The problem with that approach is that resources incur cost even when they are not being used, or too many resources are provisioned and the system sits underutilized.
The Cold Start War
Enter compute services such as AWS Lambda.
AWS Lambda, for example, allows short-running functions to be executed on demand and then shut down when no longer required. Amazon built elasticity into the platform so that functions scale up as demand ramps up and scale to zero when demand dies down; this elasticity is managed automatically by the compute service platform. To take advantage of Functions as a Service, the application architecture must consider the following:
Cold start-up time: How fast can the application go from zero to ready?
Memory footprint: How much memory is required by the application to fulfill its tasks?
Disk footprint: How many KB/MB/GB is the final deployment artifact?
Execution time: How long does the application take to complete its tasks?
Additionally, there are other criteria that need to be considered, but they are beyond the scope of this article. Nonetheless, the above should give you an excellent start to get your project going.
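For context, here is what a bare-bones Java handler for AWS Lambda can look like. This is a minimal sketch using the RequestHandler interface from the aws-lambda-java-core library; the event shape and class name are purely illustrative. Every one of the criteria above applies to this small class: the JVM must start (cold start), load its classes (memory and disk footprint), and finish handleRequest within the platform's time limit.

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    // Minimal, illustrative AWS Lambda handler in Java.
    // Lambda deserializes the incoming JSON event into the Map parameter.
    public class HelloHandler implements RequestHandler<Map<String, String>, String> {

        @Override
        public String handleRequest(Map<String, String> event, Context context) {
            String name = event.getOrDefault("name", "world");
            return "Hello, " + name;
        }
    }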
So, Why Not Java?
Let’s explore this question by going over the considerations mentioned previously.
Cold Start-Up Time
“Cold start” is a term used to describe an application starting from a completely stopped state. A simple analogy is a lighting system.
How long does it take for the lights to come on when you flip the switch? What if you switch it off and on again? Now repeat that ten more times; is there a visible change in the time?
Now, imagine that your Java application operates like that light switch. How long does it take for your app to be fully operational? We can make an educated guess and say the time is much longer than for the lighting system. To understand some of the limitations of Java applications, one needs to dive deeper into the architecture of the JVM. At a high level, it comes down to the run-time interpreter and dynamic linking.
The run-time interpreter executes the application bytecode and, together with the JIT compiler, converts hot paths into native machine code only while the application is running. This has an impact on the performance of the service, especially right after start-up.
Dynamic linking is the process of loading and linking your classes and interfaces together; in doing so, the JVM verifies that the loaded types are compatible. This happens every time the application starts.
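To see this warm-up effect for yourself, the rough sketch below times the same method over several rounds on a plain JVM. It is an illustration of JIT warm-up, not a rigorous benchmark, and the workload is arbitrary.

    // Rough illustration: early invocations run interpreted, later ones benefit
    // from JIT compilation, which is the same effect a FaaS cold start suffers from.
    public class WarmupDemo {

        // Some bytecode-heavy work for the JIT to optimize over repeated calls.
        static long work(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += (i % 7) * (i % 13);
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int round = 1; round <= 5; round++) {
                long start = System.nanoTime();
                work(5_000_000);
                long elapsedUs = (System.nanoTime() - start) / 1_000;
                System.out.println("round " + round + ": " + elapsedUs + " us");
            }
            // Typically the first round is the slowest; later rounds speed up
            // as the JIT compiler kicks in.
        }
    }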
There is good work coming out of the Java community to improve the JVM, but cold start is still seen by most developers as the most significant barrier to developing FaaS applications in Java.
Memory Footprint
The memory footprint of an application depends on the tasks it has to perform, and understanding a Java application’s architecture can help determine the memory requirement. In a FaaS environment such as AWS Lambda, there is a restriction on how much memory you can use. This is less critical when the application can be decomposed into multiple granular services, but it is important to remember that the JVM needs a minimum amount of memory just to run. And who has never come across an OutOfMemoryError or a StackOverflowError?
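A quick way to get a feel for the JVM’s baseline appetite is to ask the runtime itself before the function does any real work. A minimal sketch:

    // Prints the JVM's own view of its heap before any application work is done.
    public class MemoryFootprint {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long usedMb  = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long heapMb  = rt.totalMemory() / (1024 * 1024);
            long maxMb   = rt.maxMemory() / (1024 * 1024);
            System.out.printf("used: %d MB, heap: %d MB, max heap: %d MB%n",
                    usedMb, heapMb, maxMb);
        }
    }

Note that this only reports the Java heap; the process as a whole (metaspace, threads, JIT code cache) consumes noticeably more, which is what counts against a function’s memory limit.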
Disk Footprint
The disk footprint of a Java application was never much of a problem before, until, well, you guessed it, the rise of FaaS. When deploying your FaaS application, it is packaged together with all of its dependencies, and the footprint only grows if the app is containerized with a JVM inside. AWS Lambda, for example, imposes a limit on the size of the deployment artifact that can be uploaded.
Execution Time
The execution time is not specific to Java applications; it is more about implementing better algorithms. AWS Lambda limits a single execution to fifteen minutes, so developers have to ensure that their application design follows the SOLID principles and keeps each function’s work small and focused.
Verdict
Java, or rather the JVM, seems to lack the characteristics a FaaS application needs. The cold-start time and high memory footprint of Java applications are barriers to the adoption of FaaS. There are workarounds aplenty, but they rely on warm-up techniques that keep consuming resources even when the function is not in use, thus defeating the purpose.
Even with the performance improvements added to the JVM in recent times, the C2 compiler is still not adequate for the task. This is not to say that developers cannot build their FaaS applications on the Java platform, only that it is hard to recommend in its current state. Don’t build your FaaS application in Java.
Wait. Wait. Wait.
It would be very inaccurate, however, to suggest that developers avoid the Java programming language when embarking on FaaS projects. As previously mentioned, the community has made great strides in optimizing the Java runtime environment by redesigning the compilers and the virtual machine itself.
We were never big fans of Oracle’s handling of Java, but credit needs to be given where it is due. The Oracle R&D team developed a compiler that brings Java applications closer to the underlying operating system. GraalVM is giving Java developers faith again.
For the purposes of FaaS, SubstrateVM (formerly known as Graal) could be the answer. SubstrateVM, part of GraalVM, uses ahead-of-time (AOT) compilation to compile Java bytecode to native code, bringing Java closer to the bare metal.
By compiling the application to native code, SubstrateVM directly addresses the critical blockers to Java in the FaaS world: cold start and memory footprint. As of this writing, Oracle has adopted an open-source license for development and production use. SubstrateVM, and by association GraalVM, is the possible game-changer that Java developers have been waiting for, and organizations can leverage their in-house Java capabilities to increase agility and velocity.
The Holy GraalVM
The world has gone microservices-mad, so it is only natural that we mention them too. MicroProfile, one of the coolest open-source projects out there, brought some of the most sought-after cloud-native features to Java EE. Building on that, Red Hat developed Quarkus, “Supersonic Subatomic Java.” Red Hat positions the framework as a Kubernetes-native Java stack tailored for OpenJDK HotSpot and GraalVM, but it is much more than that.
As well as inheriting the benefits of GraalVM, Quarkus is also an implementation of MicroProfile. What this means is that developers can build microservices using Java and MicroProfile on Quarkus and deploy the resulting application as a FaaS on, say, AWS Lambda, with far less worry about cold start-up time and a high memory footprint. As microservices become the go-to choice for system integration over the ESB, frameworks such as MicroProfile and Jakarta EE, together with Quarkus and GraalVM, will reduce the complexity of developing such services.
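As a taste of what that looks like, here is a minimal sketch of a Quarkus REST resource using plain JAX-RS annotations (depending on the Quarkus version, the javax.ws.rs packages may instead be jakarta.ws.rs); the endpoint path and class name are illustrative.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // A minimal Quarkus REST endpoint using standard JAX-RS annotations.
    // Compiled to a GraalVM native image, it starts in a fraction of the time
    // a conventional JVM deployment needs.
    @Path("/hello")
    public class GreetingResource {

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String hello() {
            return "hello from Quarkus";
        }
    }

With the GraalVM native-image tooling installed, the same code can typically be packaged as a native executable through the Maven native profile that Quarkus generates (./mvnw package -Pnative), which is what makes the FaaS deployment story viable.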
Conclusion
This article started off by discussing some valid points on why Java should not be considered for FaaS projects. With the restrictions put in place by cloud providers, developers are compelled to look for alternative frameworks with a lower footprint.
The Java community has been aware of these performance issues almost since the platform’s inception and has been working on improvements ever since. Each new release of Java brought performance gains, but the underlying problem remained: the JVM’s C2 compiler has reached a point where it is difficult to improve further. GraalVM was developed as a replacement for the current compiler and virtual machine.
Developers will still be able to deploy their applications on the JVM, with the added choice of creating native images for environments such as mobile, IoT, and FaaS. The role of a framework is to facilitate such tasks, and Quarkus helps engineers develop cloud-native services ready to be deployed as FaaS, on Kubernetes, or on the JVM.
If your organization already has Java capabilities, use them to develop your next FaaS service.
Further Reading
Introducing Functions as a Service (FaaS)
Serverless FaaS With AWS Lambda and Java
