Pages

Sunday, December 23, 2012

Enterprise Integration: Web Services, SOA, ESB and JBI

Web Service Specifications
These specifications define standards-based implementation technologies for many of the integration patterns, while following the same architectural principles and guidelines.

Service Oriented Architectures 
These describe ways to build loosely coupled systems composed from individual services. Because much of this loose coupling is a result of message-based communication between services, integration patterns are extremely relevant in the SOA space.

ESB (Enterprise Service Bus)
The core of a typical service bus incorporates messaging, routing, transformation, and endpoints.

Finally, JBI (Java Business Integration)
JBI is an integration API introduced in the J2EE world. It is a great enabler for SOA because it defines an ESB architecture that can facilitate collaboration between services. It provides loosely coupled integration by separating providers and consumers, which mediate through the bus.



JSR 208 is an extension of J2EE, but it is specific to the JBI Service Provider Interfaces (SPIs). SOA and SOI (Service-Oriented Integration) are the targets of JBI, and hence it is built around WSDL.
Case: Web Services without being SOA: Very much possible. SOA is an architecture, and an application as such can expose Web Services without really being SOA.

Case: Web Services & SOA without an ESB: This can be a case where you commit to the concept of SOA but do not necessarily implement an ESB.

Case: Web Services, SOA & ESB
This is the JBI (Java Business Integration) case; the following meta-model describes the constitution of the architecture in context.

Case: ESB and SOA without Web Services: Other protocols, such as JMS or other native queuing, form the core messaging implementation you use.

Case: ESB without SOA: An ESB implemented as an application-integration technique without consideration of SOA.

Thursday, December 22, 2011

OutOfMemory with the Maven Compiler?

When using Maven with JDK 6 (1.6.0_14) within Eclipse, an OutOfMemoryError is encountered during compilation.
Setting MAVEN_OPTS as per the usual suggestions was of no help.
The following configuration setting inside the Maven compiler plugin worked like a wonder!

<configuration>
    <fork>true</fork>
    <meminitial>256m</meminitial>
    <maxmem>512m</maxmem>
</configuration>
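For context, this is how the setting sits inside the maven-compiler-plugin declaration in a pom.xml; the plugin version shown here is illustrative, not the one from my project:

```xml
<!-- Sketch: maven-compiler-plugin with forked compilation and explicit heap sizes.
     The version number is illustrative; the <configuration> body matches the fix above. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <source>1.6</source>
        <target>1.6</target>
        <fork>true</fork>
        <meminitial>256m</meminitial>
        <maxmem>512m</maxmem>
    </configuration>
</plugin>
```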

Upon investigation, there is a note on the <fork> attribute:
The <fork> attribute, when set to true, runs the compiler in a separate process that uses the defined heap values, as opposed to using the default built-in compiler. (By default, fork is false.)

Hence, the solution is to have the compiler run in a separate process rather than in the default in-process compiler. Another important observation is that I haven't encountered this exception when using JDK 1.6.0_23 and later.

Sunday, October 9, 2011

Java Memory Manager

I am yet to see a Java application that hasn't suffered memory issues on its journey to production. Do we always get to build applications whose data and memory requirements are known statically up front? I am sure we are talking about something that is perceived to be next to impossible.

Whether an application has already been built or is still in development, it is very important to ensure the code is written to control Java memory usage optimally. We are talking about following standards!


What if we had an option to ensure the application never uses more than, say, 60% of the total max heap (-Xmx) for its cache of objects (probably static data)? Wouldn't that be really awesome?


Memory governance is what we need here. There are three ways we can hold objects in a cache: in-memory on-heap, in-memory off-heap, and disk-based storage.
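The three storage tiers can be expressed in an EHCache 2.x cache definition. The sketch below is illustrative: the cache name and sizes are made up, and the off-heap attributes assume the licensed BigMemory extension is on the classpath:

```xml
<!-- Hypothetical ehcache.xml sketch: one cache spanning all three tiers.
     Off-heap (overflowToOffHeap/maxMemoryOffHeap) requires BigMemory. -->
<ehcache>
    <cache name="resultCache"
           maxEntriesLocalHeap="10000"
           overflowToOffHeap="true"
           maxMemoryOffHeap="1g"
           overflowToDisk="true"
           maxElementsOnDisk="1000000"
           eternal="false"
           timeToIdleSeconds="300"/>
</ehcache>
```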


As an application architect, I would usually, or rather definitely, need to define the memory requirements. To avoid memory contention, I should be able to define a cap on memory occupancy; for example, 60% of the heap is what I would want to use for object storage/caching, and the rest I would like left for processing requirements. The number I define here mostly depends on how processing-intensive my application is.


For most Java applications, data grows over time, handling and processing that data gets tougher, and there comes a stage where all your JVM tuning options have been tried in vain.


I have built a wrapper over EHCache 2.5.6, wrapping the base CacheManager, that can control the maximum memory usage across all the caches within the application's scope.

I have introduced new attributes that can be used to define the memory-occupancy threshold; just by configuring this, developers can seamlessly manage memory without having to do so programmatically.
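The core idea, capping cache occupancy at a fraction of -Xmx, can be sketched independently of EHCache. The class below is a hypothetical illustration (MemoryGovernor is not the actual wrapper; the name and API are mine) that derives a cache budget from the JVM's maximum heap and signals when usage crosses the threshold:

```java
// Hypothetical sketch of the memory-governance idea: reserve a configurable
// fraction of the max heap (-Xmx) for cached objects, leaving the rest free
// for processing. Not the real wrapper; names and API are illustrative.
public class MemoryGovernor {

    private final double threshold; // fraction of max heap allowed for the cache tier

    public MemoryGovernor(double threshold) {
        if (threshold <= 0.0 || threshold >= 1.0) {
            throw new IllegalArgumentException("threshold must be in (0, 1)");
        }
        this.threshold = threshold;
    }

    /** Maximum number of bytes the cache tier may occupy (e.g. 60% of -Xmx). */
    public long cacheBudgetBytes() {
        long maxHeap = Runtime.getRuntime().maxMemory(); // -Xmx as seen by the JVM
        return (long) (maxHeap * threshold);
    }

    /** True when current heap usage has crossed the cap, signalling eviction or spill-over. */
    public boolean shouldEvict() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return used > cacheBudgetBytes();
    }
}
```

A real implementation would hook a check like shouldEvict() into cache put operations, evicting or spilling to disk/off-heap before accepting new entries.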

All the in-memory processing data needs to be held within this cache, which optimally uses a percentage of the heap and spills the remaining objects over to either disk or off-heap storage (off-heap is a feature of BigMemory, a licensed extension to core EHCache from Terracotta).


This works well for any in-memory processing and UI data-display requirement where an existing, already-optimized query returns a couple of million rows and the application cannot hold all of that data in memory while displaying it on screen or writing it to a file for offline access.


Note: The Spring API for EHCache can be used to further improve the usage with IoC.
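As a rough sketch of that wiring, Spring can bootstrap the EHCache CacheManager from ehcache.xml and expose it through its cache abstraction; the bean names and file location below are illustrative assumptions, not part of my wrapper:

```xml
<!-- Hypothetical Spring wiring sketch: EHCache behind Spring's cache abstraction.
     Bean ids and the configLocation are illustrative. -->
<bean id="ehcache"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="configLocation" value="classpath:ehcache.xml"/>
</bean>

<bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheCacheManager">
    <property name="cacheManager" ref="ehcache"/>
</bean>

<cache:annotation-driven cache-manager="cacheManager"/>
```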