At LinkedIn's Mountain View campus, Anirudh Pandit, a senior architect at Oracle, spoke last night at SVForum's Software Architecture and Platform SIG on Extreme Transaction Processing in SOA. Pandit proposed an architectural pattern for improving SOA performance and reviewed several case studies in which the pattern dramatically improved the performance of enterprise SOA implementations.
While no longer generating the buzz it once enjoyed, SOA (Service-Oriented Architecture) remains an important integration strategy at large enterprises, perhaps now enjoying more real success than it did at the peak of its hype cycle. Though implementations often prove costly and difficult, SOA ultimately provides the most conceptually compelling means to cut through the complexity and diversity of enterprise systems: systems encumbered by a mix of legacy and web-based applications, of behind-the-firewall systems and the external systems of business partners, and of applications accumulated through long histories of mergers and acquisitions.
Despite these conceptual advantages, many SOA implementations, Pandit pointed out, suffer from poor performance and a lack of scalability. As messages move from service to service through an orchestration, the repeated serialization and deserialization of XML messages creates processing bottlenecks, and persisting transactional state to a relational database creates disk I/O bottlenecks.
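To make the first bottleneck concrete, here is a minimal Java sketch of the marshal/unmarshal round trip that a traditional orchestration pays at every hop. The Order class and the three-hop loop are illustrative, not from the talk; JAXB stands in for whatever XML binding a given stack actually uses.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;
import java.io.StringWriter;

public class PerHopSerialization {

    // A toy payload; real SOA messages are far larger, which is the point.
    @XmlRootElement
    public static class Order {
        public String id = "42";
    }

    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Order.class);
        Order order = new Order();

        // In a traditional orchestration, every hop repeats this pair of
        // steps, so CPU cost grows with both message size and hop count.
        for (int hop = 1; hop <= 3; hop++) {
            StringWriter xml = new StringWriter();
            ctx.createMarshaller().marshal(order, xml);                       // serialize
            order = (Order) ctx.createUnmarshaller()
                               .unmarshal(new StringReader(xml.toString())); // deserialize
            System.out.println("hop " + hop + ": " + xml);
        }
    }
}
```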
Pandit proposed a solution based on two changes to the traditional SOA architecture. First, rather than passing XML messages with repeated serializations and deserializations, pass a token that each service may use to retrieve the message. Second, instead of persisting the message as it moves from service to service, cache the message in memory and ensure message integrity by synchronizing the cache across multiple machines, preferably across machines in different data centers.
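Below is a minimal sketch of the token-passing half of the pattern, assuming the replicated cache can be treated as a simple key-value map. All of the names here (MESSAGE_CACHE, admitMessage, serviceStep) are hypothetical, and a local ConcurrentHashMap stands in for the distributed, cross-data-center cache Pandit described.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class TokenPassingSketch {

    // Stand-in for a replicated cache; in production this would be a
    // data grid synchronized across machines and data centers.
    static final Map<String, String> MESSAGE_CACHE = new ConcurrentHashMap<>();

    // The entry point stores the XML payload once and mints a token.
    static String admitMessage(String xmlPayload) {
        String token = UUID.randomUUID().toString();
        MESSAGE_CACHE.put(token, xmlPayload);
        return token;
    }

    // Each service receives only the token and fetches the payload when
    // it actually needs the content, so no hop pays a serialize/deserialize
    // cycle just to route the message onward.
    static void serviceStep(String token, String serviceName) {
        String xmlPayload = MESSAGE_CACHE.get(token);
        // ... transform the payload, then write the result back under the token ...
        MESSAGE_CACHE.put(token, xmlPayload + "<!-- touched by " + serviceName + " -->");
    }

    public static void main(String[] args) {
        String token = admitMessage("<order id=\"42\">...</order>");
        serviceStep(token, "validation");  // hop 1: only the token crosses the wire
        serviceStep(token, "fulfillment"); // hop 2: same token, same cached payload
        System.out.println(MESSAGE_CACHE.get(token));
    }
}
```

In a real deployment the map would be backed by a data grid that replicates entries across machines, so the failure of one node does not lose in-flight messages; only the token, a short string, ever travels between services.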
The solution, Pandit explained, does not work in all circumstances. By avoiding persistence to disk, it might fail to meet compliance requirements; by synchronizing the cache across many machines and data centers, it might introduce network bottlenecks and latency. But where it does fit, Pandit concluded, the solution overcomes performance problems in a plug-and-play manner, without the costly redesign of services.
While most recent conversations on scalability revolve around NoSQL, Pandit’s presentation was a reminder that the intelligent use of caching remains a viable option.
Stay tuned to the Software Architecture and Platform SIG for future events and for the slide deck of Pandit's presentation (not yet available at the time of writing). The next meeting will cover the use of Hadoop at LinkedIn.