In-Memory Data Grids Ensure Optimal User Experience

by Sanjay Gupta, Vice President, Oracle, India

In today’s anytime, anywhere world, customers demand instant information. How can technology address the growing demand for real-time data?
Many industries rely on the quick processing of data feeds as a competitive advantage. Financial services companies that need to provide real-time prices and risk information to their traders and customers exemplify this need: outdated information can directly erode profits. Calculating price and risk information is more than a matter of moving one piece of data from one place to another; complex calculations may need to be performed on an incoming stream of data (in this case, market events) and the results sent downstream to trading applications in real time.
In another example, the marketplace offers multiple wearable fitness devices that tie into mobile applications to track exercise patterns and workout routines. Many of these devices capture multiple data points simultaneously, such as heart rate, distance travelled, route taken, and GPS position. This data is streamed back to a data centre, where an application further processes and enriches it, for example by comparing it to previous workouts or to those of a group of peers, and then sends the results to a downstream application that updates social sites or leaderboards in real time. Because of the large volume of streaming calculations that must be performed, a scalable platform that can perform those calculations in parallel is needed.
Technology solutions like in-memory data grids (IMDGs) provide a shared data cache across distributed servers, bringing data out of remote databases and onto the application tier, where it is closer to the user and to the application processing engines. This ensures that the data is available on demand for the application. Caching and replication algorithms ensure that cached data stays current and that copies of the data are managed effectively across the distributed cache servers.
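To make that caching pattern concrete, here is a minimal cache-aside sketch in Java. The ConcurrentHashMap named gridCache merely stands in for a distributed IMDG cache, and loadFromDatabase is a hypothetical loader; in a real deployment both would be provided by the grid product's own API.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PriceLookup {

        // Stand-in for a distributed, application-tier cache shared by all servers.
        // A real IMDG would replicate and partition these entries across the cluster.
        private final Map<String, Double> gridCache = new ConcurrentHashMap<>();

        public double getPrice(String instrumentId) {
            // Serve from the in-memory grid when possible; fall back to the
            // remote database only on a miss, then populate the cache so the
            // next request is answered from memory.
            return gridCache.computeIfAbsent(instrumentId, this::loadFromDatabase);
        }

        // Hypothetical loader representing the slow, remote database call.
        private double loadFromDatabase(String instrumentId) {
            return 101.25; // placeholder value
        }

        public static void main(String[] args) {
            PriceLookup lookup = new PriceLookup();
            System.out.println(lookup.getPrice("ORCL")); // miss: loads from the "database"
            System.out.println(lookup.getPrice("ORCL")); // hit: served from memory
        }
    }

The key point is that the second lookup for the same instrument never leaves the application tier.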

Scalability is a significant challenge for most organizations. How can it be addressed?
Nothing is more frustrating to customers than trying to use an application or make a purchase only to find that the application moves slowly, freezes, times out, and ultimately fails. Customers have been conditioned to expect quick response times, so if an application is slow, the customer may leave the site. In this context, the question IT managers have to answer is: is it sufficient to plan only for the average daily workload and simply endure the ramifications of periodic spikes, or is it better to over-allocate processing capacity, knowing that on most days you’re paying for unused capacity?
For many industries, neither option is good. A better solution is to use technology that can dynamically scale to absorb workload spikes (such as during Black Friday or the annual online shopping tradition of Cyber Monday) and then shrink once the workload returns to normal levels. Rapid provisioning of IMDGs and elastic cloud computing allow businesses to strike the right balance, allocating appropriate capacity for normal workloads while still being able to capitalize on surges in demand. When a company’s sudden exposure creates demand on its website that it can’t meet, a vertically scalable design caps the number of customers a single system can handle, whereas a horizontal design like an IMDG allows you to scale beyond that single server. A horizontal design considers how to scale all resources, including compute, network, and memory, so that none of them becomes a bottleneck.
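As a rough illustration of horizontal scaling, the sketch below hashes each key to one of a set of cache nodes, so adding a node spreads keys, and therefore memory and processing load, across more servers. The node names and the simple modulo hash are illustrative only; production grids typically use consistent hashing or partition tables so that adding a node relocates only a fraction of the data.

    import java.util.ArrayList;
    import java.util.List;

    public class PartitionedGrid {

        // Illustrative list of cache-server nodes; a real IMDG manages membership
        // and rebalancing automatically when nodes join or leave.
        private final List<String> nodes = new ArrayList<>();

        public void addNode(String nodeName) {
            nodes.add(nodeName);
        }

        // Map a key to the node that owns it. With more nodes, the same key space
        // is spread over more servers, so memory and CPU scale horizontally.
        public String ownerOf(String key) {
            int index = Math.floorMod(key.hashCode(), nodes.size());
            return nodes.get(index);
        }

        public static void main(String[] args) {
            PartitionedGrid grid = new PartitionedGrid();
            grid.addNode("cache-1");
            grid.addNode("cache-2");
            System.out.println(grid.ownerOf("order:42")); // owned by one of two nodes

            grid.addNode("cache-3"); // scale out for a Cyber Monday spike
            System.out.println(grid.ownerOf("order:42")); // ownership may change with three nodes
        }
    }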

How can IMDG help improve performance, reliability, and scalability of applications?
The unique benefits of IMDGs relate to their deployment at the application tier. Caching occurs at the application tier, where it has a greater and more noticeable impact on the user. In most modern application architectures, application processing is done in the application-server tier. Even with a distributed cache, the network between the application server and the cache can be overwhelmed by excessive data movement. With an IMDG, instead of bringing data objects back to the application server, the processing can be done within the memory grid itself, so data no longer has to move to and from the application-server tier.
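The sketch below illustrates this "send the processing to the data" idea in plain Java. The EntryProcessor interface and invoke method are modeled on a pattern that most IMDG products expose in some form, but the names here are illustrative rather than a specific product API.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class InGridProcessing {

        // Illustrative entry processor: the function ships to the node that owns
        // the entry and runs there, so only the small result crosses the network.
        interface EntryProcessor<V, R> {
            R process(V value);
        }

        private final Map<String, Double> positions = new ConcurrentHashMap<>();

        // Run the processor "inside the grid" against the entry for the given key,
        // instead of pulling the whole object back to the application server.
        public <R> R invoke(String key, EntryProcessor<Double, R> processor) {
            return processor.process(positions.get(key));
        }

        public static void main(String[] args) {
            InGridProcessing grid = new InGridProcessing();
            grid.positions.put("trader-7", 1_500_000.0);

            // Compute risk exposure next to the data; only the result comes back.
            double exposure = grid.invoke("trader-7", value -> value * 0.02);
            System.out.println("Exposure: " + exposure);
        }
    }

Because only the computed exposure crosses the network, the position data itself never has to leave the grid.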

Under what real-world circumstances would IMDGs be a good choice?
The ideal use cases are situations where a large number of distributed, concurrent users access an online system, repeatedly request similar data, and won’t tolerate slowness or failures. Take, for example, an online travel and hotel reservation application. Prospective travelers know where they want to vacation, but the details are often in flux while they evaluate their options for the best price before committing to a purchase.
In such contexts IMDGs are a natural fit. They enable seamless access to data for a growing user population, often driven by mobile or multiple-device access. Cost-effective IMDG solutions ensure that customers have an optimal user experience, which translates directly into increased sales and repeat business.
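For the reservation-search scenario described above, a short time-to-live on cached results lets repeated lookups during a traveler's browsing session be served from memory while pricing is still refreshed periodically. The class below is a hand-rolled illustration; a real IMDG exposes expiry as cache configuration rather than application code.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SearchResultCache {

        // Cached value plus the time it was loaded, so stale entries can expire.
        private record TimedEntry(String results, long loadedAtMillis) {}

        private static final long TTL_MILLIS = 60_000; // refresh prices every minute

        private final Map<String, TimedEntry> cache = new ConcurrentHashMap<>();

        public String search(String query) {
            TimedEntry entry = cache.get(query);
            long now = System.currentTimeMillis();
            if (entry == null || now - entry.loadedAtMillis() > TTL_MILLIS) {
                // Miss or expired: recompute (e.g. call the inventory/pricing system)
                // and cache the fresh result for the next identical search.
                entry = new TimedEntry(loadFromBackend(query), now);
                cache.put(query, entry);
            }
            return entry.results();
        }

        // Hypothetical backend call representing the expensive availability search.
        private String loadFromBackend(String query) {
            return "results for " + query;
        }

        public static void main(String[] args) {
            SearchResultCache cache = new SearchResultCache();
            System.out.println(cache.search("GOA 2-adults 3-nights")); // backend call
            System.out.println(cache.search("GOA 2-adults 3-nights")); // served from memory
        }
    }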

