
New Test

Originally posted on: http://geekswithblogs.net/edmundzhao/archive/2019/09/03/244807.aspx

What is the “environment”?

The application environment consists of three main areas:

  • Infrastructure
  • Configuration
  • Dependencies

Infrastructure is the most important element of the environment, as it defines where the application will run, what configuration it needs and how dependencies interact with the application.

Configuration is the next most important aspect of the application environment. Configuration dictates both how the application behaves in a given infrastructure and how the infrastructure behaves in relation to the underlying application.

Dependencies are all the different modules or systems an application depends on, from libraries to services or other applications.
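
As a concrete illustration, here is a minimal Python sketch of that separation: the code stays identical in every environment, while the endpoints for its dependencies arrive through configuration. The variable names (APP_DB_URL, APP_CACHE_HOST) are hypothetical, not from the original post.

    import os

    # Infrastructure-specific values come from configuration rather than
    # being hard-coded; only the values supplied by the environment change.
    DB_URL = os.environ.get("APP_DB_URL", "sqlite:///local.db")   # dev default
    CACHE_HOST = os.environ.get("APP_CACHE_HOST", "localhost")    # dev default

    def describe_environment() -> str:
        # Dependencies (database, cache) are reached through configured
        # endpoints, so moving the app between infrastructures means
        # changing configuration, not code.
        return f"db={DB_URL} cache={CACHE_HOST}"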

 


CAP theorem

 


No distributed system is safe from network failures, thus network partitioning generally has to be tolerated. In the presence of a partition, one is then left with two options: consistency or availability. When choosing consistency over availability, the system will return an error or a time-out if particular information cannot be guaranteed to be up to date due to network partitioning. When choosing availability over consistency, the system will always process the query and try to return the most recent available version of the information, even if it cannot guarantee it is up to date due to network partitioning.
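
The trade-off can be sketched in a few lines of Python. This is a toy model, not a real replication protocol: the partitioned flag stands in for a node detecting that it cannot reach its peers, and the CP/AP modes mirror the two choices described above.

    class ReplicaStore:
        # Toy replica illustrating the choice during a partition. The
        # `partitioned` flag stands in for "this node cannot reach its
        # peers"; real systems detect that via timeouts or heartbeats.
        def __init__(self):
            self.local_data = {}      # key -> last locally known value
            self.partitioned = False

        def read(self, key, mode="CP"):
            if self.partitioned and mode == "CP":
                # Consistency over availability: return an error rather
                # than risk serving a value that may be out of date.
                raise TimeoutError("cannot confirm the value is up to date")
            # Availability over consistency (or no partition): serve the
            # most recent locally available version, which may be stale.
            return self.local_data.get(key)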

 

Database systems designed with traditional ACID guarantees in mind, such as RDBMSs, choose consistency over availability, whereas systems designed around the BASE philosophy, common in the NoSQL movement, choose availability over consistency.

 

Many NoSQL stores compromise consistency (in the sense of the CAP theorem) in favor of availability, partition tolerance and speed. Barriers to greater adoption of NoSQL stores include their low-level query languages (as opposed to SQL; for instance, the inability to perform ad-hoc joins across tables), the lack of standardized interfaces, and the huge prior investments organisations have made in existing relational databases. Most NoSQL stores also lack true ACID transactions.

 

 

What is distributed caching and when is it used?

Today’s web, mobile and IoT applications need to operate at web scale, anticipating millions of users, terabytes of data and submillisecond response times, as well as operating on multiple devices around the world.

Distributed caching solves many common problems with data access, improving performance, manageability and scalability, but what is it and how can it benefit businesses?

What is distributed caching?

Caching has become the de facto technology to boost application performance as well as reduce costs. The primary goal of caching is to alleviate bottlenecks that come with traditional databases. By caching frequently used data in memory – rather than making database round trips – application response times can be dramatically improved.
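
A minimal sketch of this idea is the cache-aside pattern: read from an in-memory cache first and only fall back to the database on a miss. The names here (get_user, db_query) are hypothetical stand-ins, not part of any particular product.

    cache = {}  # stand-in for an in-memory cache

    def get_user(user_id, db_query):
        # Cache-aside: try the cache first, fall back to the database on
        # a miss, then populate the cache for subsequent reads.
        if user_id in cache:
            return cache[user_id]     # fast path: no database round trip
        user = db_query(user_id)      # slow path: database round trip
        cache[user_id] = user
        return user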

Distributed caching is simply an extension of this concept, but with the cache configured to span multiple servers. It's commonly used in cloud computing and virtualised environments, where different servers contribute a portion of their cache memory to a pool that can then be accessed by virtual machines. This also makes it a much more scalable option.
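
One way a client can spread keys across such a pool is to hash each key onto the list of cache servers. The sketch below uses simple modulo hashing for brevity, and the node addresses are made up; real clients typically prefer consistent hashing so that adding or removing a server remaps only a fraction of the keys.

    import hashlib

    NODES = ["cache-1:11211", "cache-2:11211", "cache-3:11211"]  # hypothetical pool

    def node_for(key: str) -> str:
        # Hash the key onto the node list so every client agrees on
        # which server in the pool owns a given key.
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]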

The data stored in a distributed cache is quite simply whatever is accessed most often, and its contents change over time as items that haven't been requested in a while are evicted.
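
One common way to realise that behaviour is least-recently-used (LRU) eviction, sketched below: the cache holds a bounded number of entries and drops whichever one has gone longest without being requested. This is an illustrative policy choice, not something the original article specifies.

    from collections import OrderedDict

    class LRUCache:
        # Bounded cache with least-recently-used eviction: the entry
        # that has gone longest without being requested is dropped first.
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None               # cache miss
            self.items.move_to_end(key)   # mark as recently used
            return self.items[key]

        def put(self, key, value):
            self.items[key] = value
            self.items.move_to_end(key)
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict the stalest entry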

Distributed caching can also substantially lower capital and operating costs by reducing workloads on backend systems and reducing network usage. In particular, if the application runs on a relational database such as Oracle, which requires high-end, costly hardware in order to scale, distributed caching that runs on low-cost commodity servers can reduce the need to add expensive resources.

What makes distributed caching effective?

The requirements for effective distributed caching are fairly straightforward. Enterprises generally factor six key criteria into their evaluation, but how important they are depends on the specific situation.

Performance: Specific performance requirements are driven by the underlying application. For a given workload, the cache must meet and sustain the application’s required steady-state performance targets for latency and throughput. Efficiency of performance is a related factor that impacts cost, complexity and manageability.

Scalability: As the workload increases, the cache must continue to deliver the same performance. The cache must be able to scale linearly, easily, affordably and without adversely impacting application performance and availability.

Availability: Data needs to always be available during both planned and unplanned interruptions, so the cache must ensure availability of data 24/7.

Manageability: The use of a cache should not place undue burden on the operations team. It should be reasonably quick to deploy and easy to monitor and manage.

Simplicity: Adding a cache to a deployment should not introduce unnecessary complexity, or make more work for developers.

Affordability: Cost is always a consideration with any IT decision, both upfront implementation as well as ongoing costs. An evaluation should consider total cost of ownership, including license fees as well as hardware, services, maintenance and support.

 

From https://www.itpro.co.uk/virtualisation/30271/our-5-minute-guide-to-distributed-caching
