Tuesday, February 23, 2016

F5 Load Balancers: LTM vs GTM

F5® BIG-IP® Global Traffic Manager™ (GTM) distributes DNS and user application requests based on business policies, data center and cloud service conditions, user location, and application performance.

BIG-IP GTM delivers F5’s high-performance DNS Services with visibility, reporting, and analysis; hyper-scales and secures DNS responses geographically to survive DDoS attacks; delivers a complete, real-time DNSSEC solution and ensures global application high availability in all hybrid environments.

Fig 1.1 F5 Load Balancers (NB)

Most things you do on the internet or private networks begin with name resolution – so it makes sense that if you're going to load balance an application, it would begin at this tier: resolving names to IPs based on availability, performance, and even persistence.

It's important to note that traffic does not "route" through the GTM; the GTM simply tells you the best IP to route to, based on metrics, for the URL in question. That IP can be almost anything, but typically it's an actual server, or a virtual IP that fronts multiple servers. Like a traditional DNS architecture, you usually have multiple GTMs in your architecture for redundancy/availability.

The primary configuration element in a GTM is known as a wide IP, or WIP for short (or, as my significant other likes to call it, a "Wipey"). There are many configuration elements that work in concert with a WIP, but at the bottom of it all is the wide IP.

A WIP equates to the common URL you're load balancing, for example www.networksbaseline.in. A pool or pools are typically attached to a WIP, containing the IPs it's intelligently resolving. Like your run-of-the-mill DNS server, the GTM does not tell the requester any information about ports. However, the monitors associated with the pool members can certainly monitor availability or performance on specific ports.
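The "addresses only, never ports" point is easy to see with a plain resolver call. The sketch below uses Python's standard library, not any F5 API; it resolves localhost purely so it works offline – against a GTM the name queried would be the wide IP itself.

```python
import socket

def resolve(name):
    """Return the A-record IPs for a name, exactly as a DNS client
    sees them: addresses only, never ports. A GTM answers the same
    way -- it just picks those addresses intelligently."""
    _, _, addresses = socket.gethostbyname_ex(name)
    return addresses

# "localhost" keeps this sketch offline-friendly; in practice the name
# would be the wide IP, e.g. www.networksbaseline.in.
print(resolve("localhost"))
```

The client is expected to already know the port (80, 443, and so on); DNS never carries it.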

Fig 1.2 F5 Load Balancers (NB)

Unmatched DNS Performance
BIG‑IP GTM delivers DNS performance that can handle even the busiest sites. When sites see a spike in DNS query volume, whether from legitimate requests or distributed denial-of-service (DDoS) attacks, BIG-IP GTM manages requests with multicore processing and F5 DNS Express™, dramatically increasing authoritative DNS performance – up to 20 million RPS in version 11.5 – to quickly respond to all queries. This helps your organization provide the best quality of service (QoS) for your users while eliminating poor application performance. DNS Express improves standard DNS server functions by offloading DNS responses as an authoritative DNS server.

BIG-IP GTM accepts zone transfers of DNS records from the primary DNS server and answers DNS queries authoritatively.

Benefits and features of multicore processing and DNS Express include:
• High-speed response and DDoS attack protection with in-memory DNS
• Authoritative DNS replication in multiple BIG-IP or DNS service deployments for faster responses
• Authoritative DNS and DNSSEC in virtual clouds for disaster recovery and fast, secure responses
• Scalable DNS performance for quality of app and service experience
• The ability to consolidate DNS servers and increase ROI

In cases of very high volumes for apps and services, or a DNS DDoS attack, BIG-IP GTM hyper-scales in Rapid Response Mode (RRM) up to 40 million RPS. It extends availability with unmatched performance and security, absorbing and responding to queries at up to 200 percent of the normal limits. See page 13 of the F5 datasheet for performance metrics and details.

Fig 1.3 F5 Load Balancers (NB)

LTM – Local Traffic Manager Overview
The Local Traffic Manager, aka LTM, is the most popular module offered on the F5 Networks BIG-IP platform. The real power of the LTM is that it's a full proxy, allowing you to augment client-side and server-side connections, all while making informed load balancing decisions on availability, performance, and persistence.

"Local" in the name is important: as opposed to the GTM, traffic actually flows through the LTM to the servers it load balances. Typically those servers sit "locally" in the same data center as the LTM, though that isn't a requirement. With SNAT configured on the VIP, if you can route to it you can load balance it – so it's possible to have servers in different data centers be part of the same pool in an LTM VIP.

The primary configuration element on an LTM is the virtual IP, or VIP for short. There are a plethora of configuration elements that work with VIPs, but at the heart of the technology it's a VIP they are all part of. Like a WIP, a VIP equates to the URL you're load balancing, but at its lowest level. Like a WIP, it typically contains a pool with the servers it's load balancing and monitor(s) to measure availability/performance.
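For illustration, here is roughly what a minimal VIP-and-pool definition looks like in BIG-IP configuration (bigip.conf / tmsh) syntax. The names, addresses, monitor, and SNAT choices below are made-up placeholders for the pattern described above, not a recommended configuration:

```
ltm pool /Common/web_pool {
    members {
        /Common/10.0.0.11:80 {
            address 10.0.0.11
        }
        /Common/10.0.0.12:80 {
            address 10.0.0.12
        }
    }
    monitor /Common/http
}
ltm virtual /Common/www_vip {
    destination /Common/10.0.0.100:443
    ip-protocol tcp
    pool /Common/web_pool
    source-address-translation {
        type automap
    }
}
```

Note the VIP listens on 443 while the pool members listen on 80 – the full proxy makes that port translation trivial.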

Some of the key differences between the GTM and LTM
  • The biggest difference between the GTM and LTM, as mentioned earlier, is that traffic doesn't actually flow through the GTM to your servers.
  • The GTM is an intelligent name resolver, intelligently resolving names to IP addresses.
  • Once the GTM provides you with an IP to route to, you're done with the GTM until you ask it to resolve another name for you.
  • Similar to a standard DNS server, the GTM does not provide any port information in its resolution.
  • The LTM doesn't do any name resolution and assumes a DNS decision has already been made.
  • When traffic is directed to the LTM, traffic flows through its full-proxy architecture to the servers it's load balancing.
  • Since the LTM is a full proxy, it's easy for it to listen on one port but direct traffic to multiple hosts listening on any port specified.
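The last bullet is the essence of the full-proxy model, and it can be sketched in a few lines of generic Python. The pool members, addresses, and round-robin choice below are hypothetical placeholders, not F5 internals – the point is that the client-side connection and the server-side connection are two separate sockets, so the ports don't have to match:

```python
import itertools
import socket
import threading

# Hypothetical pool members: the proxy can listen on one port (say 443)
# while members listen on completely different ports.
POOL = [("10.0.0.11", 8080), ("10.0.0.12", 8443)]
_rotation = itertools.cycle(POOL)

def pick_member():
    """Round-robin pool selection (one of several LTM load balancing methods)."""
    return next(_rotation)

def handle_client(client_sock):
    """Full-proxy behaviour: terminate the client's TCP connection, then
    open a *separate* connection to the chosen pool member and relay
    bytes in both directions."""
    server_sock = socket.create_connection(pick_member())
    threading.Thread(target=relay, args=(client_sock, server_sock),
                     daemon=True).start()
    relay(server_sock, client_sock)

def relay(src, dst):
    """Pump bytes one way until the source side closes."""
    while (chunk := src.recv(4096)):
        dst.sendall(chunk)
    dst.close()
```

Because each side is its own connection, the proxy can also rewrite, inspect, or persist traffic in between – which is exactly what the LTM's profiles and iRules exploit.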
How do the GTM & LTM work together?
The GTM and LTM can work together, or they can be completely independent. If your organization owns both modules, it's typically using them together, and that's where the real power comes in. They do this through a proprietary protocol called iQuery.

iQuery, running on TCP port 4353, reports VIP availability/performance back to the GTMs. The GTMs can then dynamically resolve VIPs that live on an LTM (or LTMs).
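Since iQuery rides on TCP 4353, a first troubleshooting step when GTM/LTM communication breaks is simply checking that the port is reachable between the devices. This is a generic socket check, not the iQuery protocol itself (which is a proprietary, SSL-protected exchange); the host argument would be an LTM self IP in practice:

```python
import socket

def iquery_port_open(host, port=4353, timeout=3.0):
    """Return True if a TCP connection to the iQuery port succeeds.
    This only verifies network reachability of port 4353 -- it says
    nothing about certificates or the iQuery session itself."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```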

When a GTM has LTMs as servers in its configuration, there is no need to monitor the actual VIP(s) with application monitors, because the LTM is already doing that and iQuery reports the information back to the GTM.

