Tuesday, July 27, 2010

Multiprotocol Label Switching (MPLS)


Multiprotocol Label Switching (MPLS) is a mechanism in high-performance telecommunications networks that directs and carries data from one network node to the next. MPLS makes it easy to create "virtual links" between distant nodes, and it can encapsulate packets of various network protocols.
MPLS is a highly scalable, protocol agnostic, data-carrying mechanism. In an MPLS network, data packets are assigned labels. Packet-forwarding decisions are made solely on the contents of this label, without the need to examine the packet itself. This allows one to create end-to-end circuits across any type of transport medium, using any protocol. The primary benefit is to eliminate dependence on a particular Data Link Layer technology, such as ATM, frame relay, SONET or Ethernet, and eliminate the need for multiple Layer 2 networks to satisfy different types of traffic. MPLS belongs to the family of packet-switched networks.
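To make the label idea concrete, here is a minimal sketch (not from the source text) of the 32-bit MPLS label stack entry defined in RFC 3032 (a 20-bit label, 3-bit traffic class, a bottom-of-stack bit and an 8-bit TTL), together with a toy label-swap lookup; the table contents and function names are invented for illustration.

```python
# Illustrative sketch of label-based forwarding (all values are made up).
import struct

def pack_label_entry(label: int, tc: int, bottom_of_stack: bool, ttl: int) -> bytes:
    """Encode one 32-bit MPLS label stack entry (RFC 3032 layout)."""
    word = (label << 12) | (tc << 9) | (int(bottom_of_stack) << 8) | ttl
    return struct.pack("!I", word)

def unpack_label_entry(data: bytes):
    """Decode a 4-byte entry back into (label, tc, bottom_of_stack, ttl)."""
    (word,) = struct.unpack("!I", data)
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, bool((word >> 8) & 1), word & 0xFF

# Toy label-forwarding table: incoming label -> (outgoing label, outgoing interface).
lfib = {16: (17, "eth1"), 18: (23, "eth2")}

def forward(entry: bytes):
    """Swap the label and pick the outgoing interface without looking past the shim header."""
    label, tc, bos, ttl = unpack_label_entry(entry)
    out_label, out_if = lfib[label]
    return out_if, pack_label_entry(out_label, tc, bos, ttl - 1)

out_if, new_entry = forward(pack_label_entry(16, 0, True, 64))
print(out_if)   # eth1
```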
MPLS operates at an OSI Model layer that is generally considered to lie between traditional definitions of Layer 2 (Data Link Layer) and Layer 3 (Network Layer), and thus is often referred to as a "Layer 2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.
A number of different technologies were previously deployed with essentially identical goals, such as frame relay and ATM. MPLS technologies have evolved with the strengths and weaknesses of ATM in mind. Many network engineers agree that ATM should be replaced with a protocol that requires less overhead while providing connection-oriented services for variable-length frames. MPLS is currently replacing some of these technologies in the marketplace, and it may eventually replace them entirely, aligning networks with current and future technology needs.
In particular, MPLS dispenses with the cell-switching and signaling-protocol baggage of ATM. MPLS recognizes that small ATM cells are not needed in the core of modern networks, since modern optical networks (as of 2008) are so fast (at 40 Gbit/s and beyond) that even full-length 1500 byte packets do not incur significant real-time queuing delays (the need to reduce such delays — e.g., to support voice traffic — was the motivation for the cell nature of ATM).
At the same time, MPLS attempts to preserve the traffic engineering and out-of-band control that made frame relay and ATM attractive for deploying large-scale networks.
While the traffic management benefits of migrating to MPLS are quite valuable (better reliability, increased performance), there is a significant loss of visibility and access into the MPLS cloud for IT departments.

Friday, July 23, 2010

Physical Layout


A typical server rack, commonly seen in colocation.
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers to large freestanding storage silos which occupy many tiles on the floor. Some equipment such as mainframe computers and storage devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each[6]; when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).[7]
Local building codes may govern the minimum ceiling heights.


A bank of batteries in a large data center, used to provide power until diesel generators can start.
The physical environment of a data center is rigorously controlled:
• Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments"[8] recommends a temperature range of 16–24 °C (61–75 °F) and a humidity range of 40–55% with a maximum dew point of 15 °C as optimal for data center conditions.[9] The electrical power used heats the air in the data center; unless the heat is removed, the ambient temperature rises, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return air below the dew point; with too much humidity, water may begin to condense on internal components. If the atmosphere is too dry, ancillary humidification systems may add water vapor, because excessively low humidity can cause static electricity discharge problems that may damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs. (A simple check of sensor readings against these guideline ranges is sketched after this list.)
• Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. Washington state now has a few data centers that cool all of the servers using outside air 11 months out of the year. They do not use chillers or air conditioners, creating potential energy savings in the millions.[10]
• Backup power consists of one or more uninterruptible power supplies and/or diesel generators.
• To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
• Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80–100 cm (31–39 in) void to provide better, more uniform air distribution. These provide a plenum for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling. Data cabling is typically routed through overhead cable trays in modern data centers, but some designers still recommend routing cables under the raised floor for security reasons, and to leave room for adding cooling systems above the racks if that enhancement becomes necessary. Smaller/less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.
• Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using hand held fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full scale fire if it develops. Fire sprinklers require 18 in (46 cm) of clearance (free of cable trays, etc.) below the sprinklers. Clean agent fire suppression gaseous systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed. For critical facilities these firewalls are often insufficient to protect heat-sensitive electronic equipment, however, because conventional firewall construction is only rated for flame penetration time, not heat penetration. There are also deficiencies in the protection of vulnerable entry points into the server room, such as cable penetrations, coolant line penetrations and air ducts. For mission critical data centers fireproof vaults with a Class 125 rating are necessary to meet NFPA 75[11] standards.
• Physical security also plays a large role in data centers. Physical access to the site is usually restricted to selected personnel, with controls including bollards and mantraps.[12] Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition mantraps is becoming commonplace.
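As a rough illustration of the ASHRAE figures quoted in the air-conditioning item above, the sketch below checks a set of sensor readings against the recommended envelope; the function name and the example readings are assumptions made for this illustration.

```python
# Minimal sketch: check sensor readings against the ASHRAE ranges quoted above
# (16-24 degrees C, 40-55% relative humidity, dew point at most 15 degrees C).

def within_ashrae_guidelines(temp_c: float, rel_humidity_pct: float, dew_point_c: float) -> bool:
    """Return True if all three readings fall inside the recommended envelope."""
    return (16.0 <= temp_c <= 24.0
            and 40.0 <= rel_humidity_pct <= 55.0
            and dew_point_c <= 15.0)

print(within_ashrae_guidelines(22.0, 45.0, 9.5))   # True
print(within_ashrae_guidelines(27.0, 45.0, 9.5))   # False: too warm
```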

source : http://en.wikipedia.org/wiki/Data_center

Friday, July 16, 2010

Data Center Classification

The TIA-942: Data Center Standards Overview describes the requirements for the data center infrastructure. The simplest is a Tier 1 data center, which is basically a server room, following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission-critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods. Another consideration is the placement of the data center in a subterranean context, for data security as well as for environmental considerations such as cooling requirements.[2]
The four levels are defined, and copyrighted, by the Uptime Institute, a Santa Fe, New Mexico-based think tank and professional services organization. The levels describe the availability of data from the hardware at a location: the higher the tier, the greater the availability. The levels are:

Tier Level Requirements

Tier 1
• Single non-redundant distribution path serving the IT equipment
• Non-redundant capacity components
• Basic site infrastructure guaranteeing 99.671% availability

Tier 2
• Fulfils all Tier 1 requirements
• Redundant site infrastructure capacity components guaranteeing 99.741% availability

Tier 3
• Fulfils all Tier 1 and Tier 2 requirements
• Multiple independent distribution paths serving the IT equipment
• All IT equipment must be dual-powered and fully compatible with the topology of the site's architecture
• Concurrently maintainable site infrastructure guaranteeing 99.982% availability

Tier 4
• Fulfils all Tier 1, Tier 2 and Tier 3 requirements
• All cooling equipment is independently dual-powered, including chillers and heating, ventilating and air conditioning (HVAC) systems
• Fault-tolerant site infrastructure with electrical power storage and distribution facilities guaranteeing 99.995% availability
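The availability percentages above imply roughly the following downtime budgets per year; the short sketch below is only the arithmetic, not part of the TIA or Uptime Institute documents.

```python
# Downtime per year implied by each tier's availability figure (illustrative arithmetic).
HOURS_PER_YEAR = 365.25 * 24

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}

for name, availability in tiers.items():
    downtime_h = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{name}: {availability}% availability = about {downtime_h:.1f} h downtime/year")
# Tier 1: ~28.8 h, Tier 2: ~22.7 h, Tier 3: ~1.6 h, Tier 4: ~0.4 h per year.
```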

source : http://en.wikipedia.org/wiki/Data_center

Thursday, July 15, 2010

Requirements for modern data centers

Racks of telecommunications equipment in part of a data center.
IT operations are a crucial aspect of most organizational operations. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.

source : http://en.wikipedia.org/wiki/Data_center

Thursday, July 8, 2010

Data Center

Definition :

A data center (or data centre) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.

Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, elevated floors, and cable trays (installed overhead or under the elevated floor). Also, old computers required a great deal of power, and had to be cooled to avoid overheating. Security was important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of client-server computing, during the 1990s, microcomputers (now called "servers") started to find their places in the old computer rooms. The availability of inexpensive networking equipment, coupled with new standards for network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time.

The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results.

As of 2007[update], data center design, construction, and operation is a well-known discipline. Standard documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. There is still a lot of development being done in operation practice, and also in environmentally-friendly data center design. Data centers are typically very expensive to build and maintain.

source : http://en.wikipedia.org/wiki/Data_center

Tuesday, July 6, 2010

Trunking

Etymology

How the term came to apply to communications is unclear, but its earlier use in transport terminology (e.g., India's Grand Trunk Road, Canada's Grand Trunk Railway) was based on the natural model of a tree trunk and its branches. It is likely that the same analogy drove the communications usage.

An alternative explanation is that, from an early stage in the development of telephony, the need was found for thick cables (up to around 10 cm in diameter) containing many pairs of wires. These were usually covered in lead, so in both colour and size they resembled an elephant's trunk. This leaves open the question of what term was applied to connections among exchanges during the years when only open wire was used.

Radio communications

In two-way radio communications, trunking refers to the ability of transmissions to be served by free channels whose availability is determined by algorithmic protocols. In conventional (i.e., not trunked) radio, users of a single service share one or more exclusive radio channels and must wait their turn to use them, analogous to a group of cashiers in a grocery store, where each cashier serves his or her own line of customers: each cashier represents a radio channel, and each customer represents a radio user transmitting on their radio.

Trunked radio systems (TRS) pool all of the cashiers (channels) into one group and use a store manager (site controller) that assigns incoming shoppers to free cashiers as determined by the store's policies (TRS protocols).

In a TRS, individual transmissions in any conversation may take place on several different channels, much as a family of shoppers checking out at the same time may be assigned different cashiers by the store manager. Similarly, if a single shopper checks out more than once, they may be assigned a different cashier each time.

Trunked radio systems provide greater efficiency at the cost of greater management overhead. The store manager's orders must be conveyed to all the shoppers. This is done by assigning one or more radio channels as the "control channel". The control channel transmits data from the site controller that runs the TRS, and is continuously monitored by all of the field radios in the system so that they know how to follow the various conversations between members of their talkgroups (families) and other talkgroups as they hop from radio channel to radio channel.
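To make the cashier analogy concrete, here is a toy model of a site controller that grants the first free voice channel to each new transmission and announces the grant on the control channel; it is a simplified illustration, not any real TRS protocol, and all names and channel numbers are invented.

```python
# Toy model of trunked channel assignment (not a real TRS protocol).

class SiteController:
    def __init__(self, channels):
        self.free = list(channels)        # voice channels currently idle
        self.in_use = {}                  # talkgroup -> channel

    def request(self, talkgroup: str):
        """Grant the first free channel, announcing the grant on the control channel."""
        if not self.free:
            return None                   # all channels busy; the radio must wait
        channel = self.free.pop(0)
        self.in_use[talkgroup] = channel
        print(f"control channel: talkgroup {talkgroup} -> channel {channel}")
        return channel

    def release(self, talkgroup: str):
        """Return the channel to the pool when the transmission ends."""
        self.free.append(self.in_use.pop(talkgroup))

controller = SiteController(channels=[1, 2, 3])
controller.request("fire-dispatch")       # gets channel 1
controller.request("ems-ops")             # gets channel 2
controller.release("fire-dispatch")
controller.request("public-works")        # gets channel 3; channel 1 is free again too
```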

TRSs have grown massively in complexity since their introduction, and now include multi-site systems that can cover entire states or groups of states. This is similar to a chain of grocery stores: the shopper generally goes to the nearest store, but if there are complications or congestion, the shopper may opt to go to a neighboring store. Each store in the chain can talk to the others and pass messages between shoppers at different stores if necessary, and they provide backup to each other: if one store has to be closed for repair, the other stores pick up its customers.

TRSs have greater risks to overcome than conventional radio systems, in that a loss of the site controller (the store manager) would leave the system's traffic unmanaged. In this case, most TRSs automatically revert to conventional operation. In spite of these risks, TRSs usually maintain reasonable uptime.

TRSs are more difficult to monitor via radio scanner than conventional systems; however, the larger manufacturers of radio scanners have introduced models that, with a little extra programming, are able to follow TRSs quite efficiently.

Telecommunications

Trunk line

A trunk line is a circuit connecting telephone switchboards (or other switching equipment), as distinguished from a local loop circuit, which extends from telephone exchange switching equipment to individual telephones or information origination/termination equipment.[1][2]

When dealing with a private branch exchange (PBX), trunk lines are the phone lines coming into the PBX from the telephone provider [3]. This differentiates these incoming lines from extension lines that connect the PBX to (usually) individual phone sets. Trunking saves cost, because there are usually fewer trunk lines than extension lines, since it is unusual in most offices to have all extension lines in use for external calls at once. Trunk lines transmit voice and data in formats such as analog, T1, E1, ISDN or PRI. The dial tone lines for outgoing calls are called DDCO (Direct Dial Central Office) trunks.
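The claim that far fewer trunk lines than extensions are needed is usually quantified with the Erlang B formula, which is not mentioned in the source text but is sketched below as an illustration: given the offered traffic in erlangs and the number of trunks, it estimates the probability that a new call finds every trunk busy.

```python
# Illustrative sketch: Erlang B blocking probability for a trunk group.

def erlang_b(offered_erlangs: float, trunks: int) -> float:
    """Probability that a new call finds all `trunks` lines busy."""
    blocking = 1.0
    for k in range(1, trunks + 1):
        blocking = (offered_erlangs * blocking) / (k + offered_erlangs * blocking)
    return blocking

# Example: 100 extensions each on an outside call 5% of the time offer about 5 erlangs;
# 10 trunks already keep the blocking probability below 2%.
print(f"{erlang_b(5.0, 10):.3f}")   # ~0.018
```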

Trunk call

In the UK and the Commonwealth countries, a trunk call was a long distance one as opposed to a local call. See Subscriber trunk dialling and Trunk vs Toll.

Telephone exchange

Trunking also refers to the connection of switches and circuits within a telephone exchange.[4] Trunking is closely related to the concept of grading: it allows a group of inlet switches to share a smaller group of outgoing circuits at the same time. Thus the service provider can provision fewer circuits than might otherwise be required, allowing many users to "share" a smaller number of connections and achieve capacity savings.[5][6]

Computer networks

Link aggregation

In computer networking, trunking is an informal term for the use of multiple network cables or ports in parallel to increase the link speed beyond the limit of any single cable or port. This is called link aggregation. These aggregated links may be used to interconnect switches.

VLANs

In the context of VLANs, Avaya and Cisco use the term "trunking" to mean "VLAN multiplexing": carrying multiple VLANs over a single network link through the use of a "trunking protocol". To allow multiple VLANs on one link, frames from individual VLANs must be identified. The most common and preferred method, IEEE 802.1Q, adds a tag to the Ethernet frame header, labeling it as belonging to a certain VLAN. Since 802.1Q is an open standard, it is the only option in an environment with equipment from multiple vendors.
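As a concrete illustration of that tag, the sketch below builds the 4-byte 802.1Q header field (the TPID 0x8100 followed by 16 bits carrying the priority, drop-eligible bit and 12-bit VLAN ID); the helper name and example VLAN are invented for this example.

```python
# Minimal sketch: the 4-byte IEEE 802.1Q tag inserted into an Ethernet header.
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Return TPID (0x8100) plus the tag control information for `vlan_id`."""
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(vlan_id=100).hex())   # 81000064 -> the frame is labeled as VLAN 100
```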

Cisco also has a proprietary trunking protocol called Inter-Switch Link which encapsulates the Ethernet frame with its own container, which labels the frame as belonging to a specific VLAN.

source : http://en.wikipedia.org/wiki/Trunking

Friday, July 2, 2010

Virtual Private Network


A virtual private network (VPN) is a network that uses a public telecommunication infrastructure, such as the Internet, to provide remote offices or individual users with secure access to their organisation's network. It aims to avoid an expensive system of owned or leased lines that can only be used by one organisation. The goal of a VPN is to provide the organisation with the same secure capabilities, but at a much lower cost.

It encapsulates data transfers between two or more networked devices not on the same private network so as to keep the transferred data private from other devices on one or more intervening local or wide area networks. There are many different classifications, implementations, and uses for VPNs.

History

Until the end of the 1990s, the computers in computer networks were connected through very expensive leased lines and/or dial-up phone lines. It could cost thousands of dollars for 56 kbit/s lines or tens of thousands for T1 lines, depending on the distance between the sites.

Virtual private networks reduce network costs because they avoid the need for many leased lines that individually connect remote sites. Users can exchange private data securely over the shared infrastructure, making the expensive leased lines redundant.[1]

VPN technologies have myriad protocols, terminologies and marketing influences that define them. For example, VPN technologies can differ in:

  • The protocols they use to tunnel the traffic
  • The tunnel's termination point, i.e., customer edge or network provider edge
  • Whether they offer site-to-site or remote access connectivity
  • The levels of security provided
  • The OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity

Some classification schemes are discussed in the following sections.

Security Mechanisms

Secure VPNs use cryptographic tunneling protocols to provide confidentiality (blocking interception and packet sniffing), sender authentication (blocking identity spoofing), and message integrity (preventing message alteration).

Secure VPN protocols include, for example, IPsec, SSL/TLS, PPTP with MPPE, and L2TP running over IPsec.

Authentication

Tunnel endpoints must authenticate before secure VPN tunnels can be established.

User-created remote access VPNs may use passwords, biometrics, two-factor authentication or other cryptographic methods.

Network-to-network tunnels often use passwords or digital certificates, as they permanently store the key to allow the tunnel to be established automatically, without operator intervention.

Routing

Tunneling protocols can be used in a point-to-point topology that would theoretically not be considered a VPN, because a VPN by definition is expected to support arbitrary and changing sets of network nodes. But since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs often are simply defined tunnels running conventional routing protocols.

On the other hand, provider-provisioned VPNs (PPVPNs) need to support multiple coexisting VPNs, hidden from one another, but operated by the same service provider.

PPVPN building blocks

Depending on whether the PPVPN runs at layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or a combination of both. Multiprotocol Label Switching (MPLS) functionality blurs the L2-L3 identity.

RFC 4026 generalized the following terms to cover L2 and L3 VPNs, but they were introduced in RFC 2547.[6]

Customer edge device (CE)

A device at the customer premises that provides access to the PPVPN. Sometimes it is just a demarcation point between provider and customer responsibility; other providers allow customers to configure it.

Provider edge device (PE)

A PE is a device, or set of devices, at the edge of the provider network, that presents the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.

Provider device (P)

A P device operates inside the provider's core network, and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, are often high-capacity optical links between major locations of the provider.

User-visible PPVPN services

This section deals with the types of VPN considered in the IETF; some historical names were replaced by these terms.

OSI Layer 1 services

Virtual private wire and private line services (VPWS and VPLS)

In both of these services, the provider does not offer a full routed or bridged network, but provides components to build customer-administered networks. VPWS are point-to-point while VPLS can be point-to-multipoint. They can be Layer 1 emulated circuits with no data link structure.

The customer determines the overall customer VPN service, which also can involve routing, bridging, or host network elements.

An unfortunate acronym confusion can occur between Virtual Private Line Service and Virtual Private LAN Service; the context should make it clear whether "VPLS" means the layer 1 virtual private line or the layer 2 virtual private LAN.

OSI Layer 2 services

Virtual LAN

A Layer 2 technique that allows for the coexistence of multiple LAN broadcast domains, interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).

Virtual private LAN service (VPLS)

Developed by the IEEE, VLANs allow multiple tagged LANs to share common trunking, and frequently comprise only customer-owned facilities. Whereas the Layer 1 services described above emulate point-to-point and point-to-multipoint circuits, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as Metro Ethernet.

As used in this context, a VPLS is a Layer 2 PPVPN, rather than a private line, emulating the full functionality of a traditional local area network (LAN). From a user standpoint, a VPLS makes it possible to interconnect several LAN segments over a packet-switched, or optical, provider core; a core transparent to the user, making the remote LAN segments behave as one single LAN.[7]

In a VPLS, the provider network emulates a learning bridge, which optionally may include VLAN service.

Pseudo wire (PW)

PW is similar to VPWS, but it can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast, when aiming to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.

IP-only LAN-like service (IPLS)

IPLS is a subset of VPLS in which the CE devices must have L3 capabilities; an IPLS presents packets rather than frames. It may support IPv4 or IPv6.

OSI Layer 3 PPVPN architectures

This section discusses the main architectures for PPVPNs, one where the PE disambiguates duplicate addresses in a single routing instance, and the other, virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, have gained the most attention.

One of the challenges of PPVPNs involves different customers using the same address space, especially the IPv4 private address space[8]. The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.

BGP/MPLS PPVPN

In the method defined by RFC 2547, BGP extensions advertise routes in the IPv4 VPN address family, which take the form of 12-byte strings beginning with an 8-byte Route Distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE.
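A minimal sketch of that 12-byte VPN-IPv4 address, assuming a type-0 route distinguisher (2-byte type, 2-byte AS number, 4-byte assigned value) followed by the IPv4 address; the AS number, assigned value and prefix below are arbitrary example values.

```python
# Illustrative sketch: building a 12-byte VPN-IPv4 address (type-0 RD + IPv4 address).
import socket
import struct

def vpn_ipv4_address(asn: int, assigned: int, ipv4: str) -> bytes:
    """8-byte route distinguisher (type 0: ASN + assigned number) plus a 4-byte IPv4 address."""
    rd = struct.pack("!HHI", 0, asn, assigned)       # type 0, 2-byte ASN, 4-byte value
    return rd + socket.inet_aton(ipv4)

addr = vpn_ipv4_address(asn=65000, assigned=42, ipv4="10.0.0.1")
print(len(addr), addr.hex())   # 12 bytes; the RD keeps overlapping 10.0.0.0/8 customers apart
```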

PEs understand the topology of each VPN; the PEs are interconnected with MPLS tunnels, either directly or via P routers. In MPLS terminology, the P routers are Label Switch Routers without awareness of VPNs.

Virtual router PPVPN

The virtual router architecture,[9][10] as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By provisioning logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label, and do not need route distinguishers.

Virtual router architectures do not need to disambiguate addresses, because rather than a single PE router having awareness of all the PPVPNs, the PE contains multiple virtual router instances, each of which belongs to one and only one VPN.

Plaintext Tunnels

Some virtual networks may not use encryption to protect the data contents. While VPNs often provide security, an unencrypted overlay network does not neatly fit within the secure or trusted categorization. For example, a tunnel set up between two hosts using Generic Routing Encapsulation (GRE) would in fact be a virtual private network, but it is neither secure nor trusted.
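For a sense of how little a plaintext tunnel adds, here is a sketch of the basic 4-byte GRE header from RFC 2784 (a flags/version word followed by the EtherType of the payload); nothing in it is encrypted or authenticated, which is the point made above, and the payload placeholder is illustrative.

```python
# Minimal sketch: the basic 4-byte GRE header that wraps a plaintext IPv4 packet.
import struct

def gre_header(protocol_type: int = 0x0800) -> bytes:
    """Flags and version all zero (no checksum/key/sequence), then the payload EtherType."""
    return struct.pack("!HH", 0x0000, protocol_type)

inner_ip_packet = b"..."                       # an unencrypted IPv4 packet would go here
gre_packet = gre_header() + inner_ip_packet    # anyone on the path can read the payload
print(gre_packet[:4].hex())                    # 00000800
```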

Besides the GRE example above, native plaintext tunneling protocols include Layer 2 Tunneling Protocol (L2TP) when it is set up without IPsec and Point-to-Point Tunneling Protocol (PPTP) when it does not use Microsoft Point-to-Point Encryption (MPPE).

Trusted delivery networks

Trusted VPNs do not use cryptographic tunneling, and instead rely on the security of a single provider's network to protect the traffic.

From the security standpoint, VPNs either trust the underlying delivery network, or must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs only among physically secure sites, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.

VPNs in mobile environments

Mobile VPNs handle the special circumstances when an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks such as data networks from cellular carriers or between multiple Wi-Fi access points.[14] Mobile VPNs have been widely used in public safety, where they give law enforcement officers access to mission-critical applications, such as computer-assisted dispatch and criminal databases, as they travel between different subnets of a mobile network.[15] They are also used in field service management and by healthcare organizations,[16] among other industries.

Increasingly, mobile VPNs are being adopted by mobile professionals and white-collar workers who need reliable connections.[16] They allow users to roam seamlessly across networks and in and out of wireless-coverage areas without losing application sessions or dropping the secure VPN session. A conventional VPN cannot survive such events because the network tunnel is disrupted, causing applications to disconnect, time out[14], or fail, or even causing the computing device itself to crash.[16]

Instead of logically tying the endpoint of the network tunnel to the physical IP address, each tunnel is bound to a permanently associated IP address at the device. The mobile VPN software handles the necessary network authentication and maintains the network sessions in a manner transparent to the application and the user.[14] The Host Identity Protocol (HIP), under study by the Internet Engineering Task Force, is designed to support mobility of hosts by separating the role of IP addresses for host identification from their locator functionality in an IP network. With HIP a mobile host maintains its logical connections established via the host identity identifier while associating with different IP addresses when roaming between access networks.

source : http://en.wikipedia.org/wiki/Virtual_private_network