What caused these problems, precisely?
If you are an IT administrator, network manager, networking software designer, or networking silicon designer, have you ever thought of a need for something like the following:
"A need for switching traffic from multiple, independent operating systems, each with distinct IP and MAC addresses, all sharing the same physical interface."
This is one situation the Ethernet protocol itself never anticipated. Thus, the virtual switch design that sits above the hypervisor and performs exactly this kind of switching will still stand as an "ad hoc solution" to the problem of VM switching.
Most of the issues came from the introduction of the vSwitch, which is also called a Virtual Ethernet Bridge (VEB). Before going into the proposed solutions, let us first explore how a Virtual Ethernet Bridge works. However, remember that we are still talking about the vSwitch in the server; we have not yet entered into any solution in the edge switches.
Virtual Ethernet Bridges
As I mentioned in my previous post, a Virtual Ethernet Bridge (VEB) is a virtual Ethernet switch that you implement in a virtualized server environment. It is anything that mimics a traditional external layer 2 (L2) switch or bridge for connecting VMs. VEBs can switch traffic between VMs on a single physical server, or they can connect VMs to the external network.
The most common implementations of a VEB are software-based or hardware-based.
Software-based VEBs – Virtual Switches
In a virtualized server, the hypervisor abstracts and shares physical NICs among multiple virtual machines, creating virtual NICs for each virtual machine. For the vSwitch, the physical NIC acts as the uplink to the external network. The hypervisor implements one or more software-based virtual switches that connect the virtual NICs to the physical NICs.
Data traffic received by a physical NIC passes to a vSwitch. The vSwitch uses its hypervisor-based configuration information to forward traffic to the correct VMs. When a VM transmits traffic from its virtual NIC, the vSwitch forwards the traffic in one of two ways (a minimal sketch of this decision follows the list):
• If the destination is external to the physical server or on a different vSwitch, the vSwitch forwards the traffic to the physical NIC (blue line in the picture).
• If the destination is internal to the physical server on the same vSwitch, the vSwitch forwards the traffic directly to another VM (gray line in the picture).
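To make those two paths concrete, here is a minimal Python sketch of the forwarding decision a VEB makes. It is purely illustrative: the names (VirtualSwitch, attach_vm, Frame, and so on) are my own simplifications and not any hypervisor's actual API, and a real vSwitch would also do MAC learning, VLAN handling, flooding of broadcast/unknown traffic, and much more.

```python
# Toy sketch of a software VEB's forwarding decision (illustrative only;
# names and structures are simplified assumptions, not a real hypervisor API).

from dataclasses import dataclass


@dataclass
class Frame:
    src_mac: str
    dst_mac: str
    payload: bytes = b""


class VirtualSwitch:
    """A toy VEB: maps VM MAC addresses to virtual NIC 'ports'."""

    def __init__(self, uplink_name: str = "pNIC0"):
        self.vnic_ports = {}          # MAC -> virtual NIC / VM identifier
        self.uplink_name = uplink_name

    def attach_vm(self, mac: str, vnic: str) -> None:
        """Register a VM's virtual NIC on this vSwitch."""
        self.vnic_ports[mac] = vnic

    def forward(self, frame: Frame) -> str:
        """Return where the frame goes: a local vNIC or the physical uplink."""
        if frame.dst_mac in self.vnic_ports:
            # Gray path: the destination VM sits on the same vSwitch,
            # so the frame is switched locally and never leaves the server.
            return f"local delivery to {self.vnic_ports[frame.dst_mac]}"
        # Blue path: unknown/external destination, so hand the frame to the
        # physical NIC and let the external network deal with it.
        return f"sent out via uplink {self.uplink_name}"


if __name__ == "__main__":
    vswitch = VirtualSwitch()
    vswitch.attach_vm("00:11:22:33:44:01", vnic="VM1-vnic0")
    vswitch.attach_vm("00:11:22:33:44:02", vnic="VM2-vnic0")

    # VM1 -> VM2: stays inside the server (gray line in the picture)
    print(vswitch.forward(Frame("00:11:22:33:44:01", "00:11:22:33:44:02")))
    # VM1 -> some external host: goes out of the physical NIC (blue line)
    print(vswitch.forward(Frame("00:11:22:33:44:01", "aa:bb:cc:dd:ee:ff")))
```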
I already discussed in my previous post what kinds of problems these VEBs can cause.
Hardware-based VEBs – Single Root I/O Virtualization (SR-IOV) enabled NICs
The PCI Special Interest Group (PCI-SIG) proposed a technique called SR-IOV (Single Root I/O Virtualization), which moves VEB functionality into an intelligent NIC instead of the vSwitch. Moving VEB functionality into hardware reduces the performance issues associated with vSwitches. SR-IOV essentially carves up an intelligent NIC into multiple virtual NICs, one for each VM. But how does it do that? It does this by providing independent memory space, interrupts and DMA streams for each VM.
SR-IOV-enabled NICs let the virtual NICs bypass the hypervisor's vSwitch by exposing the virtual NIC functions directly to the guest OS. This significantly reduces the latency between a VM and the external port. The hypervisor continues to allocate resources and handle exception conditions, but it doesn't need to perform routine data processing for traffic between the VMs and the NIC.
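As a rough mental model only (SR-IOV itself lives in the NIC hardware, PCIe configuration space and device drivers, not in Python), the sketch below illustrates the idea: a physical function is carved into virtual functions, each with its own queues, interrupt vector and DMA region, and each handed directly to a VM so that the hypervisor stays out of the per-packet data path. All names here (PhysicalFunction, VirtualFunction, assign_to_vm) are hypothetical.

```python
# Conceptual sketch only: SR-IOV is implemented by the NIC and its drivers,
# not in Python. The names and fields below are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class VirtualFunction:
    vf_id: int
    # Each VF gets its own, independent resources so VMs do not contend:
    interrupt_vector: int = 0
    dma_region: str = ""
    rx_queue: List[bytes] = field(default_factory=list)
    tx_queue: List[bytes] = field(default_factory=list)
    owner_vm: Optional[str] = None   # guest the VF is passed through to


class PhysicalFunction:
    """The 'real' SR-IOV capable NIC; it exposes several lightweight VFs."""

    def __init__(self, name: str, num_vfs: int):
        self.name = name
        self.vfs = [
            VirtualFunction(
                vf_id=i,
                interrupt_vector=100 + i,       # dedicated interrupt per VF
                dma_region=f"{name}-dma-{i}",   # dedicated DMA space per VF
            )
            for i in range(num_vfs)
        ]

    def assign_to_vm(self, vf_id: int, vm: str) -> VirtualFunction:
        """The hypervisor does this setup once; afterwards the VM talks to
        its VF directly and routine traffic bypasses the vSwitch."""
        vf = self.vfs[vf_id]
        vf.owner_vm = vm
        return vf


if __name__ == "__main__":
    pf = PhysicalFunction("pNIC0", num_vfs=4)
    pf.assign_to_vm(0, "VM1")
    pf.assign_to_vm(1, "VM2")
    for vf in pf.vfs:
        print(vf.vf_id, vf.owner_vm, vf.interrupt_vector, vf.dma_region)
```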
In a VEB implemented as an SR-IOV NIC, traffic flows the same way as with a vSwitch: it can be switched locally inside the VEB (gray line in the picture) or go directly to the external network (blue line in the picture).
Intel mainly drove the SR-IOV effort, and it seemed like a promising solution when Intel proposed it several years ago, but it has failed to gain market momentum due to poor interoperability between NICs and scalability concerns as the number of VMs per server grows. Aside from the lackluster industry adoption, the problem is that each embedded bridge is yet another device to manage (no different from a software switch), and that management function is not integrated with the overall network management system. Due to implementation differences (that is, extended functions not part of the standard), different NICs may have different bridging capabilities, and these often don't interoperate.
OK... enough problems. What is the solution?
How about ditching this virtual switch that is causing all the problems? But the vSwitch/VEB has its own advantages as well (of course, without those advantages, the vSwitch could never have come about, right?). Let me recap what the vSwitch is useful for:
· Good for VM-to-VM switching within the same server
· Can connect to the external networking environment
· Good for deployments where no external switch is needed (how come? For example, you can run a local network between a web server and a firewall application running on separate VMs within the same physical server)
Sounds reasonable? So we might still need the VEB, yet we also need to be able to solve the issues it causes. And even if we wanted to remove VEB functionality completely and move bridging entirely to the edge device, what about the devices already in the field that are using VEBs? That is another case where we want to keep the existing server advancements intact and still be able to solve these issues.
This is where two new IEEE standards projects come in, with work proceeding on two parallel and largely complementary paths. Both solutions involve edge devices, where every good network engineer believes switching belongs. Both are amendments to the base IEEE 802.1Q VLAN tagging standard.
Edge Virtual Bridging (EVB) – IEEE 802.1Qbg:
This involves VEPA (Virtual Ethernet Port Aggregator) and S-Channel variants.
Port Extension Technology – IEEE 802.1BR and IEEE 802.1Qbh:
This involves using Cisco's VN-Tag or the 802.1BR E-Tag.
I will discuss each of these in my next posts....
[To Be continued - Next post contains EVB, VEPA, S-Channel]