As I read the literature on data center networks, with its enormous increase in data loads and server virtualization, I see the market trending toward data center network architectures that are flat in nature. I often hear the term “Fabric” used to refer to such data center networks. In this post, I will try to express my understanding of, and opinions on, this transformation of data center networks from three-tier to flat.
Three-tier Network Architecture - Current Data center network
The architecture dominant in current data centers is the three-tier network architecture; most data center networks today are built on it. The three tiers are: access switches, either Top-of-Rack (ToR) or modular End-of-Row (EoR) switches, which connect to servers and IP-based storage; aggregation switches, to which the access switches connect via Ethernet; and a set of core switches or routers that forward traffic flows from the servers to the intranet and Internet, and between the aggregation switches. Typically this can be depicted as follows:
You can see some blocked links in the picture; these links are blocked by the Spanning Tree Protocol (STP) running in the network.
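To make the blocking concrete, here is a small Python sketch. It is illustrative only, not the real STP state machine: it builds a spanning tree over a redundant two-tier topology by BFS from a chosen “root bridge” and lists which links end up blocked. The switch names are made up.

```python
from collections import deque

# Links in a small redundant topology: two aggregation switches,
# two ToR switches, each ToR dual-homed to both aggregation switches.
links = [
    ("agg1", "agg2"),
    ("agg1", "tor1"), ("agg1", "tor2"),
    ("agg2", "tor1"), ("agg2", "tor2"),
]

def spanning_tree(links, root):
    """BFS from the root: tree edges stay forwarding, every
    other edge is blocked, so the topology is loop-free."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                tree.add(frozenset((node, nbr)))
                queue.append(nbr)
    return tree

tree = spanning_tree(links, root="agg1")   # agg1 plays "root bridge"
blocked = [l for l in links if frozenset(l) not in tree]
print("forwarding:", sorted(tuple(sorted(e)) for e in tree))
print("blocked:   ", blocked)  # the redundant uplinks through agg2
```

Note that both of tor1’s and tor2’s links toward agg2 come out blocked: exactly the kind of idle redundant capacity the rest of this post is about.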
For detailed connections, with a focus on how the access (ToR/EoR) switches attach to the servers, you can always refer to my previous post, which shows a nice picture of the interconnections.
In this three-tier architecture, it is common for VLANs to be constructed within the access and aggregation switches, while Layer 3 capabilities in the aggregation or core switches route between them. In the high-end data center market, where the number of servers runs from thousands to tens of thousands, where east-west bandwidth (server-to-server traffic) is significant, and where applications need a single Layer 2 domain, the existing Ethernet and Layer 2 capabilities of this tiered architecture do not meet emerging demands.
When I say Layer 2 capabilities, I mainly mean the Spanning Tree Protocol, which keeps the network connected without any loops.
STP..STP…STP.. I thought it was good…what happened?
Radia Perlman created the spanning tree algorithm, which became part of the Spanning Tree Protocol (STP), to solve problems such as forwarding loops. Ms. Perlman certainly doesn’t need me to come to the defense of Spanning Tree, but I will: I like Spanning Tree, because it works. I would say that in at least 40% of the networks I see, Spanning Tree has never been changed from its default settings, yet it keeps the network up while also providing some redundancy.
However, while STP solves significant problems, it also forces a network design that isn’t optimized for many of today’s data center requirements. For instance, STP paths are determined in a north-south tree, which forces traffic to flow from a top-of-rack switch out to a distribution switch and then back in again to another top-of-rack switch. By contrast, an east-west path directly between the two top-of-rack switches would be more efficient, but this type of path isn’t allowed under STP. The original 802.1D Spanning Tree, with its default timers, can take up to 50 seconds to fail over to a redundant link. RSTP (802.1w) is much faster, but can still take up to 6 seconds to converge. It’s an improvement, but six seconds can still be an eternity in the data center.
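That worst-case figure falls straight out of the default 802.1D timers; a quick back-of-the-envelope check:

```python
# Worst-case 802.1D failover from the protocol's default timers:
# the switch must age out the lost root's BPDU, then walk the port
# through the Listening and Learning states before forwarding.
max_age = 20        # seconds to age out stale BPDU information
forward_delay = 15  # seconds spent in each of Listening and Learning

worst_case = max_age + 2 * forward_delay  # 20 + 15 + 15
print(f"worst-case failover: {worst_case} seconds")  # 50 seconds
```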
So, what needs fixing???
- poor path optimization
- slow failover
- limited or expensive reachability
- high latency
How does a flat network help in DC networks???
Enter the flat network. This approach, also called a fabric, allows more paths through the network and is better suited to the requirements of the data center, including the need to support virtualized networking, VM mobility, and high-priority storage traffic on the LAN such as iSCSI and FCoE. A flat network aims to minimize delay and maximize available bandwidth while providing the level of reachability demanded in a virtualized world.
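One reason a fabric can actually use all those extra paths is equal-cost multipath (ECMP) forwarding: each flow is hashed onto one of several equal-cost next hops, so packets of one flow stay in order while different flows spread across the fabric. A rough Python sketch, with made-up addresses and a CRC32 stand-in for a real switch’s hash function:

```python
# Illustrative ECMP path selection across four spine switches.
# The spine names, addresses, and hash choice are all hypothetical.
import zlib

SPINES = ["spine1", "spine2", "spine3", "spine4"]

def pick_path(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the flow's 5-tuple onto one of the equal-cost spines."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return SPINES[zlib.crc32(key) % len(SPINES)]

# The same flow always hashes to the same spine (no reordering)...
assert pick_path("10.0.1.5", "10.0.2.9", "tcp", 49152, 80) == \
       pick_path("10.0.1.5", "10.0.2.9", "tcp", 49152, 80)

# ...while many flows spread over the available spines.
flows = {pick_path("10.0.1.5", "10.0.2.9", "tcp", p, 80)
         for p in range(49152, 49252)}
print("spines used by 100 flows:", sorted(flows))
```

Per-flow (rather than per-packet) hashing is the usual design choice here, since TCP suffers badly when one flow’s packets arrive out of order over different paths.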
Don’t think the flat network is Utopia…
It is not all ready-made or ready to deploy. A flat network also requires some trade-offs, including the need to re-architect your data center LAN and adopt either new standards such as TRILL (Transparent Interconnection of Lots of Links) and SPB (Shortest Path Bridging), or proprietary, vendor-specific approaches. It is debatable how many people in the industry are willing to undertake this re-architecture. I came across a survey on this question:
Commercial Sample Leaf & Spine Architecture
A commercial leaf-and-spine architecture built using Dell Force10 switches can be shown as follows.
You can see that each S4810 switch has connections to four Z9000 switches; that is, each switch in the leaf layer has multiple paths (four) into the spine layer.
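Assuming the commonly quoted S4810 port mix (48 x 10 GbE server-facing ports plus 4 x 40 GbE uplinks; check the data sheet for your exact configuration), the leaf-layer oversubscription in this design works out as follows:

```python
# Rough oversubscription check for one S4810 leaf switch.
# Port counts are the commonly quoted figures, not verified hardware specs.
server_ports, server_speed = 48, 10   # Gb/s toward the racks
uplinks, uplink_speed = 4, 40         # Gb/s, one uplink per Z9000 spine

downlink_bw = server_ports * server_speed   # 480 Gb/s
uplink_bw = uplinks * uplink_speed          # 160 Gb/s
print(f"oversubscription {downlink_bw / uplink_bw:.0f}:1")  # 3:1
```

A 3:1 ratio is a common compromise; a fully non-blocking leaf would need as much uplink bandwidth as server-facing bandwidth.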
Conclusion….
[My next post contains my take on virtual networks with emphasis on L2 Multipathing, L2 extension and SDN]