
Eighteen months of planning, identifying candidate solutions, writing business and technology requirements, building test plans, and finally testing hardware have transpired. We have successfully proven that all of the fabrics tested have the capabilities we need.

All tasks can be accomplished on all platforms, and all of the features, functions, and protocols required of a modern data center work appropriately for our use cases.

In every architecture bake-off I have previously participated in, there was usually a clear winner: a solitary product that clearly solved a majority of the business or technology problems, while others did not. For the first time we had four products, and all four solutions by any technical measure go above and beyond our requirements. To choose one, we needed to actually determine how to qualify and quantify why we preferred it. When asked what the differences were, it really came down to the philosophical approach each manufacturer took to solving the higher-level problems we face in the data center of tomorrow. I use tomorrow, not today, for good reason.
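
As an illustration of what qualifying and quantifying a preference can look like, here is a minimal sketch of a weighted scoring matrix, assuming hypothetical category names, weights, vendor labels, and scores; none of these values reflect the actual criteria or results of this evaluation.

```python
# Hypothetical weighted scoring matrix for comparing candidate fabrics.
# Category names, weights, vendor labels, and scores are placeholders.

WEIGHTS = {
    "forwarding": 0.30,   # forwarding behavior under load
    "filtering": 0.20,    # policy/ACL scale and flexibility
    "buffering": 0.20,    # buffer architecture and depth
    "operations": 0.30,   # telemetry, automation, day-2 workflow
}

# Raw 1-5 scores per candidate (illustrative values only).
SCORES = {
    "vendor_a": {"forwarding": 5, "filtering": 4, "buffering": 4, "operations": 3},
    "vendor_b": {"forwarding": 4, "filtering": 5, "buffering": 3, "operations": 5},
}


def weighted_total(scores: dict) -> float:
    """Collapse per-category scores into a single weighted number."""
    return sum(WEIGHTS[category] * value for category, value in scores.items())


if __name__ == "__main__":
    for vendor, categories in SCORES.items():
        print(f"{vendor}: {weighted_total(categories):.2f}")
```

The point of a matrix like this is not the arithmetic; it is that it forces the philosophical differences between otherwise equivalent products into explicit, comparable categories.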

Fabric Hardware

All platforms tested utilized 100G backbone connectivity between leaves and spines. Each manufacturer provided high-capacity leaves for interconnect to storage and the UCS Fabric Interconnects, as well as low-density 10/25 Gbps leaves and a 1G option for telemetry. Since the intent of a data center switch is to provide a transitive, low-latency buffer between hosts, we evaluated the hardware to take measure of the approach each manufacturer took to forwarding, filtering, and buffering.
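
To make the test topology concrete, the sketch below models the leaf and spine roles and link speeds described above as plain data; the device names, counts, and groupings are assumptions for illustration, not the tested bill of materials.

```python
from dataclasses import dataclass

# Minimal model of the evaluation fabric described above: 100G links toward
# the spines, high-capacity leaves for storage and the UCS Fabric
# Interconnects, low-density 10/25G leaves for hosts, and a 1G option for
# telemetry. Device names and counts are illustrative assumptions.


@dataclass
class Switch:
    name: str
    role: str          # "spine" or "leaf"
    uplink_gbps: int   # speed of links toward the spine layer
    access_gbps: int   # speed of host-, storage-, or telemetry-facing ports


FABRIC = [
    Switch("spine-1", "spine", uplink_gbps=100, access_gbps=100),
    Switch("spine-2", "spine", uplink_gbps=100, access_gbps=100),
    Switch("leaf-storage-1", "leaf", uplink_gbps=100, access_gbps=100),  # storage / UCS FI
    Switch("leaf-compute-1", "leaf", uplink_gbps=100, access_gbps=25),   # 10/25 Gbps hosts
    Switch("leaf-telemetry-1", "leaf", uplink_gbps=100, access_gbps=1),  # 1G telemetry
]

if __name__ == "__main__":
    for switch in FABRIC:
        print(f"{switch.name:16} {switch.role:5} "
              f"uplinks={switch.uplink_gbps}G access={switch.access_gbps}G")
```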
