
How to Deploy High Density MTP/MPO Cables in 10G/40G/100G Migration?

Just as large enterprises settle into 10G networking, bandwidth-intensive applications and growing data demands are forcing companies to adopt 40G or even 100G network speeds. To handle the upgrade from 10G to 40G/100G efficiently and effectively, high-density MTP/MPO cables are a good solution. In this post, I'd like to introduce the deployment of MTP/MPO cables (MTP harness cable, MTP trunk cable, and MTP conversion harness) in 10G/40G/100G migration.

10G to 40G Migration: 8-Fiber MTP Harness Cable

An 8-fiber MTP-LC harness cable is a commonly used solution to directly connect a 40G device to 10G devices. As the following image shows, the MTP end of the harness cable plugs into a QSFP+ port carrying 40GbE data rates, then breaks out into four LC duplex legs that plug into four 10G SFP+ transceivers.
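This fanout can be sketched in a few lines of Python. The fiber positions below assume the common 40GBase-SR4 convention (Tx lanes on MTP positions 1-4, Rx lanes on positions 12-9); actual pinouts vary by vendor, so treat this as an illustrative assumption, not a wiring spec.

```python
# Sketch of an 8-fiber MTP-LC harness fanning a 40GBase-SR4 port out
# into four 10G duplex legs. Fiber positions assume the common SR4
# convention (Tx on MTP positions 1-4, Rx on 12-9); check the vendor
# datasheet before relying on a specific pinout.

def sr4_breakout():
    """Return {leg: (tx_fiber, rx_fiber)} for the four 10G LC duplex legs."""
    return {leg: (leg, 13 - leg) for leg in range(1, 5)}

for leg, (tx, rx) in sr4_breakout().items():
    print(f"10G leg {leg}: MTP Tx fiber {tx} / Rx fiber {rx}")
```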


40G to 40G Connection
Solution 1: 12-Fiber MTP Trunk Cable

For a 40G-to-40G direct connection, a 12-fiber MTP trunk cable is the first choice. In the following scenario, 12-fiber MTP trunk cables connect the 40G transceivers (four fibers transmit, four fibers receive, leaving four fibers unused), mating with the QSFP+ ports on the two 40G switches.


Solution 2: 2×3 MTP Conversion Module

In this scenario, a 2×3 MTP conversion module is used. For every two 12-fiber MTP connectors in the backbone cable, you can create three 8-fiber links. There is an additional cost for the extra MTP connectivity, but it is offset by the savings from 100 percent fiber utilization in the structured cabling. The 2×3 conversion module must be used in pairs, one at each end of the link. As the following image shows, the eight live fibers from each of the three QSFP+ transceivers are transmitted through the trunks using the full 24 fibers. The second 2×3 module unpacks these fibers to connect to the three QSFP+ transceivers on the other end.
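The utilization argument here is simple arithmetic; a quick sketch (using only the fiber counts stated above) makes the comparison explicit:

```python
# Back-of-the-envelope fiber utilization: a plain 12-fiber trunk per
# 40G link versus the 2x3 conversion approach described above.

def utilization(links, fibers_used_per_link, trunk_fibers):
    """Fraction of trunk fibers that actually carry traffic."""
    return links * fibers_used_per_link / trunk_fibers

# One 40G link over one 12-fiber trunk: only 8 of 12 fibers are live.
direct = utilization(links=1, fibers_used_per_link=8, trunk_fibers=12)

# 2x3 conversion: three 8-fiber links over two 12-fiber trunks (24 fibers).
converted = utilization(links=3, fibers_used_per_link=8, trunk_fibers=24)

print(f"12-fiber direct: {direct:.0%}")    # ~67%
print(f"2x3 conversion:  {converted:.0%}") # 100%
```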


Solution 3: 2×3 MTP Conversion Harness

For those needing a direct connection with 100 percent fiber trunk utilization, the 2×3 MTP conversion harness (two 12-fiber MTP connectors on one end fanning out to three 8-fiber MTP connectors on the other) is an alternative fanout solution with the same functionality as the 2×3 conversion module. Its connectivity is identical to the 2×3 module, and the two are interchangeable, but either must be used in pairs, one (cable or module) at each end of the link.


10G to 100G Migration: 20-Fiber MTP Harness Cable

CFP is a very popular implementation when deploying a 100G network. To achieve 10G-to-100G migration in this scenario, a 20-fiber MTP-LC harness cable is used (ten fibers transmit and ten fibers receive, breaking out into ten duplex LC legs). Simply connect this cable to a CFP transceiver, and the customer can access the ten SFP+ transceivers individually.

10G to 100G migration with mtp breakout cable

100G to 100G Connection: MTP Trunk Cable

For directly connecting switches with 40G QSFP+ ports, a 12-fiber MTP trunk cable can be used, while for connecting 100GBase-SR10 CFP-equipped devices, a 24-fiber MTP trunk cable is deployed.



In the text above, we have introduced several 10G/40G/100G scenarios that use MTP/MPO cables for data transmission. MTP trunk cable is a common solution for direct device connection, MTP harness cable eases upgrading to higher-speed networks, and MTP conversion harness achieves 100% fiber utilization, saving costs. All the MTP/MPO cables mentioned can be purchased at FS.COM.

Understanding Array Polarity With Parallel Link

The use of pre-terminated fiber assemblies and cassettes is growing, and the deployment of systems with speeds up to and beyond 100G is on the horizon for many users. As a result, maintaining polarity in parallel fiber-optic links is becoming increasingly important. In previous posts, we introduced polarity in point-to-point duplex links, which is achieved through what is known as an A-to-B patch cord. In this post, we are going to talk about array polarity with parallel links.

Array Polarity With Parallel Link Overview

Array polarity with parallel links has corresponding Methods A, B, and C to establish polarity for parallel signals using an MPO transceiver interface with one row of fibers. For example, 40 Gigabit Ethernet over multimode fiber uses 4 transmit and 4 receive fibers in a 12-fiber array, i.e., 4 lanes at 10Gbps. To understand these polarity methods more concretely, we can compare them with the polarity methods for duplex signals. From the following table, we can see that the breakout MTP cassette and the duplex fiber patch cords of a duplex link are replaced in a parallel link by 12-fiber array patch cords that plug directly into the MTP adapter at the patch panel and into the equipment interface.

polarity of multiple duplex signals vs. parallel signals

Three Methods for Array Polarity With Parallel Link
Method A

Method A, shown below and recognized in TIA-568-C.0, uses a Type-A backbone connected to a patch panel on each end. On one end of the optical link, a Type-A array patch cord connects the patch panel, while on the other end, a Type-B array patch cord connects the patch ports to their respective parallel transceiver ports.


Method B

Method B, also recognized in TIA-568-C.0, uses Type B throughout: Type-B array cable, Type-B adapters, and Type-B array patch cords make up the whole optical link. More detailed information can be seen in the following image.


Method C

The proposed Method C, shown in the image below, is similar to Method A, but it uses a Type-C trunk cable instead of Type A; a Type-C cross-over patch cord is required at one end, and the other end uses a Type-B patch cord.
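To make the three cable types concrete, here is a small illustrative helper that maps a fiber position on one end of a 12-fiber MPO array cable to its position on the far end, following the usual TIA definitions: Type A is straight-through, Type B reverses the row, and Type C flips each adjacent pair of fibers. This is a sketch of the position mappings only, not of any specific product.

```python
# Illustrative end-to-end fiber-position mapping for 12-fiber MPO
# array cables. Type A: straight-through; Type B: row reversed;
# Type C: adjacent pairs flipped (1<->2, 3<->4, ...).

def far_end_position(cable_type, pos):
    if cable_type == "A":
        return pos
    if cable_type == "B":
        return 13 - pos
    if cable_type == "C":
        return pos + 1 if pos % 2 == 1 else pos - 1
    raise ValueError(f"unknown cable type: {cable_type}")

for t in "ABC":
    print(t, [far_end_position(t, p) for p in range(1, 13)])
```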


Note: An important point to remember is that MPO plugs use alignment pins. In an MPO connection, one plug is pinned and the other is unpinned. Since MPO transceivers typically have pins, this convention leads to the following implementation on initial build-out: 1) Patch cords from transceiver to patch panel are typically unpinned (female) on both ends. 2) Trunk cables are typically pinned (male) on both ends.

As with duplex links, there are three methods for parallel links. However, maintaining array polarity with parallel links is not as simple as it seems, and this article can only provide some basic information. In future posts, we will talk more about array polarity systems.

Efficiently and Conveniently Integrate 10G, 40G, and 100G Equipment With MTP Breakout Patch Panel

Not all that long ago, Ethernet networks that supported 10G speeds were considered amazingly fast, and now 40G is the norm in most data centers. As 40G Ethernet becomes standard in data centers, the challenge of connecting 40G equipment with existing 10G equipment moves front and center. Adding further complexity, it's clear that organizations of all sizes also need to be prepared to integrate speeds of 100/120G and beyond. MTP breakout cables filled a pressing need when no other options were available, but using unstructured cabling makes installs, upgrades, changes, troubleshooting, and repairs extremely inefficient. The emergence of the MTP breakout patch panel allows you to seamlessly and conveniently integrate equipment with different network speeds to meet your connectivity needs today.

The Nuts and Bolts of MTP Breakout Patch Panel

An MTP breakout patch panel integrates a range of modular, removable fiber cassettes in a rack-mount patch panel, combining the functionality of breakout cables, the efficiency of structured cabling, and the convenience of a pre-assembled kit. The breakout patch panel splits 40G QSFP+ and 100G CFP switch ports into 10G duplex LC ports, which connect to devices' SFP+ ports with high-quality off-the-shelf fiber patch cables.

working principle of MTP breakout patch panel

Sparkles of MTP Breakout Patch Panel

Convenience and Efficiency: Pre-assembled panels with modular fiber breakout cassettes, built-in MTP cables, and duplex LC ports make quicker deployment possible. In addition, structured cabling makes installs, upgrades, changes, troubleshooting, and repairs quicker, easier, and more cost-effective than using MTP breakout cables.

Space Saving: By managing varying port densities and speeds in a single high-density patch panel, you save valuable rack space, helping to lower data center costs. A 1U 40G MTP breakout patch panel can provide 96 high-density duplex LC ports for 10G connections, while a 2U 100G MTP breakout patch panel can support up to 160 duplex LC ports.

Reduced Congestion: Reduced cable slack means less clutter, less confusion and an easily organized, better-labeled cabling infrastructure. You can also manage cables in any direction—horizontal or vertical, front or back.

Two Main MTP Breakout Patch Panel Solutions
1U 40G Breakout Patch Panel Supporting Base-8 Connectivity

Base-8 connectivity is considered the most suitable network link: it supports popular 40G switches today and 100/400G networks tomorrow. The high-density 1U 40G MTP breakout patch panel shown in the following image is designed to connect 40G QSFP+ ports with 8-fiber MTP cables mapped to the back of the panel, then break out as 48×10G on the front with duplex LC fiber cables.

1U 40G Breakout Patch Panel Supporting Base-8 Connectivity

2U 100G MTP Breakout Patch Panel Supporting Ultra High-Density Cabling

The 100G MTP breakout patch panel shown below is designed in a standard 2U rack and has the same working principle as the 40G MTP patch panel, but instead of connecting 40GBase-SR4 ports, it connects 100GBase-SR10 ports with 24-fiber MTP cables (10 fibers for Tx, 10 for Rx, leaving the remaining 4 fibers unused) to the rear of the panel, then breaks out as 80×10G on the front with LC fiber cables.
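The fiber accounting behind such a panel can be sketched as follows. The per-panel port count is an assumption made for illustration (eight SR10 ports yielding the 80 legs mentioned above), not a product specification.

```python
# Fiber accounting for a 100GBase-SR10 breakout over 24-fiber MTP.
MTP_FIBERS = 24
TX_LANES = RX_LANES = 10            # SR10: ten 10G lanes each direction

unused = MTP_FIBERS - (TX_LANES + RX_LANES)   # dark fibers per 100G port
legs_per_port = TX_LANES                      # 10G duplex LC legs per port

PORTS_PER_PANEL = 8                 # assumed 2U panel capacity
print(f"{unused} unused fibers per 100G port")
print(f"{PORTS_PER_PANEL * legs_per_port} x 10G LC legs per panel")
```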

2U 100G MTP Breakout Patch Panel

With more and more high-speed equipment deployed in data centers, integrating these different-speed networks poses an issue for IT managers. The MTP breakout patch panel efficiently and conveniently solves this problem. It can support your ability to plan, deploy, and upgrade your network to meet the growing demand for additional and higher speeds.

Things Should Be Noticed Before Choosing 24-Fiber MPO Cable

In the process of migrating to higher-bandwidth 40G and 100G networks, the MTP cabling system, which provides high density and high performance, plays an important role. Whether to use 12-fiber or 24-fiber MPO cable has been a hot topic in higher-speed network migration. In my previous blog, Choosing 24-Fiber MPO/MTP Cabling for 40/100G Migration, we indicated that 24-fiber MPO cable is more suitable for 40G and 100G networks. Besides, with active equipment planned to use a single 24-fiber MPO interface for 100G and the channel currently requiring 20 fibers, many IT managers are also considering 24-fiber MPO solutions. However, before choosing 24-fiber MPO cable, there are some facts that should be noted.

The Higher the Fiber Count, the Higher the Loss

Optical loss budget is a big concern among data center managers. Due to limitations in the protocol, the standards now require a total connector loss budget of 1.0 dB for 40G and 100G, but a 24-fiber MPO connector typically has a loss of 0.5 dB, much higher than the 0.2 dB typical of a 12-fiber MPO connector. This is mainly because the higher the fiber count, the higher the loss. The higher loss of the 24-fiber MPO limits data center managers to just two mated pairs in a channel.
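A quick sketch of that budget arithmetic, using the loss figures quoted above (and integer math in tenths of a dB to sidestep floating-point division surprises):

```python
# How many mated MPO pairs fit in a given connector loss budget?
def max_mated_pairs(budget_db, loss_per_pair_db):
    # work in tenths of a dB so the floor division is exact
    return round(budget_db * 10) // round(loss_per_pair_db * 10)

print(max_mated_pairs(1.0, 0.5))    # typical 24-fiber MPO: 2 pairs
print(max_mated_pairs(1.0, 0.2))    # typical 12-fiber MPO: 5 pairs
```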

Note: Proper modern polishing techniques can bring a 24-fiber MPO connector down to the same low-loss level as a 12-fiber MPO connector. For example, the 24-fiber MTP trunk cable at FS.COM has only 0.35 dB insertion loss.

The Higher the Fiber Count, the More Difficult to Control End-Face Geometry

In a quality fiber connector, the fibers protrude slightly beyond the ferrule. When two fibers are mated with the right pressure, they deform slightly to fill in any gaps and provide solid mating. Any variance in the pressure can impact the insertion loss and return loss on a fiber-to-fiber basis. To achieve consistent pressure, it is important to have a very flat ferrule after polishing, with all the fibers protruding equally. With higher-count arrays, like 24-fiber MPOs, there are more fibers to control, which significantly increases the odds of height variance. For example, in the following 72-fiber array, looking at the graphic of the middle two rows of fibers, we can see the variance in the height profile. The height variance becomes even more pronounced across more rows of fibers. Besides, it is more difficult to achieve flat ferrule polishing over a large array area.

The 24-fiber MPO’s End-Face Geometry is More Difficult to Control

Although polishing techniques have improved significantly, there are still limits to achieving a flat end face and equal pressure over the array.

Standards and Testing Remain an Issue to 24-fiber MPO Cabling

The 100GBase-SR4 standard is now a reality, and most users run 100G over 8 fibers rather than 20, which renders the 24-fiber MPO a dated interface for 100G Ethernet. In addition, MPO cabling testing is far more complicated than duplex cabling testing. You need professional training and a solid understanding of the tools to conduct multifiber testing efficiently. In other words, if there is any issue with the multifiber cabling, it is not easy to troubleshoot.

It’s Still Your Choice

With the significant demand for higher-speed 40GbE and 100GbE, MPO cabling has become more popular than ever. We have indicated that 24-fiber MPO cable offers more advantages than 12-fiber MPO cable; however, before choosing it, the factors discussed above should be taken into consideration.

How to Deploy 10G, 40G, 100G in the Same Network

In 2010, 10G SFP+ became the primary equipment interface in data center applications. Fast-forward to 2017: as demand for greater bandwidth shows no signs of slowing, 40G and 100G transceiver shipments saw a whopping increase. While shipments of 40G and 100G modules are on the rise, the large majority of data center networks do not replace 10G equipment wholesale with 40G or 100G equipment. Instead, many deploy the necessary equipment to let 10G, 40G, and 100G coexist in the same network. Read this post, and you will get detailed solutions.

QSFP+ 40G to 10G

In the following scenario, an upgraded 40G switch is networked to existing 10G servers with a 1×24-fiber to 3×8-fiber MTP conversion cable. At the switch, a cassette combines three 40G ports (QSFP, 8 fibers each) onto the 24-fiber trunk. In the server cabinet, each 40G port is broken out into 10G LC connections to support server connectivity.

QSFP+ 40G to 10G

Note: In this architecture, if you have existing 12-fiber MTP trunks, you can use a cassette with two 12-fiber MTP inputs that break out into 3×8-fiber MTP strands, instead of deploying a new 24-fiber MTP trunk cable. However, if you have to move to denser and more complicated applications, the 24-fiber MTP solution makes for easier migration.

CFP2 100G Port (10×10)

Like the previous example, figure 2 below shows a similar scenario with existing 10G servers, but it uses 100GBase-SR10 ports on the switch, which require a 24-fiber connector to drive the 10×10 transceiver port. Instead of breaking into 8-fiber connections, it uses a 24-fiber MTP patch cord from the switch to the patch panel at the top of the rack. A 24-fiber MTP trunk connects the switch and server cabinets. The MTP cassette at the top of the server cabinet converts the 100G port into ten individual 10G ports with LC connectors.

CFP2 100G port (10x10)

Note: As in figure 1, if you already have two 12-fiber MTP trunks in this scenario, you can use a 12-fiber MTP adapter panel, and then a 2×12-fiber to 1×24-fiber MTP harness cable at the switch to build the same channel.

New Installation for 40G/100G Deployment

Figure 3 shows an example of a completely new installation, using 40G/100G right out of the box without any 10G switches in the channel. This method has 40G or 100G ports on the core switches and 40G uplinks at the ToR switches. The patch panels at the top of each rack use MTP bulkheads, with all 8-fiber cords from one QSFP port to the next.

40G100G Deployment - New Installation

In this architecture, we can either use 24-fiber trunks that break into 40G ports, or create trunks with 8-fiber strands on every leg, with 8 fibers per 40G or 100G port, as shown in the diagram above. However, note that with 8-fiber legs, density becomes a challenge. In addition, 12-fiber MTP trunks are avoided in this scenario, since integrating existing 12-fiber trunks with 8-fiber connectivity on the patch cord leaves fibers unused.

Deploying 10G, 40G, and 100G in the same network can effectively avoid costly upgrades that require ripping out cabling and starting over with a new network architecture. This post has provided three solutions. All the devices in these three scenarios can be purchased at FS.COM. If you are interested, kindly visit FS.COM.

FAQs About OM5 Fiber Optic Cable

Data centers everywhere are moving quickly to manage ever-increasing bandwidth demands, and the emergence of cloud computing has acted as a catalyst, driving even faster adoption of new network technology and higher bandwidth. Speeds as high as 40G and 100G Ethernet have already become mainstream in data centers, and the industry is working collaboratively on next-generation Ethernet, such as 200G and 400G. In this high-speed migration, multimode fiber (MMF) plays an important role. As everyone knows, OM1/OM2/OM3/OM4 are commonly used multimode fibers in the networking field, and OM3 and OM4 in particular have proven to be future-proof MMF. Now a new type of MMF, OM5, specified in ANSI/TIA-492AAAE and published in June 2016, has been introduced. OM5 is being presented as a potential new option for data centers that require greater link distance and higher speeds; however, is it really a good solution for data centers? This post will address this question through some FAQs about OM5.

Q: Does OM5 Offer a Longer Transmission Distance than OM4?

A: Actually, for all current and future multimode IEEE applications, including 40GBase-SR4, 100GBase-SR10, 200GBase-SR4, and 400GBase-SR16, the maximum allowable reach is the same for OM5 as for OM4. Recent application testing with 40G-SWDM4 transceivers shows that 40G-SWDM4 can reach 400 meters over OM4 cable, while over OM5 cable the module can achieve link lengths up to 500 meters. Besides, if a data center is using non-IEEE-compliant 100G-SWDM4 transceivers, it has been shown that OM5 supports a 150-meter reach, only 50 meters more than OM4. In addition, in most data centers, when the transmission distance exceeds 100 meters, IT managers will choose single-mode fiber.

transmission distance of OM4 and OM5 in 100G

Q: Does OM5 Cost Less?

A: As a matter of fact, OM5 cabling costs about 50% more than OM4. Besides, with the cost of single-mode transceivers declining considerably over the past 12-18 months due to silicon photonics technologies and large hyperscale data centers buying in volume, more and more users will be prone to choose single-mode transceiver modules. For example, a 100GBase-PSM4 module using single-mode MTP trunk cable, which can support a 500-meter reach, is only $750.

Q: Is OM5 Really Required for Higher Speeds?

A: All of the IEEE standards for next-generation 100/200/400G Ethernet will work with either SMF or MMF, but in most situations these next-generation speeds will require single-mode fiber. IEEE always strives to develop future standards that work with the primary installed base of cabling infrastructure, so customers can easily upgrade to new speeds. Besides, none of the current active IEEE standards addressing next-generation speeds will use SWDM technology.

Q: Will OM5 Create Higher Density from Switch Port?

A: As we all know, it is common in data centers using 40GBase-SR4 to increase port density by breaking out 40G into 10G with an MTP breakout module or MTP breakout cable. This is also a benefit of the new 100GBase-SR4 modules, which use OM4 cabling. However, if data center managers decide to use 100G SWDM4 modules with OM5 cabling, they cannot break out into 25Gb/s channels, which will become a real issue as the 25Gb/s ecosystem fully develops and we begin to see more 25G to the server.


Based on the questions discussed above, it is apparent that OM5 is not suitable for large data centers. As far as I'm concerned, for current high-speed network applications, OM3 and OM4 are still the most recommended multimode fibers.
