
PCIe lanes and you: what they are and how to pick a motherboard with the layout you need

In the intricate world of PC hardware, every component plays a vital role. While CPUs, GPUs, and RAM often grab the headlines, there’s a silent, incredibly fast communication network underneath it all: the Peripheral Component Interconnect Express, or PCIe. These lanes allow for data transfer between your CPU or chipset and the components that you connect to your motherboard. Understanding PCIe lanes and their implementation on your motherboard is crucial for maximizing the expansion capabilities of your PC.

Here, we’ll demystify PCIe lanes, explain why concepts like “bifurcation” are important, and ultimately demonstrate why PCIe lane optimization should be a key factor in your next motherboard purchase.

What are PCIe lanes?

Imagine a multi-lane highway designed for data. Each “lane” is a pathway that allows data to travel between a component (like a graphics card, NVMe SSD, or expansion card) and the CPU or chipset. A PCIe slot, then, is the physical connector that gives a device access to a certain number of these lanes.

PCIe comes in different generations: PCIe 3.0, 4.0, 5.0, and (soon) 6.0, each doubling the bandwidth per lane over its predecessor. So a single PCIe 4.0 lane offers twice the throughput of a PCIe 3.0 lane.

It’s not just the generation that matters, but the lane count. The most common configurations you’ll see are x1, x4, x8, and x16, where the number indicates the lane count. A PCIe x16 slot, for instance, has 16 lanes available for data transfer, vastly increasing available bandwidth.
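You can put numbers to both of these factors with a little arithmetic. The sketch below uses the per-lane transfer rates from the PCIe specifications for Gen 3 through Gen 5, along with their 128b/130b line encoding (Gen 6 moves to PAM4 signaling with FLIT encoding, so it’s left out of this simple model):

```python
# Back-of-the-envelope PCIe throughput calculator (per direction).
# Transfer rates come from the PCIe specs; 128b/130b encoding applies
# to Gen 3 through Gen 5.

GT_PER_SEC = {"3.0": 8, "4.0": 16, "5.0": 32}  # gigatransfers/s per lane

def bandwidth_gbps(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a generation and lane count."""
    encoding = 128 / 130           # 128b/130b encoding overhead
    bits = GT_PER_SEC[gen] * 1e9   # raw bits per second per lane
    return bits * encoding / 8 / 1e9 * lanes

# A single Gen 4 lane carries twice what a Gen 3 lane does...
print(round(bandwidth_gbps("3.0", 1), 2))   # ~0.98 GB/s
print(round(bandwidth_gbps("4.0", 1), 2))   # ~1.97 GB/s
# ...and an x16 slot multiplies that per-lane figure by 16.
print(round(bandwidth_gbps("5.0", 16), 1))  # ~63.0 GB/s
```

Those two multipliers, generation and width, are why a PCIe 5.0 x16 slot moves roughly 64 times the data of a PCIe 3.0 x1 slot.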

Some key components that make use of PCIe lanes:

  • Graphics cards (GPUs): These are the most bandwidth-hungry components and almost always utilize a PCIe x16 connection for maximum performance.
  • NVMe solid state drives (SSDs): High-speed NVMe drives connect via PCIe, typically using x4 lanes for blisteringly fast storage. Some M.2 slots connect directly to the CPU, and others to the chipset, which can make a difference in terms of latency. (More on that in a moment.)
  • Network cards: 10 Gigabit Ethernet or WiFi cards often use x1 or x4 lanes.
  • Capture cards: Depending on their capabilities, these can use x1, x4, or even x8.
  • RAID cards: For professional storage setups, these can use x4, x8, or x16.

How many PCIe lanes does my motherboard offer?

The total number of PCIe lanes offered by your motherboard is actually determined by the CPU manufacturer. With each new generation of CPUs, AMD and Intel specify a total number of PCIe lanes, typically offering multiple tiers of options for entry-level, mainstream, and enthusiast builders.

Of the total PCIe lanes on offer, a certain number are typically reserved for direct connection to the CPU. An AMD X870E motherboard, for instance, reserves 16 PCIe 5.0 lanes to drive an expansion slot at x16, or two expansion slots at x8/x8, and then another four PCIe 5.0 lanes for the primary M.2 slot.

Beyond those reserved lanes, there are usually many more that your system can put to work. They’re often called the “usable” PCIe lanes, and motherboard manufacturers like ASUS have considerable discretion over the utilization of these lanes.

The source of the lanes: CPU vs. chipset

This is where things get interesting. Not all PCIe lanes are created equal, and their source determines their performance characteristics and flexibility.

CPU-direct lanes

These lanes are physically integrated into the CPU itself. They offer the lowest latency and highest bandwidth because they communicate directly with the processor without an intermediary. For mainstream desktop CPUs (like Intel Core Ultra or AMD Ryzen CPUs), you typically get 16-20 CPU-direct PCIe lanes. 16 lanes are almost always dedicated to the primary graphics card slot (PCIe x16). The remaining 4 lanes are usually routed to a primary M.2 NVMe slot.

Since these lanes offer unbeatable speed and responsiveness, they’re ideal for GPUs and primary boot drives. That’s why you’ll always want to install your highest-performance SSD into the M.2 slot that provides a direct connection to the CPU.

Chipset lanes

The motherboard’s chipset (e.g., Intel Z890, AMD X870E) acts as a hub, serving as an intermediary between the CPU and a wide range of components and onboard controllers. The chipset connects to the CPU through its own high-bandwidth PCIe link. Many elements of your PC reach the CPU through the chipset, including secondary M.2 slots, SATA ports, USB controllers, additional PCIe x1 or x4 slots, and integrated network controllers.

You can imagine the CPU as the central headquarters and the chipset as a major regional office. The CPU has its own direct expressways to critical facilities (GPU, primary M.2), and then one main highway to the chipset. The chipset, in turn, has its own network of roads to local businesses (other M.2s, expansion cards, USB).

Your motherboard’s chipset is vital for expanding connectivity beyond what the CPU directly offers. It is worth noting that data traveling through chipset lanes has to make an extra “hop” to reach the CPU, introducing a tiny bit more latency compared to CPU-direct lanes. Most of the time, this difference is imperceptible, but there are use cases where it’s important to keep in mind.
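Latency isn’t the only consideration: every chipset-attached device ultimately funnels through that one uplink to the CPU. A rough illustration follows; the uplink width and generation shown (PCIe 4.0 x4, common on current AMD chipsets) are an assumption for illustration, so check your board’s block diagram for the real figure:

```python
# Rough sketch of chipset uplink oversubscription. Every chipset-attached
# device shares one link to the CPU; individually each device is fine, but
# their combined peak demand can exceed what the uplink can carry at once.

PER_LANE_GBPS = {"3.0": 0.98, "4.0": 1.97, "5.0": 3.94}  # approx GB/s per lane

def link_gbps(gen: str, lanes: int) -> float:
    """Approximate bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

uplink = link_gbps("4.0", 4)         # assumed chipset-to-CPU link: ~7.9 GB/s
devices = [link_gbps("4.0", 4),      # secondary Gen 4 x4 M.2 SSD
           link_gbps("4.0", 4)]      # third Gen 4 x4 M.2 SSD

# Two Gen 4 SSDs behind the chipset could, in aggregate, ask for twice
# what the uplink delivers -- a scenario to keep in mind for heavy workloads.
print(sum(devices) > uplink)         # True
```

In everyday use this rarely bites, since devices seldom hit peak bandwidth simultaneously, but it’s one more reason to put your fastest drive on a CPU-direct M.2 slot.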

PCIe lane bifurcation gives your motherboard flexibility for connecting multiple devices

PCIe lane bifurcation is the ability of the CPU’s PCIe controller to split a single physical PCIe slot’s lane allocation into smaller logical groupings. Bifurcation allows you to run multiple PCIe devices using a single physical connector on the motherboard, or split the available lanes between two separate physical connectors.

PCIe lane bifurcation is often necessary due to the fixed number of lanes provided by CPUs. If you want to use multiple high-bandwidth devices that demand CPU-direct lanes, you have to split the available lanes. Typically, bifurcation happens automatically as you install components, though motherboards often give you some measure of control over the bifurcation through BIOS settings.

Some common scenarios where bifurcation comes into play:

  • Multi-GPU setups: While multi-GPU gaming is largely in the rear-view mirror, it’s relatively common for creators and AI developers to run systems with multiple graphics cards to get access to more VRAM. For such a setup to run optimally, users look for motherboards with two expansion slots capable of running in an x8/x8 configuration, with each GPU getting 8 lanes.
  • M.2 add-in cards: Some users need to install more NVMe SSDs than their motherboard makes available with onboard M.2 slots. An add-in card can fill the gap. These cards often require a physical x16 slot, which is then bifurcated to provide x4 lanes to each slot on the card (up to x4/x4/x4/x4). This is a fantastic way to utilize otherwise unused CPU-direct lanes for extremely fast storage.
  • High-bandwidth expansion cards: Certain professional capture cards, FPGA accelerators, or specialized network cards might require more than x4 lanes and benefit from CPU-direct allocation.

Please note that not all motherboards support bifurcation on all slots, and the available bifurcation options can vary significantly.
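The lane-splitting logic can be sketched in a few lines. The set of supported splits below mirrors the common x16 modes (x16, x8/x8, x8/x4/x4, x4/x4/x4/x4), but it’s an assumption for illustration, not a guarantee for any particular motherboard:

```python
# Sketch of how bifurcation carves one slot's lane budget into groups.
# Real boards expose only the splits their BIOS supports; this table is
# an illustrative assumption based on common x16 and x8 modes.

SUPPORTED_SPLITS = {
    16: [(16,), (8, 8), (8, 4, 4), (4, 4, 4, 4)],
    8:  [(8,), (4, 4)],
}

def can_bifurcate(slot_lanes: int, devices: tuple) -> bool:
    """True if the slot's controller can split its lanes to match `devices`."""
    return devices in SUPPORTED_SPLITS.get(slot_lanes, [])

print(can_bifurcate(16, (8, 8)))        # two GPUs at x8/x8 -> True
print(can_bifurcate(16, (4, 4, 4, 4)))  # quad-M.2 add-in card -> True
print(can_bifurcate(16, (16, 4)))       # can't exceed the budget -> False
```

The key constraint is the one the last line shows: bifurcation only subdivides a slot’s existing lane budget, it never adds lanes.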

PCIe lane sharing gives you choices for what you can install

Since there’s a finite number of PCIe lanes for any motherboard to play with, PC builders can unexpectedly run into pitfalls as they install more PC components. That’s because multiple elements of your motherboard might share PCIe bandwidth.

For instance, your motherboard might offer a secondary PCIe x16 slot that’s connected to the chipset in x4 mode. Since many users may not install anything into that slot, it might share bandwidth with a secondary M.2 slot. The advantage of such a layout is that you have choice: the lanes for that PCIe x16 slot don’t have to go unused. But in such an arrangement, you won’t be able to install both a graphics card and a capture card while also populating all your M.2 slots.

These kinds of dependencies are critical and are always detailed in your motherboard’s manual. These days, it’s quite common for secondary M.2 slots and PCIe expansion slots to share bandwidth. Depending on how many M.2 slots you’re using and which M.2 slots are in use, you might limit the PCIe lanes available for an expansion slot or disable the slot entirely.
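You can think of the manual’s lane-sharing table as a set of constraints, sketched below. The slot names and groupings here are invented for illustration; your own board’s manual defines the actual sharing rules:

```python
# Hypothetical sketch of a motherboard's lane-sharing rules: slots that
# draw on the same lane group can't all run at full width at once. The
# slot names and the grouping below are invented for illustration only.

SHARED_GROUPS = {
    "chipset_x4": {"PCIEX16_2", "M.2_3"},  # second x16 slot shares with an M.2
}

def sharing_conflicts(installed: set) -> list:
    """Return the lane groups where more than one installed device competes."""
    return [group for group, slots in SHARED_GROUPS.items()
            if len(installed & slots) > 1]

# GPU plus two M.2 drives on independent lanes: no conflict.
print(sharing_conflicts({"PCIEX16_1", "M.2_1", "M.2_3"}))  # []
# Populating both the second x16 slot and its paired M.2: conflict.
print(sharing_conflicts({"PCIEX16_2", "M.2_3"}))           # ['chipset_x4']
```

Running this kind of check mentally against the manual before you buy components is exactly the planning step this article is advocating.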

Get an optimized PCIe lane layout with the latest ASUS AM5 motherboards

At ASUS, we’re always refining our motherboard designs to best fit the needs of today’s PC users. For that reason, we’re optimizing the PCIe lane design of our latest AM5 motherboards.

Take the ROG Strix X870E-E Gaming WiFi7 Neo, for instance, a high-performance motherboard designed for mainstream gamers. We’ve found that gamers in this segment are increasingly looking to install many SSDs in their system. Our new PCIe lane layout makes sure that these users can install two PCIe 5.0 M.2 SSDs and three PCIe 4.0 M.2 SSDs while still running their graphics card in PCIe 5.0 x16 mode. This is made possible by sharing bandwidth between the USB4 ports and the second M.2 slot.

We’re also offering options that empower high-end gaming and streaming setups. The new ROG Crosshair X870E Glacial lets users enjoy the full bandwidth of the first PCIe 5.0 x16 slot and two PCIe 5.0 M.2 slots, ensuring maximum gaming and storage performance for high-quality streaming and recording. In this configuration, the second PCIe slot will run at PCIe 3.0 x4, delivering sufficient performance for capture cards.

Finally, we offer motherboards tailor-made for creative and AI applications. In these fields, it’s not uncommon to see multi-GPU setups that increase the amount of VRAM available to applications. The ProArt B850-Creator WiFi Neo and ROG Crosshair X870E Glacial both offer a pair of PCIe x16 slots that can run in x8/x8 configurations to allow for a smooth experience in this scenario.
