linux - Alter PCIe speed and bandwidth

    2014-07-08
  • user78016

    So I know the basics about lspci and setpci (needs to be run as root, etc), but I'm looking for more information. I want to alter the speed and bandwidth of my PCIe card's bus. The reason being, sometimes on our nodes the PCIe bus for a device is misconfigured. So I will have a PCIe card that is capable of 16x, but running at 8x. How can I switch back and forth? (The slot and card are both 16x capable, just auto-configured incorrectly)

    What I have so far is that lspci will print out the registers for me:

    snode1:~ # lspci -s ff:10.7 -xxxx
    ff:10.7 System peripheral: Intel Corporation Device 3cb7 (rev 05)
    00: 86 80 b7 3c 00 00 10 00 05 00 80 08 00 00 80 00
    10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    ...
    f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    ...
    ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    

    I can edit these registers with: setpci -s ff:10.7 40.b=50:d0,04:0c,ff. The 40.b means start at register 0x40 and write one byte at a time; the width suffix is b for byte, w for word, and l for long.

    Each comma-separated item is a value, optionally followed by a colon and a mask. Only the bits set in the mask are changed; the rest of the register keeps its previous value. In other words, new = (old AND NOT mask) OR (value AND mask).

    register 40, 50:d0:
    mask  d0 = 1101 0000, so bits 7, 6 and 4 are changed
    value 50 = 0101 0000, so bit 7 is cleared and bits 6 and 4 are set
    
    next comes register 41, 04:0c:
    mask  0c = 0000 1100, so bits 3 and 2 are changed
    value 04 = 0000 0100, so bit 3 is cleared and bit 2 is set
    
    next comes register 42, ff: (a value with no mask overwrites the whole byte)
    value ff = 1111 1111, so all eight bits are set
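    According to the pciutils documentation, a value:mask write changes only the bits set in the mask; the rest of the register keeps its old contents. That rule can be sanity-checked with shell arithmetic (a sketch: old is assumed to be 00 here, and the arithmetic merely mimics what setpci does to the register):

    ```shell
    # Simulate setpci's value:mask update rule for 40.b=50:d0,
    # assuming the register previously held 0x00.
    old=0x00
    value=0x50
    mask=0xd0
    # Only bits set in mask come from value; the rest keep old.
    new=$(( (old & ~mask) | (value & mask) ))
    printf 'new=0x%02x\n' "$new"   # prints new=0x50
    ```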
    

    What I learned so far came mostly from this page: http://www.tutorialspoint.com/unix_commands/setpci.htm

    I don't know how to find which registers map to what. Any help is appreciated, thanks.
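    For the link speed/width question specifically, the kernel already decodes the relevant registers: lspci -vv prints LnkCap (what the link can do) and LnkSta (what it negotiated), and setpci can address registers by capability name, assuming the device exposes a PCI Express capability. A sketch (the LnkSta line below is made up for the parsing demo; substitute your real device address):

    ```shell
    # Real queries, run as root (device address is an example):
    #   lspci -s ff:10.7 -vv | grep -E 'LnkCap|LnkSta'
    #   setpci -s ff:10.7 CAP_EXP+0x12.w   # raw Link Status register
    # Parsing the negotiated width out of a (made-up) LnkSta line:
    line='LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+'
    width=$(printf '%s\n' "$line" | sed -n 's/.*Width x\([0-9]*\).*/\1/p')
    echo "negotiated width: x$width"   # prints: negotiated width: x8
    ```

    Comparing LnkCap against LnkSta is the quickest way to spot a 16x-capable card that trained at 8x.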

  • Answers

    Related Question

    Is there a noticeable difference between on-board SATA bandwidth vs a PCI-e SATA controller?
  • Peter Bernier

    Main Question : Would I see any benefit (I/O bandwidth-wise) to purchasing a separate (non-raid) PCI-e controller card to plug SATAII drives into vs on-board SATAII slots?

    I'd be plugging at least four drives into whichever solution I go with.

    Context :

    For portability reasons, I'm running a fileserver in a VM (the physical machine is dedicated explicitly to running this VM), serving files from a number of virtual disks, each located on its own physical hard drive. The host has its own dedicated drive, and the VM is also on its own physical, software-raid-mirrored drive. Occasionally I'll see some slowdowns in the I/O when reading/writing files to/from the server, and I suspect it's because all of this currently goes over the PCI bus (limited to roughly 100 MB/s, vs 150 MB/s for SATA).

    I was willing to tolerate being limited to about 100 MB/s via the PCI bus, but I'm starting to want something faster.

    The machine that is running all this is a little old (P4, no PCI-e slot) so I'm considering an upgrade. I'd like whatever solution I end up going with (just a new board with >4 SATAII connections or a new board and a PCI-e controller card with 4 SATAII connections) to have as much bandwidth as possible for the disks, without getting into enterprise-level controller cards etc.
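    The arithmetic behind the suspicion above is simple: classic PCI's theoretical bandwidth (around 133 MB/s, and the whole bus is shared) gets divided across every drive streaming at once. A back-of-envelope sketch, using an assumed 133 MB/s ceiling:

    ```shell
    # Rough per-drive share of a shared classic PCI bus
    pci_mb_s=133   # theoretical 32-bit/33 MHz PCI bandwidth, MB/s (assumed)
    drives=4
    echo $(( pci_mb_s / drives ))   # MB/s per drive if all four stream at once
    ```

    So with four drives active the per-drive share drops to roughly a third of what a single SATA I drive could sustain, which is consistent with the slowdowns described.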


  • Related Answers
  • caliban

    No, you won't see any benefit. In fact there might be a very slight performance decrease if you use the PCI-e SATA controller.

  • Breakthrough

    Third-party SATA controllers are usually aimed at hardware RAID configurations and at people who don't have enough SATA ports on their motherboard. Unless you need the advanced RAID features (or unless you're short on ports), don't bother.

    When I say "advanced" I don't mean non-standard RAID levels; I just mean that a hardware controller removes most of the software-related overhead of RAID (overhead which is present to some extent even with onboard systems).