networking - How to test the throughput between 2 Network adapters on the same PC?

2014-04-06
  • sammyg

    The motherboard of my desktop PC features two Ethernet ports. Each port has its own network controller from Realtek and they are both capable of Gigabit Ethernet. Now, I have this crazy idea to take a short Ethernet category 5e cable and connect it between the two ports, to create a loop. The idea is to test the throughput of both of these ports using the shortest possible cable length.


I have already connected the two ports. One of the connections is identified as Network 6, and I have chosen Home as the network type to make it discoverable. The other one got stuck at "Identifying"; it is now shown as an unidentified network, and I have manually changed its type from Public to Home. Windows Media Player then prompted me to share media files between the two. Under Network, one port is seen as a network computer and the other shows up as a portable media player. There are three devices with the same name.


    Update


    I have set one connection to IP 10.1.1.1 and mask 255.255.255.0 and the other one to IP 10.1.1.2 and mask 255.255.255.0. I also added 10.1.1.1 as gateway on the second connection.
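The same static addressing can also be done from an elevated command prompt. This is only a sketch; the interface names ("Ethernet 1" and "Ethernet 2") are placeholders, so check yours first:

```shell
:: List the actual interface names on this machine first:
::   netsh interface show interface
:: Assign a static IP and mask to each adapter (names below are assumptions):
netsh interface ip set address "Ethernet 1" static 10.1.1.1 255.255.255.0
netsh interface ip set address "Ethernet 2" static 10.1.1.2 255.255.255.0
```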


    After doing this, "Network 7" has now been identified.


    Using pcattcp...

    On the receiver end:

    C:\PCATTCP-0114>pcattcp -r
    PCAUSA Test TCP Utility V2.01.01.14 (IPv4/IPv6)
      IP Version  : IPv4
    Started TCP Receive Test 0...
    TCP Receive Test
      Local Host  : GIGA
    **************
      Listening...: On TCPv4 0.0.0.0:5001
    
      Accept      : TCPv4 0.0.0.0:5001 <- 10.1.1.1:8127
      Buffer Size : 8192; Alignment: 16384/0
      Receive Mode: Sinking (discarding) Data
      Statistics  : TCPv4 0.0.0.0:5001 <- 10.1.1.1:8127
    16777216 bytes in 0.089 real seconds = 184089.89 KB/sec +++
    numCalls: 2061; msec/call: 0.044; calls/sec: 23157.303
    
    C:\PCATTCP-0114>
    

    On the transmitter end:

    C:\PCATTCP-0114>pcattcp -t 10.1.1.1
    PCAUSA Test TCP Utility V2.01.01.14 (IPv4/IPv6)
      IP Version  : IPv4
    Started TCP Transmit Test 0...
    TCP Transmit Test
      Transmit    : TCPv4 0.0.0.0 -> 10.1.1.1:5001
      Buffer Size : 8192; Alignment: 16384/0
      TCP_NODELAY : DISABLED (0)
      Connect     : Connected to 10.1.1.1:5001
      Send Mode   : Send Pattern; Number of Buffers: 2048
      Statistics  : TCPv4 0.0.0.0 -> 10.1.1.1:5001
    16777216 bytes in 0.091 real seconds = 180043.96 KB/sec +++
    numCalls: 2048; msec/call: 0.045; calls/sec: 22505.495
    
    C:\PCATTCP-0114>
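pcattcp reports throughput in KB/sec, which is awkward to compare against the adapters' rated line speed in megabits. Converting the transmit figure above (assuming 1 KB = 1024 bytes):

```shell
# Convert pcattcp's reported 180043.96 KB/sec to megabits per second
awk 'BEGIN { kbps = 180043.96; printf "%.1f Mbit/s\n", kbps * 1024 * 8 / 1e6 }'
```

The result is roughly 1475 Mbit/s, which is above the 1000 Mbit/s gigabit line rate, so it is worth double-checking whether the traffic actually crossed the cable or was looped back internally by the network stack.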
    
  • Answers
  • techie007

I'll preface this by saying I've never actually tried this, but I can't see why it wouldn't work:

• Hook up the wire (gigabit ports generally support Auto-MDIX, so you shouldn't need a crossover cable to get a link).

    • Configure the adapters to be on the same subnet (say 10.1.1.1 and 10.1.1.2, mask 255.255.255.0). Only one of them can/should have a gateway, and if you need to put one in, just pick one of those IPs (10.1.1.1 or .2).

Even if it's stuck on "identifying" you should still be able to use the connection ('identifying' often requires DNS, which may act weird if not available, but won't prevent IP address-based connections).

You also don't need to make them 'discoverable' per se. Just turn off the firewall and use IP addresses for targeting.

    • Test with ping: ping 10.1.1.1 to ensure connection.

    • Run a host-to-host TCP bandwidth tester. I like this one for Windows use.
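As one possible tester, an iperf3 pair bound to each adapter's address would look something like this (a sketch; `-B` binds each end to a specific local address so the two processes sit on different adapters):

```shell
# "Server" bound to the second adapter's address:
iperf3 -s -B 10.1.1.2

# "Client" bound to the first adapter, connecting across to the server:
iperf3 -c 10.1.1.2 -B 10.1.1.1
```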


  • Related Question

    networking - A gigabit network interface is CPU-limited to 25MB/s. How can I maximize the throughput?
  • netvope

I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25 MB/s, and apparently it is limited by the single-core Intel Atom 230; when the maximum throughput is reached, the CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-threading enabled CPU.

The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled. There is no Linux driver available on Nvidia's website.

    ethtool -k eth0 shows that checksum offload is enabled:

    Offload parameters for eth0:
    rx-checksumming: on
    tx-checksumming: on
    scatter-gather: on
    tcp segmentation offload: on
    udp fragmentation offload: off
    generic segmentation offload: off
    

    The following is the output of powertop when the network is idle:

    Wakeups-from-idle per second : 61.9     interval: 10.0s
    no ACPI power usage estimate available
    
    Top causes for wakeups:
      90.9% (101.3)       <interrupt> : eth0
       4.5% (  5.0)             iftop : schedule_timeout (process_timeout)
       1.8% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
       0.9% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
       0.5% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)
    

    And when the maximum throughput of about 25MB/s is reached:

    Wakeups-from-idle per second : 11175.5  interval: 10.0s
    no ACPI power usage estimate available
    
    Top causes for wakeups:
      99.9% (22097.4)       <interrupt> : eth0
       0.0% (  5.0)             iftop : schedule_timeout (process_timeout)
       0.0% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
       0.0% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
       0.0% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)
    

    Notice the 20000 interrupts per second. Could this be the cause for the high CPU usage and low throughput? If so, how can I improve the situation?

As a reference, the other computers in the network can usually transfer at 50+ MB/s without problems. A computer with a Core 2 CPU generates only 5000 interrupts per second when it's transferring at 110 MB/s. Normalized for throughput, that's about 20 times fewer interrupts than the Atom system (assuming interrupts scale linearly with throughput).
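That comparison can be checked quickly with the figures from the powertop output and transfer rates above:

```shell
# Interrupts per MB transferred: Atom at 25 MB/s vs. Core 2 at 110 MB/s
awk 'BEGIN {
  atom  = 22097.4 / 25    # interrupts per MB on the Atom system
  core2 = 5000 / 110      # interrupts per MB on the Core 2 system
  printf "atom=%.0f core2=%.0f ratio=%.1f\n", atom, core2, atom / core2
}'
```

This gives roughly 884 vs. 45 interrupts per MB, a ratio of about 19, consistent with the "about 20 times" estimate.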

Can increasing the TCP window size solve the problem? Is it a general setting in the OS, or application specific?

    And a minor question: How can I find out what is the driver in use for eth0?


  • Related Answers
  • Eric

It sounds like the network card has a fairly small buffer and is operating in interrupt mode. You might be able to increase throughput by switching to polling, if your NIC and driver support it.

    However, the problem likely can't be completely resolved without switching to a NIC with a larger buffer, which probably isn't possible with that hardware.
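On Linux, a related knob is interrupt moderation (coalescing), which batches packets per interrupt. Whether these options work depends on the driver; forcedeth support for them is not guaranteed, so treat this as a sketch:

```shell
# Show the current coalescing parameters for the interface:
ethtool -c eth0

# Wait up to 100 microseconds before raising an RX interrupt,
# batching more packets per interrupt (driver permitting):
ethtool -C eth0 rx-usecs 100

# Or let the driver adapt the interrupt rate to the load, if supported:
ethtool -C eth0 adaptive-rx on
```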

  • geek

    How can I find out what is the driver in use for eth0?

    Inspecting the output of dmesg might help.

    Here is a particular case which I get on this computer where I type the answer:

    $ dmesg | grep ethernet
    forcedeth: Reverse Engineered nForce ethernet driver. Version 0.62.
    

In certain cases NIC support is built straight into the kernel (not as a module), so it won't appear in the output of lsmod.
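Two other ways to identify the driver, both of which work even when it is built into the kernel rather than loaded as a module:

```shell
# ethtool reports the driver name, version, and bus info for an interface:
ethtool -i eth0

# sysfs exposes the bound driver as a symlink:
readlink /sys/class/net/eth0/device/driver
```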

  • Lawrence Dol

Try using TCP Optimizer. Selecting its optimized recommendations consistently improves throughput over the default TCP settings.