
Re: ib_send_bw performance puzzle


I had a similar problem with FDR when two adapters were visible at the system level. Try 'disabling' one of the adapters; you can do that like this:

 

# lspci | grep Mellanox   --> obtain the PCI IDs (suppose the IDs are 05:00.0 and 81:00.0)
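The output will look roughly like the lines below (the exact model string depends on your HCA; a ConnectX-3 FDR card is just an example):

05:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

81:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]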

# vi /etc/udev/rules.d/88-infiniband-remove-adapter.rules

 

ACTION=="add", KERNEL=="0000:05:00.0", SUBSYSTEM=="pci", RUN+="/bin/sh -c 'echo 0 > /sys/bus/pci/devices/0000:05:00.0/remove'"

ACTION=="add", KERNEL=="0000:81:00.0", SUBSYSTEM=="pci", RUN+="/bin/sh -c 'echo 1 > /sys/bus/pci/devices/0000:81:00.0/remove'"

 

Reboot the server and hopefully only one adapter will be available; then repeat the tests and see if the numbers improve. By the way, did you try to configure bonding? If so, can you share your /etc/network/interfaces configuration?
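For comparison, a minimal IPoIB bonding stanza in /etc/network/interfaces usually looks something like this (ib0/ib1 and the address are placeholders for your setup; IPoIB bonding only supports active-backup mode):

auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves ib0 ib1
    bond-mode active-backup
    bond-miimon 100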

Also, look at:

 

/sys/module/ib_ipoib/parameters/recv_queue_size

/sys/module/ib_ipoib/parameters/send_queue_size

 

to check whether the queue sizes are 128 or 512 entries; they should be at least 512. If not, change that in /etc/modprobe.d/ib_ipoib.conf by adjusting (or commenting out) the options line. Did you also try datagram mode with an MTU of 2044?
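For example, the queue sizes can be set with an options line like this in /etc/modprobe.d/ib_ipoib.conf (it takes effect after the ib_ipoib module is reloaded or the server is rebooted):

options ib_ipoib recv_queue_size=512 send_queue_size=512

and datagram mode plus the 2044 MTU can be set on the IPoIB interface (ib0 is just a placeholder for your interface name):

# echo datagram > /sys/class/net/ib0/mode

# ip link set ib0 mtu 2044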

