One thing to keep in mind is that you'll hit the bandwidth of the PCIe bus.
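As a rough back-of-envelope - and this is assuming the HCA sits in a PCIe 2.0 x8 slot, so adjust for your own hardware:

    # 8 lanes x 5 GT/s x 8b/10b encoding  = 32 Gb/s raw per direction
    # minus TLP/protocol overhead        ~= 25-26 Gb/s usable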
I've not used the ib_write test myself - but I'm fairly sure it isn't actually doing anything with the data, just accepting it and throwing it away - so the number it reports is a theoretical maximum.
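As I understand it (from the perftest package - exact flags may vary by version, so treat this as a sketch with a placeholder address), the test is run as a pair:

    # on the receiving end
    ib_write_bw --report_gbits
    # on the sending end, pointing at the other node's address
    ib_write_bw --report_gbits 192.168.1.10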
In real-life situations that bus is handling all of the data going into and out of the CPU, and on my oldest motherboards that maxes out at about 25Gb/s - which is exactly where I top out with fio tests over QDR links. I've heard that with PCIe gen 3 you'll get up to around 35Gb/s.
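By "fio tests" I mean a sequential-read job roughly along these lines - the path, sizes and job counts here are placeholders, not my actual job file:

    fio --name=seqread --filename=/mnt/ib-test/file --rw=read --bs=1M --size=8G \
        --ioengine=libaio --direct=1 --numjobs=4 --iodepth=16 --group_reporting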
Generally, whenever newer networking tech rolls out, there is nothing a single computer can do to saturate the link unless it's pushing junk data; the only way to really max it out is switch-to-switch (hardware-to-hardware) traffic.
Of course, using IPoIB or anything other than native IB traffic is going to cost you performance. In my case, with NFS over IPoIB (with or without RDMA) I quickly slam into the bandwidth of my SSDs. The only exception is my Oracle DB, where the low latency is what I'm after and the database is small enough to fit in RAM.
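For anyone wanting to try the RDMA side of NFS, the client-side mount is roughly as follows - this assumes the server is already exporting over RDMA on the usual NFS/RDMA port (20049); check the kernel's nfs-rdma documentation for your distro's specifics:

    # client side: load the RDMA transport and mount the export over it
    modprobe xprtrdma
    mount -t nfs -o rdma,port=20049 server:/export /mnt/nfs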