Mellanox Interconnect Community: Message List

Why the very high MAXIMUM latency in UDP ping-pong test?


I've set up a test system with two Dell R730 servers, each with a ConnectX-3 Pro NIC, connected by a 40GbE cable. I've followed the BIOS tuning guide for the R730 (https://community.mellanox.com/docs/DOC-2631) and the VMA Performance Tuning Guide (https://community.mellanox.com/docs/DOC-2797) very carefully and made sure I understood everything I did. Then I ran the VMA latency test with:

sudo LD_PRELOAD=libvma.so VMA_SPEC=latency numactl --cpunodebind=1 taskset -c 33 sockperf sr -i 192.168.48.2

and

sudo LD_PRELOAD=libvma.so VMA_SPEC=latency numactl --cpunodebind=1 taskset -c 33 sockperf pp -i 192.168.48.2 -t 10

 

I've checked that the NIC cards are in the right slots with x16 PCIe width and are on NUMA node #1. However, the test gives me a surprisingly high MAXIMUM latency of about 160us while the average is only about 1us:
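(For reference, I verified the slot width and NUMA node roughly as follows; the vendor-ID lookup is just a convenient way to find the card's PCI address:)

mlx_dev=$(lspci -d 15b3: | awk '{print $1; exit}')   # first Mellanox device (vendor ID 15b3)

sudo lspci -s $mlx_dev -vv | grep LnkSta             # should report Width x16

cat /sys/bus/pci/devices/0000:$mlx_dev/numa_node     # should print 1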

Test Result of UDP ping-pong with VMA

sockperf: ---> <MAX> observation =  162.336

sockperf: ---> percentile 99.999 =    6.488

sockperf: ---> percentile 99.990 =    4.949

sockperf: ---> percentile 99.900 =    2.099

sockperf: ---> percentile 99.000 =    1.705

sockperf: ---> percentile 90.000 =    1.409

sockperf: ---> percentile 75.000 =    1.356

sockperf: ---> percentile 50.000 =    1.179

sockperf: ---> percentile 25.000 =    1.135

sockperf: ---> <MIN> observation =    1.075

 

So I was wondering what could be the cause of this very high worst-case latency, and what could be done to reduce it. I've also run another test without VMA. While it gives a higher average latency of 6us, the worst-case latency is not so bad:

Test Result of UDP ping-pong without VMA

sockperf: ---> <MAX> observation =   21.201

sockperf: ---> percentile 99.999 =    9.604

sockperf: ---> percentile 99.990 =    8.219

sockperf: ---> percentile 99.900 =    7.626

sockperf: ---> percentile 99.000 =    6.796

sockperf: ---> percentile 90.000 =    6.318

sockperf: ---> percentile 75.000 =    6.147

sockperf: ---> percentile 50.000 =    5.937

sockperf: ---> percentile 25.000 =    5.848

sockperf: ---> <MIN> observation =    5.561

 

So is this caused by VMA itself, or is there something else to suspect? The OS I am using is Ubuntu 14.04 with the low-latency 3.17 kernel.

 

Any advice is appreciated!

 

Regards,

Hongyuan


Re: NFSoRDMA: svcrdma: Error -12 posting RDMA_READ


Hi guoqingwang,

 

Thank you for reaching out to the Mellanox Support Community and for your patience in this matter.

 

Unfortunately, as of Mellanox OFED version 3.4-x, NFSoRDMA is no longer supported. This was not mentioned in the Release Notes for version 3.4-1.0.0.0, but it is mentioned in the release notes of later versions.

If you want to use NFSoRDMA, you need to either downgrade the driver to Mellanox OFED version 3.3-1.0.4.0, in which we still support NFSoRDMA and which is available for Ubuntu 14.04, or switch to the OS-vendor-supplied driver (inbox driver) for the Mellanox NIC.
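For reference, the downgrade itself is straightforward. A minimal sketch, assuming the Ubuntu 14.04 x86_64 archive (adjust the archive name to your distribution):

tar xzf MLNX_OFED_LINUX-3.3-1.0.4.0-ubuntu14.04-x86_64.tgz

cd MLNX_OFED_LINUX-3.3-1.0.4.0-ubuntu14.04-x86_64

sudo ./mlnxofedinstall       # replaces the currently installed OFED; check ./mlnxofedinstall --help for NFSoRDMA-related options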

 

Hopefully this clarifies and resolves your issue.

 

Thanks.

 

Cheers,

~Martijn

The Way to manage the SN2100 Switch


Hello, I have an SN2100 switch, but I can't use the command line to manage it. I know I should use the console cable to connect from a PC, and I do reach the login prompt, but it asks for a "cumulus login" and "password". I entered "admin" as the login and "admin" as the password, but it tells me the login is incorrect.

So I reset the switch back to factory defaults, but the issue isn't solved. Your help will be appreciated, thanks so much.

Re: The Way to manage the SN2100 Switch


Hi,

 

You have an SN2100 with the Cumulus Linux OS installed; admin/admin are not the default credentials for Cumulus Linux.

 

The default credentials are:

username: cumulus
password: CumulusLinux!
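For example, over the console or SSH (a minimal sketch; replace the IP with your switch's management address):

ssh cumulus@192.168.0.10     # password: CumulusLinux!

passwd                       # change the default password after the first login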

 

Did you purchase (or at least want) a Cumulus-based switch or an MLNX-OS-based switch?

Re: SR-IOV binding inside VM for PktGen


Hi Francois,

We have a new Mellanox DPDK version 16.11.3.0 available for download on our website:

http://www.mellanox.com/page/products_dyn?product_family=209&mtag=pmd_for_dpdk

 

Please use this version and refer to the quick start guide for additional information:

http://www.mellanox.com/related-docs/prod_software/MLNX_DPDK_Quick_Start_Guide_v16.11_3.0.pdf

 

Please note that pktgen is not a Mellanox tool, and we provide only limited support for it.

However, here is a general example of running pktgen:

pktgen -c 0xfffc -n 4 -w 05:00.0,txq_inline=256,txqs_min_inline=4 -w 05:00.1,txq_inline=256,txqs_min_inline=4 -- -T -P -m '[3-5:6-9].0,[3-5:10].1' -N

 

The line might require some adaptation for your environment, but generally the device is specified using the '-w' option.
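As a basic sanity check that the port is visible inside the VM at all: the Mellanox PMDs run on top of the verbs stack (no UIO/VFIO binding is needed), so the device must show up there first. For example:

lspci | grep -i mellanox     # the PCI device (or Virtual Function) should be listed

ibv_devinfo                  # the corresponding mlx4/mlx5 verbs device should appear here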

 

Thank you,

Karen.

Re: NFSoRDMA: svcrdma: Error -12 posting RDMA_READ


Hi Martijn,

 

Thank you very much for your reply.

 

After downgrading the driver to Mellanox OFED version 3.3-1.0.4.0 as you advised, I recompiled the mlnx-ofed-kernel module with nfsrdma support.

But I still get the same error, "svcrdma: Error -12 posting RDMA_READ", while executing "fio --rw=randwrite".

I wonder if there is a bug or some wrong configuration.

 

Hope to get your suggestions.

 

Thanks.

 

Regards

   guoqingwang

 

 

PS:

HowTo Compile MLNX_OFED Drivers (mlnx-ofa_kernel example)

configure options taken from /etc/infiniband/info, plus --with-nfsrdma-mod:

mlnx-ofed-kernel-3.3# ./configure --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-ipoib-mod --with-sdp-mod --with-e_ipoib-mod --with-nfsrdma-mod

mlnx-ofed-kernel-3.3# make

 

copy xprtrdma.ko to the NFS client and svcrdma.ko to the NFS server

mlnx-ofed-kernel-3.3# cp net/sunrpc/xprtrdma/xprtrdma.ko /lib/modules/3.16.46-031646-generic/updates/dkms/

mlnx-ofed-kernel-3.3# depmod -a && modprobe xprtrdma

and likewise on the NFS server (svcrdma.ko is built in the same directory; adjust the kernel version to the server's):

mlnx-ofed-kernel-3.3# cp net/sunrpc/xprtrdma/svcrdma.ko /lib/modules/3.16.46-031646-generic/updates/dkms/

mlnx-ofed-kernel-3.3# depmod -a && modprobe svcrdma

Re: SR-IOV binding inside VM for PktGen


Hi Karen

 

I did specify a whitelist for PktGen to use.

On the host it worked like a charm, but in the guest it just does not pick up the ports.

 

Regards

Francois Kleynhans

ESXi 6.5U1 40GbE Performance problems


Hi!

I'm running a lossy TCP/IP test on ESXi 6.5U1.

 

Here is a sample link.

Speed testing 40G Ethernet in the Homelab | Erik Bussink

 

I am also testing with the same ConnectX-3 MCX354A-FCBT with the latest firmware.

All of the MCX354A-FCBT cards connect to an SX6036G gateway switch.

 

Q01. Any 40GbE port configuration on the SX6036G (whether VPI single mode or Ethernet mode) shows terrible performance:

[attached image: 40GbE_ESXi6.5U1_test.png]

 

 

But if I change the port mode to 10GbE, I get normal 10GbE Ethernet performance.

 

Do you have any solution?

 

 

Q02. The latest MCX354A-FCBT firmware can't link up properly with the SX6036G's 56GbE port mode.

 

Do you have any solution?


Re: ConnectX-3 Pro connecting at 10g instead of 40g


Hi Karen,

 

I applied the suggested step, but it didn't change the speed. VMware shows "Configured Speed: 40000 Mb" and "Actual Speed: 10000 Mb". I disabled and re-enabled the NIC in the Windows server, and also unplugged and re-plugged the cable. The speed stayed at 10 Gb. I couldn't find an option to change the speed to 40000 Mb on the Windows server side.

Any more thoughts on this?

 

[root@VMhost3:~]  esxcfg-nics -l | grep nml

vmnic1000402 0000:82:00.0 nmlx4_en    Down 0Mbps     Half   24:8a:07:6c:d8:41 1500   Mellanox Technologies MT27520 Family

vmnic4       0000:82:00.0 nmlx4_en    Up   10000Mbps Full   24:8a:07:6c:d8:40 1500   Mellanox Technologies MT27520 Family

[root@VMhost3:~] esxcli network nic set -n vmnic4 -S 40000 -D full

[root@VMhost3:~]  esxcfg-nics -l | grep nml

vmnic1000402 0000:82:00.0 nmlx4_en    Down 0Mbps     Half   24:8a:07:6c:d8:41 1500   Mellanox Technologies MT27520 Family

vmnic4       0000:82:00.0 nmlx4_en    Up   10000Mbps Full   24:8a:07:6c:d8:40 1500   Mellanox Technologies MT27520 Family
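If it helps, I can also gather the driver details and advertised link modes, e.g.:

[root@VMhost3:~] esxcli network nic get -n vmnic4     # shows driver/firmware info and Advertised Link Modes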

 

 

[attached image: mallanox 40g vmware settings 1.png]

[attached image: mlx-nic-win1.png]

Re: Why the very high MAXIMUM latency in UDP ping-pong test?


It may happen because of some kind of warmup when sending the first packets; you can see something similar even when not using VMA. What you are really interested in is the average latency, which is not shown in your output; but is it much smaller when VMA is not used?

For a better understanding, you could patch the sockperf and VMA code to see when and where the latency is higher, and finally find where the time is spent.
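If patching is too heavy-handed, sockperf can also dump every observation to a file (recent versions support a --full-log option, as far as I recall; check sockperf --help), which shows when the outliers occur:

sudo LD_PRELOAD=libvma.so VMA_SPEC=latency taskset -c 33 sockperf pp -i 192.168.48.2 -t 10 --full-log=rtt.csv

Then inspect rtt.csv to see whether the ~160us samples cluster at the start (warmup) or recur periodically.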

QinQ Ethertype configurable?


SN2100: MLNX-OS Rev 4.40 software version 3.6.1102

 

According to the QinQ documentation below, the switch adds an additional 802.1Q tag with Ethertype 0x8100 (along with the S-VLAN) on a dot1q-tunnel port.

Is it possible to use the IEEE 802.1ad standard Ethertype 0x88A8 instead?

 

Is there any CLI option to use a custom Ethertype, say 0x9100 or 0x88A8, as some other vendors support?

 

 

Re: The Way to manage the SN2100 Switch


Dear Eddie, thanks for your help. I can now log in to the switch with the credentials you provided. The switch OS seems to be Linux, and to be honest, I'm not familiar with Linux. I have another question:

Does the SN2100 switch support Digital Diagnostic Monitoring? Where can I get the user guide? If you know the command, could you tell me? Thanks so much.


Re: How to load the OS default module, not the module in the Mellanox driver


Is there any way to change the module link? I want to use the Mellanox driver (for better support).

Does the MCX414A-BCAT support RoCEv2?


Hi ,

 

 

Does the MCX414A-BCAT card support RoCEv2 mode?

 

Please confirm.

 

Thanks

Rama


OL7.4 Mellanox OFED


Are there plans to release a build for Oracle Linux 7.4? We are running the Unbreakable Enterprise Kernel (4.1.12) and are having trouble getting a successful build.
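For context, the rebuild path we are attempting is the standard one against the running kernel (assuming the installer's --add-kernel-support option, which rebuilds the packages for a non-default kernel):

sudo ./mlnxofedinstall --add-kernel-support     # rebuild MLNX_OFED for the running UEK 4.1.12 kernel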

ConnectX VPI (MT26418) NIC and SFP modules


I have a dual-port MT26418 ConnectX VPI NIC (15b3:6732). I tried to insert an LTF8505-BC-IN transceiver into its cage,

like this one:

http://www.hisensebroadband.com/html/Products/SFP_Ethernet/20160919_114.html

However, the transceiver sits loose in the cage, and I cannot hear a "click" when it is inserted (whereas I do hear that "click" when this transceiver is inserted into SFP+ cages). So this does not seem to work.

My question is: shouldn't this work? Doesn't the MT26418 support this type of SFP+ module? And if not, which modules does it support?

