That result is within the ZFS ARC (memory cache) range.
You should increase the test sample size to 100GB or above.
Best practice is to use Iometer with multi-user configurations... :)
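For illustration only, here is a minimal sketch (not from this thread) of sizing the working set so most reads miss the ARC. It assumes fio is installed on Linux; the file path and the target's RAM figure are placeholders to adjust for your setup.

# Sketch: pick a benchmark working set large enough to defeat the ZFS ARC.
# Assumptions: fio is installed; /mnt/iser-lun/fio.test is a placeholder path on
# the iSER-attached volume; SERVER_RAM_GIB approximates the ZFS target's RAM,
# which bounds its ARC.
import shlex
import subprocess

SERVER_RAM_GIB = 64       # placeholder: physical RAM of the ZFS target
MIN_TEST_SIZE_GIB = 100   # the "100GB or above" rule of thumb from above

def test_size_gib() -> int:
    # Use the larger of 100 GiB or 2x the target's RAM so most reads miss the ARC.
    return max(MIN_TEST_SIZE_GIB, 2 * SERVER_RAM_GIB)

def fio_command(path: str = "/mnt/iser-lun/fio.test") -> list:
    return shlex.split(
        f"fio --name=arc-miss-test --filename={path} --size={test_size_gib()}G "
        "--rw=randread --bs=4k --ioengine=libaio --iodepth=32 "
        "--numjobs=4 --runtime=120 --time_based --group_reporting"
    )

if __name__ == "__main__":
    cmd = fio_command()
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)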
BR,
Jae-Hoon Choi
Understood, but I am testing the interconnect, so where the data is coming from is a bit irrelevant. Just saying that iSER is performing somewhat better than iSCSI, so it is worth going through this hassle for those sitting on the sidelines wondering =). I was never able to get above 8800 or so on iSCSI. The non-sequential results are very close, not a huge difference.
I've encountered the same problem with the latest CentOS 7 inbox RPMs. Back to MOFED, it seems.
RDMA vs. TCP/IP is a latency war...
You should check latency with another tool using multi-user configurations.
I'm testing it as well.
As you increase the client count, there is a huge difference in latency at the same throughput.
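For a rough illustration of why (this is just Little's Law, not measured data; the IOPS figure below is a placeholder):

# Little's Law sketch: outstanding I/Os = IOPS x average latency, so at a fixed
# throughput, more concurrent clients (outstanding I/Os) means proportionally
# higher per-I/O latency.
def avg_latency_ms(iops: float, outstanding_ios: int) -> float:
    return outstanding_ios / iops * 1000.0

if __name__ == "__main__":
    iops = 200_000  # hypothetical fixed throughput, 4k IOPS
    for clients in (1, 8, 32, 128):
        print(f"{clients:>4} outstanding I/Os @ {iops} IOPS "
              f"-> ~{avg_latency_ms(iops, clients):.3f} ms average latency")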
BR,
Jae-Hoon Choi
Hi Grant,
Please see the user manual "Mellanox ConnectX-4/ConnectX-5 NATIVE ESXi Driver for VMware vSphere 6.5", section 3.1.4, Priority Flow Control (PFC).
Thanks,
Karen.
Karen, thank you for the reply.
Can you tell me what Mellanox's recommended configuration is when there is just one traffic flow? My ConnectX-4 cards are directly connected between two servers and are used only to connect the iSER initiator to the iSER target. There is no other traffic on these NICs. I have seen references to creating a VLAN and assigning it to one of the priority queues, yet none of that applies to my scenario. Should I just run global pause?
Hi, it looks like the latest Debian 8 (Jessie) sources will compile and install on Debian 9 (Stretch); however, do you have a timeline for official support?
Thanks
We will support Debian 9 (kernel 4.9) in the MLNX_OFED 4.2 release.
Thanks
What OS and kernel are you using?
Is that release expected next month?
Yes, it is scheduled for next month.
Thanks
Hi, I am using a ConnectX-4 with a fibre 40GBASE-SR4 QSFP. The QSFP is directly connected via a fibre breakout to 4 independent 10G streams.
It appears the standard driver included with Debian connects at 10G and only uses one of the lanes.
What are the other options to receive data from all 4 lanes? Is this only possible using VMA?
Thanks
Do you mean you connect the QSFP to a CX4 card?
No, the 40GBASE-SR4 QSFP has an MPO-12 fibre connector, which is then split into 4 independent streams (LC fibre) coming from different devices that use 10GBASE-R. I would like to receive data from all of the streams (transmit is not important).
No, unfortunately that does not work.
Running flash_self_safe forces the switch to boot from the secondary image, and we get the exact same error output.
From U-Boot I attempted to download an image via TFTP, but although the file transfer begins, the switch outputs an error and boots to the same error.
I opened the unit and there are 4 red LEDs, so I suspect a hardware failure.
The LEDs are as follows:
Top row of LEDs (next to the RAM module, below the chassis fans):
D104 + D105 on RED
LEDs in the bottom right of the chassis:
CPLD 2 R643 - D87 RED
CPLD 4 R645 - D89 RED
The LEDs turn red shortly after power is applied.
Do you know what may have failed? Will the failed components be replaceable? This is a legacy unit, and another 7 switches may have a similar problem.
Kind regards.
Rav.
In this case, I think you need to RMA the switch, if the switch is still under warranty.
Hello, my configuration is simple: OEL 7.4, two Mellanox ConnectX-3 VPI cards, an SX1036 switch, and two very fast NVMe drives.
My problem is that I configured NFS over RDMA using the InfiniBand Support packages from OEL, because Mellanox OFED does not support NFS over RDMA from version 3.4 onwards.
Everything works: I can connect to the server over RDMA and read/write to the NFS server, etc., but I have a problem with performance.
I ran a test on my striped LV and fio shows me 900k IOPS and around 3.8 GB/s at 4k, but when I run the same tests on the NFS client I can't get more than 190k IOPS. The problem is not bandwidth, because when I change the block size I can get over 4 GB/s; the problem seems to be the number of IOPS delivered from server to client.
Does anybody have an idea? I have already changed rsize and wsize to 1M, but without any performance benefit.
My next step will be to configure link aggregation (LACP) to see if it changes anything; right now I'm using only one port.
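For reference, one rough way to check whether the client is latency-bound rather than bandwidth-bound is to sweep concurrency with fio on the NFS mount. A minimal sketch only (fio is assumed to be installed; /mnt/nfs and the job parameters are placeholders, not values from this setup):

# Sketch: sweep concurrency on the NFS client to see whether IOPS scale with
# outstanding I/Os (latency-bound) or stay flat (server/transport-bound).
# Assumptions: fio is installed; /mnt/nfs is the NFS-over-RDMA mount (placeholder).
import shlex
import subprocess

def run_fio(numjobs: int, iodepth: int, path: str = "/mnt/nfs/fio.test") -> None:
    cmd = shlex.split(
        f"fio --name=nfs-4k-randread --filename={path} --size=20G "
        f"--rw=randread --bs=4k --ioengine=libaio --direct=1 "
        f"--numjobs={numjobs} --iodepth={iodepth} "
        "--runtime=60 --time_based --group_reporting"
    )
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # If IOPS rise with concurrency, the 190k ceiling is per-queue latency,
    # not a bandwidth limit.
    for numjobs, iodepth in [(1, 1), (1, 32), (8, 32), (16, 32)]:
        run_fio(numjobs, iodepth)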
Adam
Legacy equipment out of warranty.
Are we able to purchase a service contract or is this equipment unsupported?
Many thanks.
You should contact the sales team.
Thank you for the assistance.
This call can now be closed.