Channel: Mellanox Interconnect Community: Message List

Re: iSER for ESXi 6.5 No target detected, no traffic sent.


That result is within the range of the ZFS in-memory cache (ARC), so you are largely measuring server RAM.

You need to increase the test sample size to 100 GB or more.

Best practice is to use Iometer with multi-user configurations... :)
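For illustration, here is a minimal sketch of a test whose working set is large enough to defeat the ARC. It assumes fio and Python are available on the test client; the target path /mnt/test/fio.dat is purely hypothetical:

#!/usr/bin/env python3
# Minimal sketch: run a benchmark whose working set exceeds the ZFS ARC,
# so the numbers reflect the interconnect and disks rather than server RAM.
# Assumes fio is installed; the target path is hypothetical.
import subprocess

cmd = [
    "fio",
    "--name=arc-bypass",
    "--filename=/mnt/test/fio.dat",   # hypothetical file on the iSER-backed datastore
    "--size=100G",                    # working set well above the ARC size
    "--rw=randread",
    "--bs=4k",
    "--ioengine=libaio",
    "--direct=1",                     # bypass the client page cache as well
    "--iodepth=32",
    "--numjobs=4",                    # rough stand-in for a multi-user workload
    "--group_reporting",
]
subprocess.run(cmd, check=True)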

 

BR,

Jae-Hoon Choi


Re: iSER for ESXi 6.5 No target detected, no traffic sent.


Understood, though I am testing the interconnect, so where the data comes from is somewhat irrelevant. I'm just saying that iSER is performing somewhat better than iSCSI, so the hassle is worth it for those sitting on the sidelines wondering =). I was never able to get above 8800 or so on iSCSI. The non-sequential results are very close, not a huge difference.

Re: mlx5 with inbox driver 100G is not detecting


I've encountered the same problem with the latest CentOS 7 inbox RPMs. Back to MLNX_OFED, it seems.

Re: iSER for ESXi 6.5 No target detected, no traffic sent.


RDMA vs. TCP/IP = a latency war...

You should also check latency with another tool, using multi-user configurations.

I have tested this myself.

As the client count increases, there is a huge difference in latency at the same throughput.
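As a rough sketch of that kind of comparison (the target path and job counts are hypothetical, and a recent fio with JSON output is assumed), one can sweep the client count and watch how completion latency grows at the same throughput:

#!/usr/bin/env python3
# Sketch: sweep the number of concurrent jobs and record fio's mean completion
# latency, to see how iSER (RDMA) and iSCSI (TCP/IP) degrade as the client
# count grows. The target path and job counts are hypothetical.
import json
import subprocess

for jobs in (1, 4, 8, 16):
    result = subprocess.run(
        [
            "fio", "--name=lat-sweep",
            "--filename=/mnt/test/fio.dat",   # hypothetical target path
            "--size=10G", "--rw=randread", "--bs=4k",
            "--ioengine=libaio", "--direct=1", "--iodepth=1",
            f"--numjobs={jobs}", "--group_reporting",
            "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(result.stdout)["jobs"][0]["read"]
    lat_us = stats["clat_ns"]["mean"] / 1000.0
    print(f"{jobs:>2} clients: {stats['iops']:,.0f} IOPS, mean completion latency {lat_us:.1f} us")

Run the same sweep against the iSCSI datastore and the iSER datastore and compare the latency column, not just the throughput.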

 

BR,

Jae-Hoon Choi

Re: PFC on ESXi 6.5


Karen, thank you for the reply.

 

Can you tell me what Mellanox's recommended configuration is if there is just one traffic flow? My ConnectX-4 cards are directly connected between two servers and are used only to connect the iSER initiator to the iSER target. There is no other traffic on these NICs. I have seen references to creating a VLAN and assigning it to one of the priority queues, yet none of this applies to my scenario. Should I just run global pause?
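For what it's worth, on the Linux side of a back-to-back link, global pause (802.3x flow control) is usually toggled with ethtool. The sketch below is only an illustration under that assumption, with a hypothetical interface name, and is not an official Mellanox recommendation:

#!/usr/bin/env python3
# Sketch: enable global pause (802.3x flow control) on the Linux target side
# of a back-to-back ConnectX-4 link. The interface name is hypothetical and
# this is only an illustration, not an official recommendation.
import subprocess

iface = "ens785f0"  # hypothetical ConnectX-4 interface name

# Enable pause frames in both directions, then print the resulting settings.
subprocess.run(["ethtool", "-A", iface, "rx", "on", "tx", "on"], check=True)
subprocess.run(["ethtool", "-a", iface], check=True)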

Debian 9 OFED driver


Hi, it looks like the latest Debian 8 (Jessie) sources will compile and install on Debian 9 (Stretch); however, do you have a timeline for official support?

Thanks

Re: Debian 9 OFED driver


We will support Debian 9 (kernel 4.9) in the MLNX_OFED 4.2 release.

 

Thanks


Re: mlx5 with inbox driver 100G is not detecting


What OS and kernel are you using?

Re: Debian 9 OFED driver


Is that release expected next month?

Re: Debian 9 OFED driver


Yes, it is scheduled for next month.

 

Thanks

40G SR4 NIC Support


Hi, I am using a ConnectX-4 with a fibre 40GBASE-SR4 QSFP. The QSFP is connected via a fibre breakout to 4 independent 10G streams.

It appears the standard driver included with Debian connects at 10G and only uses one of the lanes.

What are the other options to receive data from all 4 lanes? Is this only possible using VMA?

 

Thanks

Re: 40G SR4 NIC Support


Do you mean you connected the QSFP to a ConnectX-4 card?

Re: 40G SR4 NIC Support


No, the 40GBASE-SR4 QSFP has an MPO-12 fibre connector, which is split into 4 independent streams (LC fibre) coming from different devices that use 10GBASE-R. I would like to receive data from all streams (transmit is not important).

Re: Mellanox grid director 4036e won't boot.


No, unfortunately that does not work.

Running flash_self_safe forces the switch to boot from the secondary image, and we get the exact same error output.

From U-Boot I attempted to download an image via TFTP, but although the file transfer begins, the switch outputs an error and boots to the same error.

 

I opened the unit and there are 4 red LEDs so I suspect a hardware failure.

The LEDs are as follows:

Top row of LEDs (next to the RAM module, below the chassis fans):

D104 + D105 on RED

LEDs in the bottom right of the chassis:

CPLD 2 R643 - D87 RED

CPLD 4 R645 - D89 RED

 

The LEDs turn red shortly after power is applied.

 

Do you know what may have failed? Will the failed components be replaceable? This is a legacy unit, and another 7 switches may have a similar problem.

 

Kind regards.

Rav.


Re: Mellanox grid director 4036e won't boot.


In this case, I think you need to RMA the switch, if it is still under warranty.

NFS over RDMA on OEL 7.4


Hello, my configuration is simple: OEL 7.4, two Mellanox ConnectX-3 VPI cards, an SX1036 switch, and two very fast NVMe drives.

My problem is that I configured NFS over RDMA using the InfiniBand Support packages from OEL, because Mellanox OFED no longer supports NFS over RDMA from version 3.4 onwards.

Everything is working: I can connect to the server over RDMA and I can write/read from the NFS server, etc., but I have a problem with performance.

I ran tests on my striped LV and fio shows me 900k IOPS and around 3.8 GB/s at 4k, but when I run the same tests on the NFS client I can't get more than 190k IOPS. The problem is not bandwidth, because when I change the block size I can get even over 4 GB/s; the problem seems to be the number of IOPS delivered from server to client.

Does anybody have an idea? I already changed rsize and wsize to 1M, but without any performance benefit.

My next step will be to configure link aggregation (LACP) to see if it changes anything; right now I'm using only one port.
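As a point of reference, a typical NFS-over-RDMA mount plus a parallel 4k client test looks roughly like the sketch below. The server name, export path, and mount point are hypothetical, the rsize/wsize values mirror the 1M already tried, and the NFS RDMA client module (xprtrdma/rpcrdma) is assumed to be loaded. Pushing numjobs and iodepth up on the client is one way to check whether the ~190k IOPS ceiling is a queue-depth limit rather than a transport limit:

#!/usr/bin/env python3
# Sketch: mount an NFS export over RDMA (default port 20049) and run a
# parallel 4k random-read test against it. Server, export path, and mount
# point are hypothetical; rsize/wsize mirror the 1M values already tried.
import subprocess

server_export = "nfs-server:/export/nvme"   # hypothetical
mount_point = "/mnt/nfs_rdma"               # hypothetical, must already exist

subprocess.run(
    ["mount", "-t", "nfs",
     "-o", "rdma,port=20049,rsize=1048576,wsize=1048576",
     server_export, mount_point],
    check=True,
)

# Scale client-side parallelism to see whether IOPS keep climbing or
# plateau around the same ~190k figure.
subprocess.run(
    ["fio", "--name=nfs-rdma-4k",
     f"--filename={mount_point}/fio.dat",
     "--size=20G", "--rw=randread", "--bs=4k",
     "--ioengine=libaio", "--direct=1",
     "--iodepth=32", "--numjobs=8", "--group_reporting"],
    check=True,
)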

 

Adam

Re: Mellanox grid director 4036e won't boot.


Legacy equipment out of warranty.

Are we able to purchase a service contract or is this equipment unsupported?

Many thanks.

Re: Mellanox grid director 4036e won't boot.


Thank you for the assistance.

This call can now be closed.
