Channel: Mellanox Interconnect Community: Message List

RoCE in FreeBSD 10.1 is failing at poll completion when sending buffer data.


We are running a server and a client, where the client sends some data to the server. When we start the server, it starts polling with 'ibv_poll_cq', waiting for a completion from the client. But the client will already have posted its data using 'ibv_post_send'; it returns without any error, while the server remains stuck in the polling loop.
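For context, here is a minimal sketch (not the original code) of the post-and-poll pattern being described, using libibverbs; the qp, cq, mr, buffer, and connection setup are assumed to exist already. Two things worth checking in a hang like this: the server only gets a receive completion if a receive work request was posted before the client's send arrives, and a send only generates a CQE if it is posted with IBV_SEND_SIGNALED (or the QP was created with sq_sig_all=1).

/* Minimal sketch: post one signaled send, then busy-poll the CQ for its completion. */
#include <stdint.h>
#include <stdio.h>
#include <infiniband/verbs.h>

static int send_and_wait(struct ibv_qp *qp, struct ibv_cq *cq,
                         struct ibv_mr *mr, void *buf, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,  /* needed for a send CQE unless sq_sig_all=1 */
    };
    struct ibv_send_wr *bad_wr = NULL;
    struct ibv_wc wc;
    int n;

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Busy-poll the completion queue until exactly one completion arrives. */
    do {
        n = ibv_poll_cq(cq, 1, &wc);
    } while (n == 0);

    if (n < 0 || wc.status != IBV_WC_SUCCESS) {
        fprintf(stderr, "completion error: %s\n",
                n < 0 ? "ibv_poll_cq failed" : ibv_wc_status_str(wc.status));
        return -1;
    }
    return 0;
}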


OpenStack Neutron ML2 w/ SR-IOV (VLAN): Can't ping VM IP


Hello,

 

I'm trying OpenStack Icehouse ML2 w/ SR-IOV (VLAN) on CentOS 7 and ConnectX-3 cards (40GbE):

 

(1) Mellanox-Neutron-Icehouse-Redhat-Ethernet - OpenStack 

(2) Nova-neutron-sriov - OpenStack

(3) Mellanox OFED Driver Installation and Configuration for SR-IOV

 

After setting up SR-IOV with ConnectX-3 by following (3), I also verified that VLAN traffic works between the systems.

 

I installed OpenStack Icehouse using packstack on single node (all-in-one).

And then, I modified all the configurations for SR-IOV while following (1), except /etc/neutron/dhcp_agent.ini.

I changed the interface_driver from BridgeInterfaceDriver to OVSInterfaceDriver in /etc/neutron/dhcp_agent.ini (shown below).
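For clarity, the relevant line looks like this (the full driver class paths are my assumption based on the Icehouse-era Neutron tree):

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# was: interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver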

 

Even though the VM can get a DHCP IP, the host and the VM can't ping each other.

 

[root@gpu6 ~(keystone_admin)]# cat /etc/modprobe.d/mlx4_core.conf

options mlx4_core port_type_array=2,2 num_vfs=16 probe_vf=0 enable_64b_cqe_eqe=0  log_num_mgm_entry_size=-1
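My reading of those module parameters, for anyone comparing setups (the annotations are mine, not from the original post):

# port_type_array=2,2        both ports in Ethernet mode
# num_vfs=16                 expose 16 SR-IOV virtual functions
# probe_vf=0                 no VFs are probed/used on the hypervisor itself
# enable_64b_cqe_eqe=0       keep 32-byte CQEs/EQEs for VF driver compatibility
# log_num_mgm_entry_size=-1  enable device-managed flow steering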

 

[root@gpu6 ~(keystone_admin)]# neutron net-list

+--------------------------------------+--------+-----------------------------------------------------+

| id                                   | name   | subnets                                             |

+--------------------------------------+--------+-----------------------------------------------------+

| 1c555886-f026-4727-a2e6-99913e383bf2 | net40g | afdeec0e-6b9f-421a-9a5b-421a77c283d8 192.168.2.0/24 |

+--------------------------------------+--------+-----------------------------------------------------+

[root@gpu6 ~(keystone_admin)]# neutron subnet-list

+--------------------------------------+-------------+----------------+--------------------------------------------------+

| id                                   | name        | cidr           | allocation_pools                                 |

+--------------------------------------+-------------+----------------+--------------------------------------------------+

| afdeec0e-6b9f-421a-9a5b-421a77c283d8 | demo-subnet | 192.168.2.0/24 | {"start": "192.168.2.2", "end": "192.168.2.254"} |

+--------------------------------------+-------------+----------------+--------------------------------------------------+

[root@gpu6 ~(keystone_admin)]# neutron port-list

+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------------+

| id                                   | name       | mac_address       | fixed_ips                                                                          |

+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------------+

| 385600c0-fafa-4e15-b0b4-83f780e26daf |            | fa:16:3e:ce:2b:5f | {"subnet_id": "afdeec0e-6b9f-421a-9a5b-421a77c283d8", "ip_address": "192.168.2.2"} |

| 9a291386-c020-4cfd-9e11-bc98fa418566 |            | fa:16:3e:90:d8:bc | {"subnet_id": "afdeec0e-6b9f-421a-9a5b-421a77c283d8", "ip_address": "192.168.2.1"} |

| e0f81bbb-2da3-4ba0-9bba-3f90a79fd9a7 | sriov_port | fa:16:3e:8b:83:76 | {"subnet_id": "afdeec0e-6b9f-421a-9a5b-421a77c283d8", "ip_address": "192.168.2.7"} |

+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------------+

 

[root@gpu6 ~(keystone_admin)]# ip netns

qdhcp-1c555886-f026-4727-a2e6-99913e383bf2

qrouter-4d297bce-3888-4036-9b63-e61028f9ff8f

[root@gpu6 ~(keystone_admin)]# ip netns exec qdhcp-1c555886-f026-4727-a2e6-99913e383bf2 ping -c1 192.168.2.2

PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.

64 bytes from 192.168.2.2: icmp_seq=1 ttl=64 time=0.027 ms

--- 192.168.2.2 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms

[root@gpu6 ~(keystone_admin)]# ip netns exec qdhcp-1c555886-f026-4727-a2e6-99913e383bf2 ping -c1 192.168.2.1

PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.495 ms

--- 192.168.2.1 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms

[root@gpu6 ~(keystone_admin)]# ip netns exec qdhcp-1c555886-f026-4727-a2e6-99913e383bf2 ping -c1 192.168.2.7

PING 192.168.2.7 (192.168.2.7) 56(84) bytes of data.

^C

--- 192.168.2.7 ping statistics ---

1 packets transmitted, 0 received, 100% packet loss, time 0ms

 

 

The VM also can't ping 192.168.2.1 or 192.168.2.2; the only address it can ping is its own IP, 192.168.2.7.

VM's lspci result is as follows:

00:04.0 Network controller [0280]: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function] [15b3:1004]

00:05.0 Network controller [0280]: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function] [15b3:1004]

 

 

[root@gpu6 ~(keystone_admin)]# ovs-vsctl show

af9350bf-af96-4fac-adf5-0cd665e1215e

...

    Bridge br-int

        fail_mode: secure

        Port "qr-9a291386-c0"

            tag: 1

            Interface "qr-9a291386-c0"

                type: internal

        Port int-br-ex

            Interface int-br-ex

        Port "int-br-ens4"

            Interface "int-br-ens4"

        Port "tap385600c0-fa"

            tag: 1

            Interface "tap385600c0-fa"

                type: internal

        Port br-int

            Interface br-int

                type: internal

    Bridge "br-ens4"

        Port "br-ens4"

            Interface "br-ens4"

                type: internal

        Port "ens4"

            Interface "ens4"

        Port "phy-br-ens4"

            Interface "phy-br-ens4"

    ovs_version: "2.1.3"

 

[root@gpu6 ~(keystone_admin)]# ovs-ofctl show br-int

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000029b05424542

n_tables:254, n_buffers:256

capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP

actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE

25(tap385600c0-fa): addr:00:00:00:00:00:00

     config:     PORT_DOWN

     state:      LINK_DOWN

     speed: 0 Mbps now, 0 Mbps max

26(qr-9a291386-c0): addr:00:00:00:00:00:00

     config:     PORT_DOWN

     state:      LINK_DOWN

     speed: 0 Mbps now, 0 Mbps max

29(int-br-ex): addr:ee:06:9e:4b:9e:62

     config:     0

     state:      0

     current:    10GB-FD COPPER

     speed: 10000 Mbps now, 0 Mbps max

30(int-br-ens4): addr:6e:aa:42:99:af:d2

     config:     0

     state:      0

     current:    10GB-FD COPPER

     speed: 10000 Mbps now, 0 Mbps max

LOCAL(br-int): addr:02:9b:05:42:45:42

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

 

[root@gpu6 ~(keystone_admin)]# ovs-ofctl show br-ens4

OFPT_FEATURES_REPLY (xid=0x2): dpid:000024be05820470

n_tables:254, n_buffers:256

capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP

actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE

1(ens4): addr:24:be:05:82:04:70

     config:     0

     state:      0

     current:    AUTO_NEG

     advertised: AUTO_NEG AUTO_PAUSE

     supported:  FIBER AUTO_NEG AUTO_PAUSE AUTO_PAUSE_ASYM

     speed: 0 Mbps now, 0 Mbps max

11(phy-br-ens4): addr:b6:07:55:f6:42:7c

     config:     0

     state:      0

     current:    10GB-FD COPPER

     speed: 10000 Mbps now, 0 Mbps max

LOCAL(br-ens4): addr:24:be:05:82:04:70

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

 

Could you please let me know what I should check for this problem?

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?

Re: IBM FlexSystem EN6131 40Gb ethernet switch and MIB


It is actually the same MIB that Mellanox provides with MLNX-OS.

It can be downloaded from the Mellanox support website.

Re: Installing OFED on Ubuntu 14.10 (problems flashing the network adapter firmware)


Could you post the output of the following command?

lspci |grep Mellanox | awk '{print $1}' | xargs -i -r lspci -s {} -xxxvvv

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?


For Windows users, WinOF 4.8 is the last official version that supported ConnectX-2 cards. As Mellanox releases new drivers, these include features that have strong dependencies on firmware releases and Microsoft OS releases.

Unfortunately, it became almost impossible for ConnectX-2 cards to keep up with the new features and capabilities, and therefore the driver releases had to continue without them.

 

Specifically, I know of a few folks who are using WinOF 4.9 with ConnectX-2 cards and it works. It was not tested and released this way; still, that doesn't stop people from trying ;-)

 

Cheers!

ARP timeout on Mellanox IB Gateway SX6036G


Hi,

 

I have set up an installation of OpenStack Juno on 8 nodes: one cloud controller and 7 Nova compute nodes. The nodes are all equipped with Mellanox ConnectX-3 InfiniBand HCAs. OpenStack networking (Neutron) is configured to work over the InfiniBand interface using the eth_ipoib driver for Ethernet para-virtualization. The Ethernet interface created by eth_ipoib is used by the OpenStack networking services over 2 UFM (version 4.8.0) partitions:

 

  • one pkey is used for the private network between cloud nodes. It is defined as a local network and it is only connected to the Logical Server Group of the cloud nodes
  • the other pkey is used for the OpenStack/Neutron external network. It is defined as a local network and it is connected to the Logical Server Group of the cloud nodes and to 2 InfiniBand SX6036G gateways (Mellanox OS 3.3.5006) configured in load-balancing active-active mode (load balancing algorithm: ib-base-ip)

 

Neutron is configured with the new Distributed Virtual Router (DVR) feature of Juno (https://wiki.openstack.org/wiki/Neutron/DVR). With DVR enabled, NAT services are distributed across all the nodes (controller and compute). One node (our cloud controller) provides only SNAT to VMs, and all the other compute nodes provide DNAT to VMs.

 

Networking over the private network is working fine. On the external network, however, problems arise when assigning floating IPs to the VMs. DVR manages floating IPs by creating a floating IP router namespace, the floating IP agent gateway (FIP), on each node that runs a VM with an assigned floating IP. Each floating IP of a VM is then assigned to the DVR namespace of the tenant network.

 

I suspect that floating IPs are not always reachable from the internet because the gateways sometimes map IP addresses to the wrong IB MAC addresses in the ARP table.

 

For example, the ARP tables for the external network (proxy-arp 7) on the two gateways currently contain:

 

root@master# xdsh gw -l admin --devicetype IBSwitch::Mellanox "show ip arp interface proxy-arp 7"

sx60g01: Mellanox MLNX-OS Switch Management

sx60g01:

sx60g01: Total number of entries: 16

sx60g01:   Address              Type            Hardware Address          Interface          

sx60g01:   ------------------------------------------------------------------------

sx60g01:   XXX.XXX.XX.188       Dynamic ETH     00:24:F7:14:7A:C1         proxy-arp 7        

sx60g01:   XXX.XXX.XX.189       Dynamic ETH     00:24:F7:14:B4:C1         proxy-arp 7        

sx60g01:   XXX.XXX.XX.190       Dynamic ETH     00:00:0C:07:AC:83         proxy-arp 7        

sx60g01:   XXX.XXX.XX.130       Dynamic IB      50:05:07:00:5B:01:7E:11   proxy-arp 7        

sx60g01:   XXX.XXX.XX.132       Dynamic IB      50:05:07:00:5B:01:7E:11   proxy-arp 7        

sx60g01:   XXX.XXX.XX.134       Dynamic IB      50:05:07:00:5B:01:80:0D   proxy-arp 7        

sx60g01:   XXX.XXX.XX.136       Dynamic IB      50:05:07:00:5B:01:80:0D   proxy-arp 7        

sx60g01:   XXX.XXX.XX.138       Dynamic IB      50:05:07:00:5B:01:7F:E5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.142       Dynamic IB      50:05:07:00:5B:01:78:19   proxy-arp 7        

sx60g01:   XXX.XXX.XX.144       Dynamic IB      50:05:07:00:5B:01:7F:D5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.146       Dynamic IB      50:05:07:00:5B:01:7F:D5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.148       Dynamic IB      50:05:07:00:5B:01:7E:11   proxy-arp 7        

sx60g01:   XXX.XXX.XX.152       Dynamic IB      50:05:07:00:5B:01:7F:D5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.154       Dynamic IB      50:05:07:00:5B:01:7F:E5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.158       Dynamic IB      50:05:07:00:5B:01:78:19   proxy-arp 7        

sx60g01:   XXX.XXX.XX.160       Dynamic IB      50:05:07:00:5B:01:7E:CD   proxy-arp 7        

sx60g01:

sx60g02: Mellanox MLNX-OS Switch Management

sx60g02:

sx60g02: Total number of entries: 12

sx60g02:   Address              Type            Hardware Address          Interface          

sx60g02:   ------------------------------------------------------------------------

sx60g02:   XXX.XXX.XX.188       Dynamic ETH     00:24:F7:14:7A:C1         proxy-arp 7        

sx60g02:   XXX.XXX.XX.189       Dynamic ETH     00:24:F7:14:B4:C1         proxy-arp 7        

sx60g02:   XXX.XXX.XX.190       Dynamic ETH     00:00:0C:07:AC:83         proxy-arp 7        

sx60g02:   XXX.XXX.XX.129       Dynamic IB      50:05:07:00:5B:01:7E:11   proxy-arp 7        

sx60g02:   XXX.XXX.XX.133       Dynamic IB      50:05:07:00:5B:01:7E:B1   proxy-arp 7        

sx60g02:   XXX.XXX.XX.135       Dynamic IB      50:05:07:00:5B:01:7E:CD   proxy-arp 7        

sx60g02:   XXX.XXX.XX.137       Dynamic IB      50:05:07:00:5B:01:80:0D   proxy-arp 7        

sx60g02:   XXX.XXX.XX.139       Dynamic IB      50:05:07:00:5B:01:78:19   proxy-arp 7        

sx60g02:   XXX.XXX.XX.141       Dynamic IB      50:05:07:00:5B:01:7F:E5   proxy-arp 7        

sx60g02:   XXX.XXX.XX.143       Dynamic IB      50:05:07:00:5B:01:7E:B1   proxy-arp 7        

sx60g02:   XXX.XXX.XX.145       Dynamic IB      50:05:07:00:5B:01:7F:D5   proxy-arp 7        

sx60g02:   XXX.XXX.XX.147       Dynamic IB      50:05:07:00:5B:01:7E:B1   proxy-arp 7        

sx60g02:

 

So, for example, the IP address XXX.XXX.XX.133, which is unreachable from the internet, is mapped on sx60g02 to IB MAC 50:05:07:00:5B:01:7E:B1. However, the node running the VM to which that floating IP is assigned has MAC address 50:05:07:00:5b:01:7e:cd.

 

Moreover, looking at the ARP table of sx60g01, you can see that there are entries for the following IPs

 

sx60g01:   XXX.XXX.XX.148       Dynamic IB      50:05:07:00:5B:01:7E:11   proxy-arp 7        

sx60g01:   XXX.XXX.XX.152       Dynamic IB      50:05:07:00:5B:01:7F:D5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.154       Dynamic IB      50:05:07:00:5B:01:7F:E5   proxy-arp 7        

sx60g01:   XXX.XXX.XX.158       Dynamic IB      50:05:07:00:5B:01:78:19   proxy-arp 7        

sx60g01:   XXX.XXX.XX.160       Dynamic IB      50:05:07:00:5B:01:7E:CD   proxy-arp 7        

 

I assigned these floating IPs to VMs more than a week ago. At the moment no nodes, either virtual or physical, have these IPs assigned.

 

Why are they still present in the ARP table of the gateway?

 

Thank you very much in advance

 

Ale

Re: OpenStack Neutron ML2 w/ SR-IOV (VLAN): Can't ping VM IP


I'm trying to compare (1) a default OpenStack testbed using LibvirtGenericVIFDriver with (2) an SR-IOV+VLAN testbed using MlxEthVIFDriver. (1) works fine, but (2) has no qv* devices (checked with ip link / ovs-ofctl show br-int) and no iptables rules related to neutron-openvswi-* and the VM IP/port.


Designing for high availability


We are in the early planning stages for a high bandwidth interconnect between servers. The software stack will be OFED and GPFS on Linux. GPFS can use RDMA.

The idea is to have two IB switches, with each server connected to both. I've seen such configurations with normal Linux bonding (mode=1, active/passive); a minimal example of what I mean is sketched after the questions below.

- How does this work out if RDMA is used?

- Is it possible to have an active/active configuration, at least for RDMA?
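As a point of reference, a minimal sketch of the mode=1 (active-backup) bonding mentioned above, using iproute2. The IPoIB interface names ib0/ib1 and the address are assumptions, and this covers only the IPoIB interfaces; it says nothing about native RDMA traffic, which is part of the question.

# minimal active/passive bonding sketch, assuming IPoIB interfaces ib0 and ib1
modprobe bonding
ip link add bond0 type bond mode active-backup miimon 100
ip link set ib0 down && ip link set ib0 master bond0
ip link set ib1 down && ip link set ib1 master bond0
ip link set bond0 up
ip addr add 192.168.10.10/24 dev bond0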

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?


I do understand the device driver's dependency on the firmware, which is probably related to the fast development of new features in WinOF.

 

What I don't get is which capabilities might be missing in the CTX2 MT25408 chip (besides lower bandwidth) compared to, for example, the CTX3 MT27508, when most of the features added in previous WinOF releases are related not to the firmware but rather to the OS level.

 

So the question remains: what capabilities are missing that prevent further development of the MT25408 firmware?

--

 

Cheerio,

Lars.

Need Driver For: Isilon 415-0017-07 3000X Mellanox InfiniHost III


I need a driver for the Isilon 415-0017-07 3000X Mellanox InfiniHost III for Windows Server 2012 R2.

 

Thank You

Re: OpenStack Neutron ML2 w/ SR-IOV (VLAN): Can't ping VM IP


I've changed the single-node setup to a multi-node setup consisting of one controller+network node and one compute node. After stopping neutron-openvswitch-agent on the compute node, it works fine with SR-IOV+VLAN.

Re: ARP timeout on Mellanox IB Gateway SX6036G


Hi Ale,

 

There is a known issue with gratuitous ARP handling in gateway HA mode in code versions 3.4.0012 and below; we should release code with the fix soon (by the end of March).

For now I would suggest clearing the ARP entries from all of the gateways with the commands below:

 

 

To clear the entire ARP table for the proxy-arp interface:

 

 

switch# clear ip arp interface proxy-arp X

 

 

where X is the proxy-arp interface ID

 

 

Or, to clear a specific entry:

 

 

switch# clear ip arp 1.1.1.133

Re: ARP timeout on Mellanox IB Gateway SX6036G


Hi Eddie,

 

I'm looking forward to the release of the new version of MLNX-OS :-)

 

Thank you very much

 

Ale

Re: Installing OFED on Ubuntu 14.10 (problems flashing the network adapter firmware)


user@ivan-X7DWT:~$

user@ivan-X7DWT:~$ lspci |grep Mellanox | awk '{print $1}' | xargs -i -r lspci -s {} -xxxvvv

07:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev 20)

    Subsystem: Mellanox Technologies MT25204 [InfiniHost III Lx HCA]

    Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-

    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

    Interrupt: pin A routed to IRQ 30

    Region 0: Memory at d8a00000 (64-bit, non-prefetchable) [size=1M]

    Region 2: Memory at d8000000 (64-bit, prefetchable) [size=8M]

    Capabilities: <access denied>

00: b3 15 74 62 02 00 10 00 20 00 06 0c 08 00 00 00

10: 04 00 a0 d8 00 00 00 00 0c 00 00 d8 00 00 00 00

20: 00 00 00 00 00 00 00 00 00 00 00 00 b3 15 74 62

30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00

 

user@ivan-X7DWT:~$

user@ivan-X7DWT:~$

user@ivan-X7DWT:~$


MLNX_OFED_LINUX-2.4-1.0.4 - test problems


Hello,
I downloaded the most recent Mellanox OFED (MLNX_OFED_LINUX-2.4-1.0.4 (OFED-2.4-1.0.4)) yesterday for our new ConnectX-4 cards, and I am encountering problems when trying to run tests with the following tools: ib_read_bw, ib_read_lat, ib_send_bw, ib_send_lat, ib_write_bw, ib_write_lat.

Here are the error messages that I am getting for the ib_write_bw test, but it is the same for the other benchmarks:

[aotto@lab16 ~]$ib_write_bw

************************************

* Waiting for client to connect... *

************************************

---------------------------------------------------------------------------------------

                    RDMA_Write BW Test

Dual-port       : OFF Device         : mlx5_0

Number of qps   : 1 Transport type : IB

Connection type : RC Using SRQ      : OFF

CQ Moderation   : 100

Mtu             : 4096[B]

Link type       : IB

Max inline data : 0[B]

rdma_cm QPs : OFF

Data ex. method : Ethernet

---------------------------------------------------------------------------------------

local address: LID 0x02 QPN 0x002f PSN 0xc34b21 RKey 0x009f76 VAddr 0x007f11f9990000

remote address: LID 0x03 QPN 0x002f PSN 0x6b9d23 RKey 0x009363 VAddr 0x007fa679400000

---------------------------------------------------------------------------------------

#bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]

ethernet_read_keys: Couldn't read remote address

Unable to read to socket/rdam_cm

Failed to exchange data between server and clients

 

[aotto@lab17 ~]$ ib_write_bw -a lab16

---------------------------------------------------------------------------------------

                    RDMA_Write BW Test

Dual-port       : OFF Device         : mlx5_0

Number of qps   : 1 Transport type : IB

Connection type : RC Using SRQ      : OFF

TX depth        : 128

CQ Moderation   : 100

Mtu             : 4096[B]

Link type       : IB

Max inline data : 0[B]

rdma_cm QPs : OFF

Data ex. method : Ethernet

---------------------------------------------------------------------------------------

local address: LID 0x03 QPN 0x002f PSN 0x6b9d23 RKey 0x009363 VAddr 0x007fa679400000

remote address: LID 0x02 QPN 0x002f PSN 0xc34b21 RKey 0x009f76 VAddr 0x007f11f9990000

---------------------------------------------------------------------------------------

#bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]

2          5000             11.40              9.82     5.147955

4          5000             34.56              34.47     9.036457

8          5000             68.84              67.93     8.904261

16         5000             137.69             135.93   8.908591

32         5000             276.46             275.92   9.041272

64         5000             555.11             553.14   9.062630

128        5000             1105.84            1101.91   9.026843

256        5000             2203.01            2195.79   8.993952

512        5000             4272.00            4090.32   8.376984

1024       5000             6551.23            6244.85   6.394722

2048       5000             7748.52            7602.42   3.892439

4096       5000             8075.72            8067.92   2.065387

8192       5000             10593.14            10491.92   1.342965

16384      5000             10475.85            10382.35   0.664471

32768      5000             10469.75            10464.48   0.334863

65536      5000             10534.17            10533.56   0.168537

mlx5: lab17: got completion with error:

00000000 00000000 00000000 00000000

00000000 00000000 00000000 00000000

00000000 00000000 00000000 00000000

00000000 00008813 0800002f 408079d1

Problems with warm up


The test always stops when trying to send more than 65536 bytes. Do you know what the problem could be and how to solve it? Could it be related to the drivers?
If you need any more information, let me know and I will post it.

 

Thank you very much for your help.


Cheers,
Adam

Re: Installing OFED on Ubuntu 14.10 (problems flashing the network adapter firmware)

Re: MLNX_OFED install failed at Ubuntu 14.04.2


Can confirm this issue on Ubuntu 14.04:

 

2.4-1.0.4 with kernel 3.13 compiles

2.4-1.0.4 with kernel 3.16 fails on DKMS

2.3.1.0.1 with kernel 3.16 compiles

Re: ARP timeout on Mellanox IB Gateway SX6036G

Re: Receive Side Scaling


I am routing, so there are lots of IP/port pairs. In fact, I can see from /proc/interrupts that all queues are used.

The top utility is simply buggy in this case; perf, however, works correctly and shows the correct CPU utilisation.
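For anyone reproducing this, commands of the kind referred to above; the "mlx" device-name match is an assumption and depends on the driver in use.

grep -i mlx /proc/interrupts   # per-queue (MSI-X vector) interrupt counts
perf top                       # per-function CPU usage, independent of top's accounting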
