Channel: Mellanox Interconnect Community: Message List

power 740 over Voltaire Switch 4036-651E


Background:

I have an IBM Power 740 connected to a Voltaire 4036-651E switch, and I have installed the required software (such as uDAPL) by following the IBM Power Information Center:

jsycdccf1:/#lslpp -l bos.mp64 devices.chrp.IBM.lhca.rte devices.common.IBM.ib.rte udapl.rte

  Fileset                      Level  State      Description        

  ----------------------------------------------------------------------------

Path: /usr/lib/objrepos

  bos.mp64                   6.1.9.0  COMMITTED  Base Operating System 64-bit

                                                 Multiprocessor Runtime

  devices.chrp.IBM.lhca.rte  6.1.9.0  COMMITTED  Infiniband Logical HCA Runtime

                                                 Environment

  devices.common.IBM.ib.rte  6.1.9.0  COMMITTED  Infiniband Common Runtime

                                                 Environment

  udapl.rte                 6.1.8.18  COMMITTED  uDAPL

 

Path: /etc/objrepos

  bos.mp64                   6.1.9.0  COMMITTED  Base Operating System 64-bit

                                                 Multiprocessor Runtime

  devices.chrp.IBM.lhca.rte  6.1.9.0  COMMITTED  Infiniband Logical HCA Runtime

                                                 Environment

  devices.common.IBM.ib.rte  6.1.9.0  COMMITTED  Infiniband Common Runtime

                                                 Environment

  udapl.rte                 6.1.8.18  COMMITTED  uDAPL

 

 

 

Issue:

Now I have configured IP over the IB network devices:

jsycdccf1:/#ifconfig -a

en3: flags=5e080863,18c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>

        inet 10.40.8.19 netmask 0xffffff00 broadcast 10.40.8.255

         tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0

ib0: flags=e0a0023<UP,BROADCAST,NOTRAILERS,ALLCAST,MULTICAST,GROUPRT,64BIT>

        inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

ib1: flags=e0a0023<UP,BROADCAST,NOTRAILERS,ALLCAST,MULTICAST,GROUPRT,64BIT>

        inet 192.168.1.102 netmask 0xffffff00 broadcast 192.168.1.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

vi0: flags=84000041<UP,RUNNING,64BIT>

        inet 192.168.1.103 netmask 0xffffff00

        iflist : ib0 ib1

ib2: flags=e0a0023<UP,BROADCAST,NOTRAILERS,ALLCAST,MULTICAST,GROUPRT,64BIT>

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

 

When I ping the IP of another host, it reports:

jsycdccf1:/#ping 192.168.1.104

PING 192.168.1.104: (192.168.1.104): 56 data bytes

0821-069 ping: sendto: The network is not currently available.

ping: wrote 192.168.1.104 64 chars, ret=-1

0821-069 ping: sendto: The network is not currently available.

ping: wrote 192.168.1.104 64 chars, ret=-1

0821-069 ping: sendto: The network is not currently available.

ping: wrote 192.168.1.104 64 chars, ret=-1

 

 

Looking into it, I found the likely reason: the port state is stuck at Initialized:

jsycdccf1:/#ibstat  

 

===============================================================================

INFINIBAND DEVICE INFORMATION (iba0)

===============================================================================

PORT 1 (iba0)   Physical Port State: Initialized

PORT 2 (iba0)   Physical Port State:  Initialized

DMA Size = 0x00000000 80000000

jsycdccf1:/#

 

 

So can someone tell me what I can do about this issue? Thanks for your great support!


Re: power 740 over Voltaire Switch 4036-651E


Hi Zeyan Wei,

 

Please try enabling the subnet manager on one of the nodes or on the switch.

 

On the switch:

 

switch>enable

password = 123456

switch#

switch(config)#sm

switch(config-sm)#sm-info mode set enable
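
Once a subnet manager is running on the fabric, the port state on the AIX host should change from Initialized to Active. A quick way to confirm (expected output sketched here, not captured from a live system):

jsycdccf1:/#ibstat

PORT 1 (iba0)   Physical Port State: Active

PORT 2 (iba0)   Physical Port State: Active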

Re: IBM FlexSystem EN6131 40Gb ethernet switch and MIB


If you are able to log in to support.mellanox.com (for contracted and registered users), you will find it on the SX6036 product download page. If not, IBM support can probably provide it to you.

The MIB file is specific to each MellanoxOS version. I can probably upload the file for you, but I first need to know which MellanoxOS version your product is running (run "show version" in the CLI).

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?


Hi,

I am not sure I can get my hands on a list that diffs the two cards' features (besides bandwidth), but I know for a fact that there are quite a few differences, including offload engines and other scalability-related features.

Check out the product pages:
ConnectX2: Mellanox Products: ConnectX®-2 VPI with CORE-Direct Technology 

ConnectX3: Mellanox Products: ConnectX®-3 Single/Dual-Port Adapter with VPI 

 

I am sure that after reviewing them you will agree that there are new features, and they could not have been enabled without the hardware support.

 

 

Thanks! 

Re: MLNX_OFED install failed at Ubuntu 14.04.2

Re: MLNX_OFED_LINUX-2.4-1.0.4 - test problems

Re: IBM FlexSystem EN6131 40Gb ethernet switch and MIB


Thanks Yairi.

 

The MellanoxOS version is:

 

CPDASWETH01 [mlag-vip-domain-tfe: standby] # show version
Product name:      SX_PPC_M460EX
Product release:   SX_3.4.0008
Build ID:          #1-dev
Build date:        2014-11-10 20:07:43
Target arch:       ppc
Target hw:         m460ex
Built by:          jenkins@fit74
Version summary:   SX_PPC_M460EX SX_3.4.0008 2014-11-10 20:07:43 ppc

Product model:     ppc
Host ID: 0002C963D900

Uptime:            39d 19h 8m 53.120s
CPU load averages: 1.23 / 1.34 / 1.35
Number of CPUs:    1
System memory:     572 MB used / 1455 MB free / 2027 MB total
Swap: 0 MB used / 0 MB free / 0 MB total

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?


Looking through the product brief and feature summary list (hardware features), there are only two differences: ConnectX-3 supports Energy Efficient Ethernet, and ConnectX-2 lists 10Gb compatibility with no mention of 40Gb like the X3. EDIT: forgot to add that precision clock synchronization is also only in the X3. The two documents are almost identical; a few things moved about and a few things were removed due to duplication in the feature list. So, as Lars has said, there doesn't appear to be much difference between the X2 and X3 apart from speed, and the product briefs themselves say that for feature lists you should refer to the driver documentation.

 

"*This product brief describes all of the hardware features and capabilities. Please refer to the driver release notes on www.mellanox.com for feature availability."

 

So, as the hardware feature list is almost identical, all the additional features are implemented in the driver.

 

EDIT: Also the "silicon" documents are here:

 

http://www.mellanox.com/related-docs/prod_silicon/PB_ConnectX3_VPI_Silicon.pdf

http://www.mellanox.com/related-docs/prod_silicon/ConnectX-2_Silicon.pdf


Re: IBM FlexSystem EN6131 40Gb ethernet switch and MIB

Re: IBM FlexSystem EN6131 40Gb ethernet switch and MIB

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?


I'm also a bit worried about the general lifespan of the Mellanox adapter cards if they are to follow the same life cycle as CTX2, which was about five years. If our customers upgrade to CTX3 (CTX4 is overkill and far too expensive), technical support will only remain for one more year, i.e. until Q2 2016 next year.

 

Please correct me if I'm wrong....

Re: mst.sys driver error


Hi Erez

 

Is there any chance you can compare the file in your toolkit to the hash I provided, or provide me with a link to the toolkit?

 

 

Unfortunately, as the server (indeed the whole site) doesn't have any Mellanox equipment in it, I still don't know why this file has appeared on one of the servers. If it only serves to access hardware registers, it seems completely unneeded.

 

Can this file legitimately be bundled with other products? How many versions of the mst.sys file are there? I want to continue investigating the issue, as the file is so completely out of place on this site and was not knowingly installed, which is of serious concern.
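
For reference, a SHA-256 hash of the file can be generated on Windows with PowerShell; the path below is only an example of where mst.sys typically sits and would need to be adjusted to the actual location:

Get-FileHash -Algorithm SHA256 C:\Windows\System32\drivers\mst.sys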

I have two MCQH29-XDR mezzanine cards, in different servers, that suddenly do not appear in Device Manager or work at all.


I have two MCQH29-XDR mezzanine cards, in two different servers, that after a recent cold boot no longer appear in Device Manager.

 

I get no link lights and no response from any of the firmware tools with which I try to query them.

 

I tried to enable "live fish" mode by putting the ignore flash jumper on the card, but still didn't get any results, although there is limited documentation about how to use this.

At least one of these cards was definitely working fine, and I have swapped them between servers to try to isolate the problem. I have two other machines they could fit in (besides the two they are in now), but I haven't been able to try swapping them in there to see whether it is a PCI problem.

 

These cards are in Dell C6100 cloud servers.

 

I would think that if I could reflash the firmware, perhaps they would come back to life, but I don't think I can put the card in a standard PCI slot!
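
For what it is worth, the usual recovery flow with the Mellanox Firmware Tools on a Linux host would look roughly like the sketch below. It only helps if the card is at least visible to the tools in live-fish mode; the device name is hypothetical (mst status lists the real one) and the image file would have to be the correct firmware for the MCQH29-XDR PSID:

mst start

mst status

flint -d /dev/mst/mt26428_pci_cr0 query

flint -d /dev/mst/mt26428_pci_cr0 -i fw-MCQH29.bin burn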

 

Thanks for any ideas!

 

Will

Re: I have two MCQH29-XDR mezzanine cards, in different servers, that suddenly do not appear in Device Manager or work at all.


Also, as a follow-up, I have used the tool PCI-Z to examine the PCIe bus directly and compared with all the other servers; the only thing missing is the Mellanox card. It does not show up at all in the PCI-Z scan on the problem servers, while my single-port, standard PCI cards show up fine.

Re: Installing OFED on Ubuntu 14.10 (problems with the network adapter firmware)


root@ivan-X7DWT:~# lspci | grep Mellanox | awk '{print $1}' | xargs -i -r lspci -s {} -xxxvvv

07:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev 20)

            Subsystem: Mellanox Technologies MT25204 [InfiniHost III Lx HCA]

            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+

            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

            Latency: 0, Cache Line Size: 32 bytes

            Interrupt: pin A routed to IRQ 30

            Region 0: Memory at d8a00000 (64-bit, non-prefetchable) [size=1M]

            Region 2: Memory at d8000000 (64-bit, prefetchable) [size=8M]

            Capabilities: [40] Power Management version 2

                          Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)

                          Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-

            Capabilities: [48] Vital Product Data

                          No end tag found

            Capabilities: [90] MSI: Enable- Count=1/32 Maskable- 64bit+

                          Address: 0000000000000000  Data: 0000

           Capabilities: [84] MSI-X: Enable+ Count=32 Masked-

                          Vector table: BAR=0 offset=00082000

                          PBA: BAR=0 offset=00082200

           Capabilities: [60] Express (v1) Endpoint, MSI 00

                          DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 unlimited

                                         ExtTag+ AttnBtn- AttnInd- PwrInd- RBE- FLReset-

                          DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-

                                       RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-

                                       MaxPayload 128 bytes, MaxReadReq 512 bytes

                          DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-

                          LnkCap: Port #8, Speed 2.5GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited, L1 unlimited

                                         ClockPM- Surprise- LLActRep- BwNot-

                          LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-

                                         ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

                          LnkSta: Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-

            Kernel driver in use: ib_mthca

00: b3 15 74 62 06 04 10 00 20 00 06 0c 08 00 00 00

10: 04 00 a0 d8 00 00 00 00 0c 00 00 d8 00 00 00 00

20: 00 00 00 00 00 00 00 00 00 00 00 00 b3 15 74 62

30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00

40: 01 48 02 00 00 00 00 00 03 90 00 80 ff ff ff ff

50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

60: 10 00 01 00 20 0e 00 00 00 20 00 00 81 f4 03 08

70: 00 00 81 00 00 00 00 00 00 00 00 00 00 00 00 00

80: 00 00 00 00 11 60 1f 80 00 20 08 00 00 22 08 00

90: 05 84 8a 00 00 00 00 00 00 00 00 00 00 00 00 00

a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

 

 



Re: OmniOS + RSF-1 + InfiniBand


Additional info.

 

I have tested with Ethernet, same setup, just using Ethernet instead, and it works perfectly for failover: a few seconds with the datastore not contactable, and then it comes back online.

 

Other testing done with InfiniBand: when one pool/datastore is failed over to the other node, if it does come up it will usually keep working most of the time. But if you then fail over a second pool, that second pool will not work initially; if after around 5 minutes it does actually come up as visible from the other node, then the pool that had been running fine will almost always drop off and no longer be visible, even though nothing has actually been done with it. Then, sometimes after another 5 minutes or so, it will come back.

 

I thought that it might have been ARP updates causing the problem, but looking at the ARP tables on the ESXi servers shows that they are updating correctly.
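
(For reference, the ARP entries on an ESXi host can be listed from the host shell, for example:

esxcli network ip neighbor list

and that output showed the entries updating correctly.)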

Re: Why has support for ConnectX-2 cards been discontinued in WinOF 4.90?


Same here; I don't need more bandwidth than the X2, the cost of the newer cards is too high, and by the time the price drops low enough it looks like support will be ending soon, so there is no point in buying them.

I have a problem with my current install, which is in development, but as support looks like it will be a problem I don't think I will try hard to resolve it; I will probably scrap it and go with 10Gbit Ethernet instead, using Intel cards.

Re: Installing OFED on Ubuntu 14.10 (problems with the network adapter firmware)


I'm sorry, there is no serial number in the output, and I'm afraid it will be very difficult to figure out what type it is. Try other MFT tool versions available from Mellanox Technologies.
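
As a starting point, the board information (board ID / PSID, where present) can usually be read with a firmware query against the PCI address shown in the lspci output above; a minimal sketch, assuming the open-source mstflint package (or flint from the MFT bundle) is installed:

mstflint -d 07:00.0 query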

OpenStack Neutron ML2 w/ SR-IOV (VLAN): Can't ping VM IP


Hello,

 

I'm trying OpenStack Icehouse ML2 w/ SR-IOV (VLAN) on CentOS 7 and ConnectX-3 cards (40GbE):

 

(1) Mellanox-Neutron-Icehouse-Redhat-Ethernet - OpenStack 

(2) Nova-neutron-sriov - OpenStack

(3) Mellanox OFED Driver Installation and Configuration for SR-IOV

 

After setting up SR-IOV with ConnectX-3 while following (3), I also verified that VLAN traffic works between the systems.

 

I installed OpenStack Icehouse using packstack on a single node (all-in-one).

Then I modified all the configurations for SR-IOV while following (1), except /etc/neutron/dhcp_agent.ini.

In /etc/neutron/dhcp_agent.ini I changed the interface_driver from BridgeInterfaceDriver to OVSInterfaceDriver, as shown below.
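
For clarity, the resulting line in /etc/neutron/dhcp_agent.ini uses the standard OVS interface driver path:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver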

 

Even though the VM can get a DHCP IP, the host and the VM cannot ping each other.

 

[root@gpu6 ~(keystone_admin)]# cat /etc/modprobe.d/mlx4_core.conf

options mlx4_core port_type_array=2,2 num_vfs=16 probe_vf=0 enable_64b_cqe_eqe=0  log_num_mgm_entry_size=-1
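
(For completeness, the VFs and the VLAN/MAC assigned to the VF used by the VM can be checked on the host like this; "ens4" is only an assumption about which interface is the ConnectX-3 PF:

lspci | grep "Virtual Function"

ip link show ens4

The second command lists each vf with its MAC and vlan; the MAC should line up with the sriov_port entry shown further below.)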

 

[root@gpu6 ~(keystone_admin)]# neutron net-list

+--------------------------------------+--------+-----------------------------------------------------+

| id                                   | name   | subnets                                             |

+--------------------------------------+--------+-----------------------------------------------------+

| 1c555886-f026-4727-a2e6-99913e383bf2 | net40g | afdeec0e-6b9f-421a-9a5b-421a77c283d8 192.168.2.0/24 |

+--------------------------------------+--------+-----------------------------------------------------+

[root@gpu6 ~(keystone_admin)]# neutron subnet-list

+--------------------------------------+-------------+----------------+--------------------------------------------------+

| id                                   | name        | cidr           | allocation_pools                                 |

+--------------------------------------+-------------+----------------+--------------------------------------------------+

| afdeec0e-6b9f-421a-9a5b-421a77c283d8 | demo-subnet | 192.168.2.0/24 | {"start": "192.168.2.2", "end": "192.168.2.254"} |

+--------------------------------------+-------------+----------------+--------------------------------------------------+

[root@gpu6 ~(keystone_admin)]# neutron port-list

+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------------+

| id                                   | name       | mac_address       | fixed_ips                                                                          |

+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------------+

| 385600c0-fafa-4e15-b0b4-83f780e26daf |            | fa:16:3e:ce:2b:5f | {"subnet_id": "afdeec0e-6b9f-421a-9a5b-421a77c283d8", "ip_address": "192.168.2.2"} |

| 9a291386-c020-4cfd-9e11-bc98fa418566 |            | fa:16:3e:90:d8:bc | {"subnet_id": "afdeec0e-6b9f-421a-9a5b-421a77c283d8", "ip_address": "192.168.2.1"} |

| e0f81bbb-2da3-4ba0-9bba-3f90a79fd9a7 | sriov_port | fa:16:3e:8b:83:76 | {"subnet_id": "afdeec0e-6b9f-421a-9a5b-421a77c283d8", "ip_address": "192.168.2.7"} |

+--------------------------------------+------------+-------------------+------------------------------------------------------------------------------------+

 

[root@gpu6 ~(keystone_admin)]# ip netns

qdhcp-1c555886-f026-4727-a2e6-99913e383bf2

qrouter-4d297bce-3888-4036-9b63-e61028f9ff8f

[root@gpu6 ~(keystone_admin)]# ip netns exec qdhcp-1c555886-f026-4727-a2e6-99913e383bf2 ping -c1 192.168.2.2

PING 192.168.2.2 (192.168.2.2) 56(84) bytes of data.

64 bytes from 192.168.2.2: icmp_seq=1 ttl=64 time=0.027 ms

--- 192.168.2.2 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms

[root@gpu6 ~(keystone_admin)]# ip netns exec qdhcp-1c555886-f026-4727-a2e6-99913e383bf2 ping -c1 192.168.2.1

PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.495 ms

--- 192.168.2.1 ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms

[root@gpu6 ~(keystone_admin)]# ip netns exec qdhcp-1c555886-f026-4727-a2e6-99913e383bf2 ping -c1 192.168.2.7

PING 192.168.2.7 (192.168.2.7) 56(84) bytes of data.

^C

--- 192.168.2.7 ping statistics ---

1 packets transmitted, 0 received, 100% packet loss, time 0ms

 

 

The VM also cannot ping 192.168.2.1 or 192.168.2.2; it can only reach its own IP, 192.168.2.7.

The VM's lspci output is as follows:

00:04.0 Network controller [0280]: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function] [15b3:1004]

00:05.0 Network controller [0280]: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function] [15b3:1004]

 

 

[root@gpu6 ~(keystone_admin)]# ovs-vsctl show

af9350bf-af96-4fac-adf5-0cd665e1215e

...

    Bridge br-int

        fail_mode: secure

        Port "qr-9a291386-c0"

            tag: 1

            Interface "qr-9a291386-c0"

                type: internal

        Port int-br-ex

            Interface int-br-ex

        Port "int-br-ens4"

            Interface "int-br-ens4"

        Port "tap385600c0-fa"

            tag: 1

            Interface "tap385600c0-fa"

                type: internal

        Port br-int

            Interface br-int

                type: internal

    Bridge "br-ens4"

        Port "br-ens4"

            Interface "br-ens4"

                type: internal

        Port "ens4"

            Interface "ens4"

        Port "phy-br-ens4"

            Interface "phy-br-ens4"

    ovs_version: "2.1.3"

 

[root@gpu6 ~(keystone_admin)]# ovs-ofctl show br-int

OFPT_FEATURES_REPLY (xid=0x2): dpid:0000029b05424542

n_tables:254, n_buffers:256

capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP

actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE

25(tap385600c0-fa): addr:00:00:00:00:00:00

     config:     PORT_DOWN

     state:      LINK_DOWN

     speed: 0 Mbps now, 0 Mbps max

26(qr-9a291386-c0): addr:00:00:00:00:00:00

     config:     PORT_DOWN

     state:      LINK_DOWN

     speed: 0 Mbps now, 0 Mbps max

29(int-br-ex): addr:ee:06:9e:4b:9e:62

     config:     0

     state:      0

     current:    10GB-FD COPPER

     speed: 10000 Mbps now, 0 Mbps max

30(int-br-ens4): addr:6e:aa:42:99:af:d2

     config:     0

     state:      0

     current:    10GB-FD COPPER

     speed: 10000 Mbps now, 0 Mbps max

LOCAL(br-int): addr:02:9b:05:42:45:42

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

 

[root@gpu6 ~(keystone_admin)]# ovs-ofctl show br-ens4

OFPT_FEATURES_REPLY (xid=0x2): dpid:000024be05820470

n_tables:254, n_buffers:256

capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP

actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE

1(ens4): addr:24:be:05:82:04:70

     config:     0

     state:      0

     current:    AUTO_NEG

     advertised: AUTO_NEG AUTO_PAUSE

     supported:  FIBER AUTO_NEG AUTO_PAUSE AUTO_PAUSE_ASYM

     speed: 0 Mbps now, 0 Mbps max

11(phy-br-ens4): addr:b6:07:55:f6:42:7c

     config:     0

     state:      0

     current:    10GB-FD COPPER

     speed: 10000 Mbps now, 0 Mbps max

LOCAL(br-ens4): addr:24:be:05:82:04:70

     config:     0

     state:      0

     speed: 0 Mbps now, 0 Mbps max

OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

 

Could you please let me know what I should check for this problem?

Re: OmniOS + RSF-1 + InfiniBand


I assume the failover is triggered by ARP calls from RSF? This would explain why Ethernet works as expected.

 

If this is the case, I am not sure whether a solution exists.

 

You may need to investigate a different storage solution such as Ceph, which will be able to handle failures itself and also support the infiniband fabric.
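
(If you want to test that theory, one option would be to watch for the gratuitous ARP on the passive node while a failover happens; a rough sketch on OmniOS, with a hypothetical IPoIB interface name:

snoop -d ibp0 arp

If the ARP announcements never arrive over IPoIB, that would point at ARP/multicast handling on the fabric rather than at RSF-1 itself.)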
