Mellanox Interconnect Community: Message List

UFM hostname change DNS


Does UFM use DNS or IP to communicate with the switches?

 

Thanks in advance


Re: UFM hostname change DNS


There are two layers of communication:

For in-band communication and statistics, UFM sends standard MADs and receives specific InfiniBand traps in return.

For management purposes, Mellanox switches have an embedded UFM agent. The UFM server sends a multicast notification to the multicast address 224.0.23.172, port 6306. The switch management replies to UFM (via port 6306) with a unicast message that contains the relevant information about the switch (IP, GUID, firmware, etc.).

Afterwards, the information is exchanged via XML.
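(As an aside, if you want to see this exchange on the wire, a capture along these lines on the UFM server should show both the multicast notification and the agents' unicast replies; the interface name eth0 is just an assumption about your setup.)

# Capture the UFM discovery multicast and the switch agents' replies on UDP port 6306
tcpdump -i eth0 -n 'udp port 6306'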

Re: mst.sys driver error


Is there anyone who can help with my follow-up query?

We have another server that started reporting the same issue and had the file appear on it last week; after that, the server started crashing repeatedly. It appears your file is being used for "bad things" of some description, so I'd like to confirm hashes and the like.
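(In case it is useful, this is the sort of check I mean; the path below is an assumption about where mst.sys ends up, so adjust it to wherever the file appeared on your server.)

REM Print the SHA-256 of the driver file so it can be compared against a known-good copy
certutil -hashfile C:\Windows\System32\drivers\mst.sys SHA256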

 

thanks

OpenStack JUNO


Hi, I'm Vincenzo Greco and I've installed OpenStack (Juno) at my company, with 5 compute nodes, 1 network node, 1 controller node and 1 storage node, all running CentOS 7. All nodes have InfiniBand and they work perfectly. My question is: is it possible to virtualize the instance interface with InfiniBand? And, if it is possible, how can I do it? Thank you so much for your attention and for your work!

Re: Can I mix 10GigE and IB on an ESXi (v5.1 u1) host?


Hi! I had the same problem and have given up on it for now!

There is a compatibility issue with the core driver.

Both IB and ETH require the Mellanox core driver, but the versions differ and are not compatible with each other. If Mellanox releases a common core driver for both IB and ETH, then it should be possible... :)
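(For what it's worth, you can see which Mellanox driver packages are actually installed on the host with something like the command below, run from the ESXi shell.)

# List the installed VIBs and filter for the Mellanox ones
esxcli software vib list | grep -i mellanox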

Re: Port 1 IB Port 2 Ethernet - ESXI 5.1 (5.5)


Hi, everybody!

I found that the ESXi MFT tools have a change-port-mode capability, but it supports only ConnectX-3 or above. There is nothing that can be done to get Ethernet working on the 2nd port with the 1.8.2.4 VPI driver. I don't know why the mlnx-config function is included in the ESXi MFT tools at all.

Is there anyone who has tried mlnx-config with the old 1.6.2 Ethernet driver?

nbdx - data mismatch


Hello,

 

I installed the MLNX_OFED, Accelio and nbdx packages. My card is a ConnectX-3 Pro.

I can see /dev/nbdx0 from the client and can issue READ/WRITE I/O against /dev/nbdx0.

However, I get different data back when I read what I had written to /dev/nbdx0.

You wouldn't notice this problem if you just run an "fio" test, because "fio" does not check data integrity by default.
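(For anyone who wants to reproduce this, a minimal fio job along these lines does verify the data it wrote when it reads it back; the block size and size are just example values.)

[nbdx-verify]
filename=/dev/nbdx0
rw=write
bs=4k
size=1g
ioengine=libaio
direct=1
verify=crc32c
do_verify=1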

 

Has anyone seen this problem?

 

Thanks,

Ted

Re: mst.sys driver error


This can happen if you (accidentally) install the 32-bit version of Dell OpenManage Server Administrator on a server running 64-bit Windows Server 2008 R2. Ask me how I know :-)


Are there any plans for an Illumos driver?


Illumos is increasing in popularity, with currently available distributions such as:

 

SmartOS by Joyent - https://smartos.org/

OpenIndiana

NexentaStor (CE and Enterprise Edition) by Nexenta

and others - see the Distributions page on the illumos wiki

 

My use case, specifically, is SmartOS and Nexenta: I use SmartOS for virtualization and Nexenta for NAS file storage.

 

Our internal network is built with Mellanox switches, specifically SX1012s. With today's growing need and desire for high-performance, low-latency bandwidth, it won't be long before the 10G cards currently in use are maxed out. It would be ideal if an Illumos driver for the Mellanox ConnectX-3 cards were available. Once a driver is made available and public, it can be certified by Illumos and by companies such as Nexenta.

 

This is a huge opportunity for Mellanox and its customers, as it could corner the Illumos market. Currently there are no 40G solutions for Illumos, and specifically for Nexenta. Mellanox stands to gain a lot of additional traction in the storage market, and I know I would be very happy to introduce 40G to our NAS appliances.

 

Thank you for your time.

 

Best Regards,

Kyle

Re: Are there any plans for an Illumos driver?


I'd like that as well.

 

I would not get your hopes up for Nexenta support, though. Even the old ConnectX-2 cards had their drivers removed from Nexenta v4 (and were never officially supported in the first place, even in v3). Nexenta did promise support for them in v4 back in 2013, but apparently that was scrapped and they never delivered.

Learning about Infiniband switches - Advice Please


Hello,

 

I'm putting in (or planning on putting in) a Ceph cluster alongside my Proxmox (virtualization hypervisor) in my home network. Between my 3 Ceph servers I want to use InfiniBand, and I saw some decent prices on eBay. I'd like to learn more about the types of switches available, and I'm hoping this community can provide some tips.

 

I'm looking at using DDR (20 Gb/s) NICs in my servers. Are there 8-port InfiniBand switches you can suggest I look for on eBay that would work? Something that isn't too loud. I see lots of larger 24-port switches, but for my small setup 8 ports would be more than enough.

I also found a Flextronics F-X430066 8-port 4X SDR InfiniBand external switch for a great price, but I'm assuming that this switch would limit me to 10 Gb/s... is this a correct assumption?
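(The arithmetic behind my assumption: a 4X link is 4 lanes, SDR signals at 2.5 Gb/s per lane, so 4 x 2.5 = 10 Gb/s, whereas DDR signals at 5 Gb/s per lane, giving 4 x 5 = 20 Gb/s, both before 8b/10b encoding overhead.)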

 

Any advice you can give would be greatly appreciated.

 

Thank you,

FreeBSD 10.1 and 40GbE


I have a pair of SuperMicro servers, each with dual-port ConnectX-3 EN cards. Each of these servers is connected to an SX1012, and between the SX1012s is a 40Gb trunk, all using Mellanox cabling. The OS/cards report their own link is up at 40Gb, so in theory there is a clean 40Gb path end to end, but results with netperf and iperf are telling me anything but.

Despite disabling PF, dialing up socket buffers, TCP window sizes, minimum segment sizes and the stack send queue, and forcing interrupt CPU affinity, the best I can pull out of these cards with a 1500 MTU is about 3.5Gb/s. Only bumping to a 9000 MTU, and setting all access and trunk interfaces on the SX1012s to the same, gets me a jump in performance. However, that still only brings me a consistent maximum throughput of 14Gb/s.

For comparison, I have four other SuperMicro servers with Intel 82599 10Gb interfaces. Those are connected to the same SX1012 switches using the 10/40 fanout cables, and with nowhere near the same amount of goofing around or crazy network stack settings they're each able to push a near-wire speed of 9.3Gb/s on a stock MTU of 1500.
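(For reference, a sketch of the kind of tuning I mean; the sysctl values, interface name and peer address below are illustrative, not my exact settings.)

# /etc/sysctl.conf style socket-buffer tuning
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=1048576
net.inet.tcp.recvspace=1048576

# Jumbo frames on the ConnectX-3 EN interface (mlxen0 here; adjust to your device)
ifconfig mlxen0 mtu 9000

# Throughput test with several parallel streams
iperf -c 10.0.0.2 -P 4 -t 30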

 

The ConnectX-3 cards have been flashed to the 2.1.5 firmware and drivers. The SX1012 switches are running the latest 3.4.1120 MLNX-OS.

 

The throughput numbers are so odd they don't really give me much of a clue what is going on. They're certainly not secretly negotiating 10Gb while reporting 40. Looking at the SX1012 side I don't see any errors or discards occurring on the relevant interfaces.

Re: Are there any plans for an Illumos driver?


We'll see, I suppose. I also submitted the request to Nexenta, who have filed it internally as: NEX-3382 - Feature Request: 40G Card Support on the NexentaStor/HCL

LEDs of switch and adapters not lighting up.


Hi,

 

Configuration:

Host: RHEL7.0

Mellanox Card: ConnectX-3 Pro FDR InfiniBand + 40GigE (MCX354A-FCCT)

Mellanox Driver: MLNX_OFED_LINUX-2.4-1.0.4-rhel7.0-x86_64

Mellanox Switch: SX1012

 

I have one server connected to the switch with the above configuration. After installing the OFED driver, I set the port type of both ports to "eth" and configured an IP for the port. However, I am not able to see the LEDs light up on either the switch or the adapter side. I have a few observations below. Can someone please take a look and give a suggestion?
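(For reference, one way to put the ports into Ethernet mode and assign an address looks like the sketch below; the mst device name and the IP address are assumptions, not necessarily what was used verbatim.)

# Set both ports to Ethernet mode (2 = ETH), then reload the driver / reboot
mst start
mlxconfig -d /dev/mst/mt4103_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# Assign an address and bring the interface up
ip addr add 192.168.10.5/24 dev enp26s0
ip link set enp26s0 up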

 

=================

1. Adapters ports status is DOWN:

 

 

[root@RHEL7-R54-U5 ~]# connectx_port_config -s

--------------------------------

Port configuration for PCI device: 0000:1a:00.0 is:

eth

eth

 

[root@RHEL7-R54-U5 ~]# cat /sys/class/infiniband/mlx4_0/ports/1/state

1: DOWN

 

[root@RHEL7-R54-U5 ~]# cat /sys/class/infiniband/mlx4_0/ports/2/state

1: DOWN

 

[root@RHEL7-R54-U5 ~]# ibv_devinfo

hca_id: mlx4_0

        transport:                      InfiniBand (0)

        fw_ver:                         2.33.5000

        node_guid:                      f452:1403:00f4:f8c0

        sys_image_guid:                 f452:1403:00f4:f8c3

        vendor_id:                      0x02c9

        vendor_part_id:                 4103

        hw_ver:                         0x0

        board_id:                       MT_1090111019

        phys_port_cnt:                  2

                port:   1

                        state:                  PORT_DOWN (1)

                        max_mtu:                4096 (5)

                        active_mtu:             1024 (3)

                        sm_lid:                 0

                        port_lid:               0

                        port_lmc:               0x00

                        link_layer:             Ethernet

 

                port:   2

                        state:                  PORT_DOWN (1)

                        max_mtu:                4096 (5)

                        active_mtu:             1024 (3)

                        sm_lid:                 0

                        port_lid:               0

                        port_lmc:               0x00

                        link_layer:             Ethernet

 

 

 

 

[root@RHEL7-R54-U5 ~]# ethtool enp26s0

Settings for enp26s0:

        Supported ports: [ FIBRE ]

        Supported link modes:   1000baseKX/Full

                                10000baseKX4/Full

                                40000baseCR4/Full

                                40000baseSR4/Full

        Supported pause frame use: Symmetric Receive-only

        Supports auto-negotiation: Yes

        Advertised link modes:  40000baseCR4/Full

                                40000baseSR4/Full

        Advertised pause frame use: Symmetric

        Advertised auto-negotiation: Yes

        Speed: Unknown!

        Duplex: Unknown! (255)

        Port: FIBRE

        PHYAD: 0

        Transceiver: internal

        Auto-negotiation: off

        Supports Wake-on: d

        Wake-on: d

        Current message level: 0x00000014 (20)

                               link ifdown

        Link detected: no

 

 

2. The switch port's admin state is Enabled but its operational state is Down:

 

MLNX-R54-U15-40G [standalone: master] # show interfaces ethernet 1/1

 

Eth1/1

 

  Admin state: Enabled

  Operational state: Down

  Description: N\A

  Mac address: f4:52:14:65:55:06

  MTU: 1500 bytes(Maximum packet size 1522 bytes)

  Flow-control: receive on send on

  Actual speed: 40 Gbps

  Width reduction mode: disabled

  Switchport mode: access

  Last clearing of "show interface" counters : Never

  60 seconds ingress rate: 0 bits/sec, 0 bytes/sec, 0 packets/sec

  60 seconds egress rate: 0 bits/sec, 0 bytes/sec, 0 packets/sec

 

Rx

  0                    packets

  0                    unicast packets

  0                    multicast packets

  0                    broadcast packets

  0                    bytes

  0                    error packets

  0                    discard packets

 

Tx

  0                    packets

  0                    unicast packets

  0                    multicast packets

  0                    broadcast packets

  0                    bytes

  0                    discard packets

 

 

=================

Thanks!

Re: LEDs of switch and adapters not lighting up.


Hi Komal,

 

Can you try the following:

 

ip link set enp26s0 up
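(If the port still shows DOWN after that, it may also be worth checking what the driver reports from the host side; enp26s0 is the interface name from your output.)

ethtool enp26s0

dmesg | grep -i mlx4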


QSFP+ connection to IBM Flex System EN4093R 40/10G switches


Greetings,

 

I am working on a solution that includes an interconnection between 2x SX1710 and 2x IBM EN4093R Flex System switches using 40G QSFP+ over a distance of at most 10 meters, utilizing MLAG on the SX1710s and VLAG on the IBM EN4093R switches.

I am asking for the correct P/N of the QSFP+ transceiver to fit into the SX1710, and the P/N of the fiber cable too, since DAC will not work in this case. Additionally, I want to confirm the support of PVRST+ on the SX1710.

 

thank you

 

Re: Installation of 4.60 fails in German version of 2008 R2 x64


Hi,

 

I have the same problem but with English 2008 R2. I have also tried Windows 7, but with no luck (same issues).

Does anybody know the resolution for this problem?

 

Kind regards,

 

Milan

Re: LEDs of switch and adapters not lighting up.


Hi Eddie,

 

I tried bringing the link up with the command you mentioned above. Still, the port status is PORT_DOWN.

Re: LEDs of switch and adapters not lighting up.


Hi Komal,

 

What switch software are you using?

Run "show version concise".

What are the cable types and models?

Run (from enable mode) "show interfaces ethernet transceiver".

Re: LEDs of switch and adapters not lighting up.



Hi Eddie,

 

Thanks for looking into it.

 

I am using a fibre optic cable (rated for speeds up to 56GbE). I could see the LED light up on one port after I reconnected the cables, but the other port (enp26s0d1) is still facing the issue; its port state is still down.

Here are the answers to your questions:

 

1.

MLNX-R54-U15-40G [standalone: master] # show version concise

SX_PPC_M460EX SX_3.3.5006 2014-05-20 12:19:44 ppc

 

2.

MLNX-R54-U15-40G [standalone: master] # show interfaces ethernet transceiver

List of transceiver details for all ports in the switch:

Port 1/1 state

        identifier             : QSFP+

        cable/ module type     : Optical cable/ module

        ethernet speed and type: 56GigE

        vendor                 : Mellanox

        cable length           : 10m

        part number            : MC220731V-010

        revision               : A1

        serial number          : DC321400013

 

Port 1/2 state

        identifier             : QSFP+

        cable/ module type     : Optical cable/ module

        ethernet speed and type: 56GigE

        vendor                 : Mellanox

        cable length           : 10m

        part number            : MC220731V-010

        revision               : A1

        serial number          : DC321400004

3. Port status:

[root@RHEL7-R54-U13 ~]# ibv_devinfo

hca_id: mlx4_0

        transport:                      InfiniBand (0)

        fw_ver:                         2.33.5000

        node_guid:                      f452:1403:00f4:f560

        sys_image_guid:                 f452:1403:00f4:f563

        vendor_id:                      0x02c9

        vendor_part_id:                 4103

        hw_ver:                         0x0

        board_id:                       MT_1090111019

        phys_port_cnt:                  2

                port:   1

                        state:                  PORT_ACTIVE (4)

                        max_mtu:                4096 (5)

                        active_mtu:             1024 (3)

                        sm_lid:                 0

                        port_lid:               0

                        port_lmc:               0x00

                        link_layer:             Ethernet

 

                port:   2

                       state:                  PORT_DOWN (1)

                        max_mtu:                4096 (5)

                        active_mtu:             1024 (3)

                        sm_lid:                 0

                        port_lid:               0

                        port_lmc:               0x00

                        link_layer:             Ethernet

 

[root@RHEL7-R54-U13 ~]#
