Channel: ProLiant Servers (ML,DL,SL) topics

P420i in Gen8 DL380p with no FBWC


Hi there. We are setting up a Gen8 DL380p for Windows 2008 R2 and have resolved the P420i HBA/RAID mode question. Now we would like to maximise our flexibility with our assortment of HP SFF HDDs. We have pairs of drives in different sizes, from 146GB and 300GB up to 1TB, so we can RAID 0 them in pairs, which will suit us. Without FBWC, however, we have come up against some hardware limitations.

1. Without FBWC, is there a hardware limit of a maximum of 2 logical drives? Can this be extended in any way without adding an FBWC?

2. Can we mix drives of different sizes into a RAID 0 so that we can utilise all 8 drives we have (paired drives in 4 different sizes)?

3. We have 2 RAID 0 logical drives using the 300GB and 1TB drives. Can the remaining 4 unassigned drives be made HBA-accessible to the Windows 2008 R2 OS (i.e. a hybrid RAID/HBA mix where the unassigned drives are visible to the OS)?

Appreciate any response. Thanks.
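For what it's worth, once the drives are visible to the controller, matched pairs can be scripted into RAID 0 logical drives from the OS with the Smart Storage Administrator CLI. A minimal sketch, assuming ssacli (or the older hpssacli) is installed, the P420i is in slot 0, and the port:box:bay IDs are placeholders for your actual drive locations:

# list physical drives with their port:box:bay IDs
ssacli ctrl slot=0 pd all show

# create one RAID 0 logical drive from a matched pair (IDs are examples)
ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=0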


bnx2x driver crash


We have 2 interfaces configured with bonding, and we are seeing the bnx2x driver crash in dmesg:

 

9450:[21631713.887893] bnx2x: [bnx2x_attn_int_deasserted3:4297(eno49)]MC assert!
9451:[21631713.887954] bnx2x: [bnx2x_mc_assert:716(eno49)]XSTORM_ASSERT_LIST_INDEX 0x2
9452:[21631713.888014] bnx2x: [bnx2x_mc_assert:732(eno49)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0xc9e0c9e0 0x2e382e38 0x00010057
9453:[21631713.888107] bnx2x: [bnx2x_mc_assert:746(eno49)]Chip Revision: everest3, FW Version: 7_10_51
9454:[21631713.888171] bnx2x: [bnx2x_attn_int_deasserted3:4303(eno49)]driver assert
9455:[21631713.888224] bnx2x: [bnx2x_panic_dump:914(eno49)]begin crash dump -----------------
9456:[21631713.888284] bnx2x: [bnx2x_panic_dump:924(eno49)]def_idx(0x34c0)  def_att_idx(0x3292)  attn_state(0x1)  spq_prod_idx(0xd8) next_stats_cnt(0x34b1)
9457:[21631713.888382] bnx2x: [bnx2x_panic_dump:929(eno49)]DSB: attn bits(0x0)  ack(0x1)  id(0x0)  idx(0x3292)
9458:[21631713.888452] bnx2x: [bnx2x_panic_dump:930(eno49)]     def (0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x7f3f 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0)  igu_sb_id(0x0)  igu_seg_id(0x1) pf_id(0x0)  vnic_id(0x0)  vf_id(0xff)  vf_valid (0x0) state(0x1)
9459:[21631713.888667] bnx2x: [bnx2x_panic_dump:981(eno49)]fp0: rx_bd_prod(0x829b)  rx_bd_cons(0xd4)  rx_comp_prod(0xd36b)  rx_comp_cons(0xd19f)  *rx_cons_sb(0xd19f)
9460:[21631713.888769] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0x32c0)  last_max_sge(0x2edf)  fp_hc_idx(0xffe9)
9461:[21631713.888847] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp0: tx_pkt_prod(0x244e)  tx_pkt_cons(0x244e)  tx_bd_prod(0xc459)  tx_bd_cons(0xc458)  *tx_cons_sb(0x244e)
9462:[21631713.888950] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp0: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9463:[21631713.889043] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp0: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9464:[21631713.889137] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0xffe9 0x0)
9465:[21631713.889198] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9477:[21631713.889284] bnx2x: [bnx2x_panic_dump:981(eno49)]fp1: rx_bd_prod(0x9a7a)  rx_bd_cons(0x8b3)  rx_comp_prod(0x845c)  rx_comp_cons(0x8290)  *rx_cons_sb(0x8290)
9478:[21631713.889385] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0x7440)  last_max_sge(0x7060)  fp_hc_idx(0x3fc7)
9479:[21631713.889464] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp1: tx_pkt_prod(0xbec5)  tx_pkt_cons(0xbec3)  tx_bd_prod(0x1d85)  tx_bd_cons(0x1d80)  *tx_cons_sb(0xbec3)
9480:[21631713.889579] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp1: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9481:[21631713.889674] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp1: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9482:[21631713.889769] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0x3fc7 0x0)
9483:[21631713.889830] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9495:[21631713.889920] bnx2x: [bnx2x_panic_dump:981(eno49)]fp2: rx_bd_prod(0xd5dc)  rx_bd_cons(0x417)  rx_comp_prod(0xfad5)  rx_comp_cons(0xf909)  *rx_cons_sb(0xf909)
9496:[21631713.890020] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0xb640)  last_max_sge(0xb27c)  fp_hc_idx(0x39b9)
9497:[21631713.890097] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp2: tx_pkt_prod(0xf5eb)  tx_pkt_cons(0xf5eb)  tx_bd_prod(0x1bb4)  tx_bd_cons(0x1bb3)  *tx_cons_sb(0xf5eb)
9498:[21631713.890200] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp2: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9499:[21631713.892868] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp2: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9500:[21631713.898213] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0x39b9 0x0)
9501:[21631713.898270] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9513:[21631713.901029] bnx2x: [bnx2x_panic_dump:981(eno49)]fp3: rx_bd_prod(0xebd0)  rx_bd_cons(0xa0b)  rx_comp_prod(0x8304)  rx_comp_cons(0x8137)  *rx_cons_sb(0x8137)
9514:[21631713.906471] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0x480)  last_max_sge(0xbd)  fp_hc_idx(0x5495)
9515:[21631713.909290] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp3: tx_pkt_prod(0x72fd)  tx_pkt_cons(0x72fd)  tx_bd_prod(0xf310)  tx_bd_cons(0xf30f)  *tx_cons_sb(0x72fd)
9516:[21631713.914996] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp3: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9517:[21631713.920825] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp3: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9518:[21631713.926823] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0x5495 0x0)
9519:[21631713.926882] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9531:[21631713.929931] bnx2x: [bnx2x_panic_dump:981(eno49)]fp4: rx_bd_prod(0xca60)  rx_bd_cons(0x899)  rx_comp_prod(0xc0b8)  rx_comp_cons(0xbeec)  *rx_cons_sb(0xbeec)
9532:[21631713.936006] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0x80)  last_max_sge(0xfc95)  fp_hc_idx(0xc1df)
9533:[21631713.939113] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp4: tx_pkt_prod(0xd736)  tx_pkt_cons(0xd736)  tx_bd_prod(0xe7f5)  tx_bd_cons(0xe7f4)  *tx_cons_sb(0xd736)
9534:[21631713.945239] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp4: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9535:[21631713.951427] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp4: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9536:[21631713.957612] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0xc1df 0x0)
9537:[21631713.957670] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9549:[21631713.960802] bnx2x: [bnx2x_panic_dump:981(eno49)]fp5: rx_bd_prod(0xece6)  rx_bd_cons(0xb1f)  rx_comp_prod(0x7e6d)  rx_comp_cons(0x7ca1)  *rx_cons_sb(0x7ca1)
9550:[21631713.967229] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0x3c80)  last_max_sge(0x38a2)  fp_hc_idx(0xa03f)
9551:[21631713.970327] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp5: tx_pkt_prod(0x2636)  tx_pkt_cons(0x2636)  tx_bd_prod(0xccc5)  tx_bd_cons(0xccc4)  *tx_cons_sb(0x2636)
9552:[21631713.976421] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp5: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9553:[21631713.982533] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp5: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9554:[21631713.988724] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0xa03f 0x0)
9555:[21631713.988782] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9567:[21631713.991897] bnx2x: [bnx2x_panic_dump:981(eno49)]fp6: rx_bd_prod(0x8c6)  rx_bd_cons(0x6ff)  rx_comp_prod(0x747a)  rx_comp_cons(0x72ae)  *rx_cons_sb(0x72ae)
9568:[21631713.998043] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0xb00)  last_max_sge(0x734)  fp_hc_idx(0xce5d)
9569:[21631714.001139] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp6: tx_pkt_prod(0x78bc)  tx_pkt_cons(0x78bc)  tx_bd_prod(0x397e)  tx_bd_cons(0x397d)  *tx_cons_sb(0x78bc)
9570:[21631714.007243] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp6: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9571:[21631714.013360] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp6: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9572:[21631714.019560] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0xce5d 0x0)
9573:[21631714.019622] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9585:[21631714.022739] bnx2x: [bnx2x_panic_dump:981(eno49)]fp7: rx_bd_prod(0x2faa)  rx_bd_cons(0xde3)  rx_comp_prod(0x6ab9)  rx_comp_cons(0x68ed)  *rx_cons_sb(0x68ed)
9586:[21631714.028892] bnx2x: [bnx2x_panic_dump:984(eno49)]     rx_sge_prod(0xf140)  last_max_sge(0xed6b)  fp_hc_idx(0x1085)
9587:[21631714.031994] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp7: tx_pkt_prod(0x2e3d)  tx_pkt_cons(0x2e38)  tx_bd_prod(0xc9ea)  tx_bd_cons(0xc9dd)  *tx_cons_sb(0x2e38)
9588:[21631714.038096] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp7: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9589:[21631714.044216] bnx2x: [bnx2x_panic_dump:1001(eno49)]fp7: tx_pkt_prod(0x0)  tx_pkt_cons(0x0)  tx_bd_prod(0x0)  tx_bd_cons(0x0)  *tx_cons_sb(0x0)
9590:[21631714.050421] bnx2x: [bnx2x_panic_dump:1012(eno49)]     run indexes (0x1085 0x0)
9591:[21631714.050479] bnx2x: [bnx2x_panic_dump:1018(eno49)]     indexes (
9603:[21631714.053632] bnx2x 0000:04:00.0 eno49: bc 7.13.23
9696:[21631714.063136] bnx2x: [bnx2x_mc_assert:716(eno49)]XSTORM_ASSERT_LIST_INDEX 0x2
9697:[21631714.066125] bnx2x: [bnx2x_mc_assert:732(eno49)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0xc9e0c9e0 0x2e382e38 0x00010057
9698:[21631714.069110] bnx2x: [bnx2x_mc_assert:746(eno49)]Chip Revision: everest3, FW Version: 7_10_51
9699:[21631714.072039] bnx2x: [bnx2x_panic_dump:1177(eno49)]end crash dump -----------------
9707:[21631736.644206] NETDEV WATCHDOG: eno49 (bnx2x): transmit queue 1 timed out
9741:[21631738.684604] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[0]: txdata->tx_pkt_prod(9299) != txdata->tx_pkt_cons(9294)
9743:[21631740.745792] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[1]: txdata->tx_pkt_prod(48843) != txdata->tx_pkt_cons(48835)
9744:[21631742.815793] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[2]: txdata->tx_pkt_prod(62970) != txdata->tx_pkt_cons(62955)
9745:[21631744.886766] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[3]: txdata->tx_pkt_prod(29448) != txdata->tx_pkt_cons(29437)
9747:[21631746.959711] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[4]: txdata->tx_pkt_prod(55138) != txdata->tx_pkt_cons(55094)
9748:[21631749.031377] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[5]: txdata->tx_pkt_prod(9786) != txdata->tx_pkt_cons(9782)
9749:[21631751.103618] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[7]: txdata->tx_pkt_prod(11860) != txdata->tx_pkt_cons(11832)
9750:[21631753.171240] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[0]: txdata->tx_pkt_prod(9299) != txdata->tx_pkt_cons(9294)
9751:[21631755.246299] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[1]: txdata->tx_pkt_prod(48843) != txdata->tx_pkt_cons(48835)
9752:[21631757.322651] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[2]: txdata->tx_pkt_prod(62970) != txdata->tx_pkt_cons(62955)
9753:[21631759.401861] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[3]: txdata->tx_pkt_prod(29448) != txdata->tx_pkt_cons(29437)
9755:[21631761.476578] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[4]: txdata->tx_pkt_prod(55138) != txdata->tx_pkt_cons(55094)
9756:[21631763.550730] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[5]: txdata->tx_pkt_prod(9786) != txdata->tx_pkt_cons(9782)
9757:[21631765.626399] bnx2x: [bnx2x_clean_tx_queue:1159(eno49)]timeout waiting for queue[7]: txdata->tx_pkt_prod(11860) != txdata->tx_pkt_cons(11832)
9758:[21631765.637799] bnx2x: [bnx2x_del_all_macs:8425(eno49)]Failed to delete MACs: -5
9759:[21631765.640938] bnx2x: [bnx2x_chip_cleanup:9245(eno49)]Failed to schedule DEL commands for UC MACs list: -5
9760:[21631765.658668] bnx2x: [bnx2x_func_stop:9004(eno49)]FUNC_STOP ramrod failed. Running a dry transaction
9761:[21631766.342949] bnx2x 0000:04:00.0 eno49: using MSI-X  IRQs: sp 94  fp[0] 96 ... fp[7] 103
9762:[21631766.449028] bnx2x: [bnx2x_nic_load:2754(eno49)]Function start failed!
9763:[21631766.618178] bond0: link status definitely down for interface eno49, disabling it

 

ethtool shows link not detected:

        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Speed: Unknown!
        Duplex: Unknown! (255)
        Port: Twisted Pair
        PHYAD: 17
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x00000000 (0)
        Link detected: no

We ran ifdown eno49; ifup eno49 to bounce the interface, and then saw the following in dmesg:

[23453271.455597] bond0: Removing slave eno49
[23453271.455606] bond0: option slaves: invalid value (-eno49)
[23453284.807623] bond0: Adding slave eno49
[23453285.504955] bnx2x 0000:04:00.0 eno49: using MSI-X  IRQs: sp 94  fp[0] 96 ... fp[7] 103
[23453285.610512] bnx2x: [bnx2x_nic_load:2754(eno49)]Function start failed!
[23454236.613604] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[23454296.711718] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[23455696.147966] bond0: Removing slave eno49
[23455696.147974] bond0: option slaves: invalid value (-eno49)
[23455712.060846] bond0: Adding slave eno49
[23455712.588317] bnx2x 0000:04:00.0 eno49: using MSI-X  IRQs: sp 94  fp[0] 96 ... fp[7] 103
[23455712.694346] bnx2x: [bnx2x_nic_load:2754(eno49)]Function start failed!

After rebooting the machine, it returned to normal.
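Incidentally, the "option slaves: invalid value (-eno49)" lines in the log above suggest the bounce went through the bonding sysfs interface; a slave can also be detached and re-attached that way directly. A minimal sketch, assuming the bond is named bond0 and the commands are run as root:

# detach the slave from the bond
echo -eno49 > /sys/class/net/bond0/bonding/slaves
# the interface must be down before it can be re-enslaved
ip link set eno49 down
echo +eno49 > /sys/class/net/bond0/bonding/slaves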

Server info:

ethtool -i eno49
driver: bnx2x
version: 1.712.30-0
firmware-version: bc 7.12.83
expansion-rom-version: 
bus-info: 0000:06:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
uname -a
Linux  HOSTNAME.battle.net 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Product Name: ProLiant DL360 Gen9

Has anyone encountered this problem?

 

DL360 Gen9 fails to boot with a red-screen Illegal OPCODE


After a CentOS 6.6 install, the server simply fails to boot from the HDD holding the freshly installed OS.

It reaches the boot sequence, tries the CD-ROM, and then simply crashes with Illegal OPCODE.

 

Any tips?
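One avenue worth trying, though only a sketch and not a confirmed fix for the red-screen Illegal OPCODE: boot the CentOS 6.6 install media in rescue mode and reinstall the boot loader, assuming the OS disk is /dev/sda:

# from the CentOS rescue environment, after it mounts the installed system
chroot /mnt/sysimage
grub-install /dev/sda    # CentOS 6 uses legacy GRUB
exit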

How do you do it?


Dear all, I would appreciate your feedback on the following:

1. Do you use Intelligent Provisioning for server deployment? Do you recommend it for servers deployed in sensitive environments? Do you suggest installing only the boot-time drivers and leaving out the other drivers and applications?

2. After deployment, do you install the full ProLiant Support Pack? Do you recommend it? I would like to monitor the hardware using a third-party SNMP tool; what is the minimum requirement for that? (See the SNMP sketch after this list.)

3. How do you update the firmware, drivers and applications without using HP paid tools? I tested HP SUM and it seems to do the job. Is there anything I should consider while using this tool in production?

4. What security controls would you recommend at the BIOS settings, firmware, and HP driver/application levels? Has anyone come across an HP document on securing these servers?

5. Can I get iLO activity logged to the Windows Event Logs? If not, how can I collect those logs centrally?

Note: all of our servers are DL3x0 Gen9.
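On item 2 above: the usual minimum for third-party SNMP monitoring is the Windows SNMP service plus the HP Insight Management SNMP agents from the Support Pack. As a quick check from the monitoring host, the Compaq/HP enterprise subtree (1.3.6.1.4.1.232) can be walked; a sketch, with the hostname and community string as placeholders:

snmpwalk -v 2c -c public server01.example.com 1.3.6.1.4.1.232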

Thank you

DL80 HDD upgrade: which way is better?


Hello!

I have a DL80 G9 and plan to replace all 3 SATA HDDs (in a RAID on an HP H240) with 3 new SAS SSDs. The server runs Windows Server 2012 R2 x64 and is used as an application server. Which way is better and safer to make that change?

1) Backup (from the old HDDs) -> restore (onto the new SSDs)?

2) Add the new drives to the RAID and use them all together?

3) Or a fresh install from scratch?

Any advice?

P410 controller not working on ML110 G9


I have installed a P410i controller in a new HPE ML110 G9 server, and I cannot even boot from this controller after installing Windows Server 2012 or VMware ESXi using Intelligent Provisioning. Is there a solution in the BIOS, or is this controller simply not compatible, as older posts suggest?

Please advise.

regards

Noel Fadel

No Controller detected in HP BL460c G7


Dear experts,

I have an HP blade server, a BL460c G7, and I am getting a "no controller detected" error when configuring RAID 1 through the SmartStart CD.

One more thing I want to know: one disk is HP-branded and the other is a Seagate, both of the same capacity (146 GB). Is it possible to configure RAID with this pair?

Please let me know how to resolve this.

Regards

Vaibhav Chavan

Hard Disk Firmware


We have a ProLiant DL380 G6 server, and one of the hard drives is showing an orange light. After doing some research I found that it could be a firmware problem. I tried to get the firmware, but there is no website available to download it from. Are there any suggestions?


ProLiant DL20 Gen9 Server doesn't see disk drives


I have a brand new DL20 that has 2 disk drives connected to a Smart HBA H240 controller in slot 2. It doesn't like the disk drives and thinks that a replacement drive has been installed that needs to be configured.

Keep in mind that I just bought this server and it is new right out of the box from the factory.

Also, I have 4-hour on-site support and a technician was supposed to come to my facility yesterday but didn’t show up or call.

I have tried using the Dynamic Smart Array to delete and recreate the LUN with a configuration identical to the original, but after booting into the installers, they don't see any disk drives.

There has never been an operating system or any other software on this brand-new system. The only way to boot it is with a CD.

I have requested an RMA from the reseller (Melilo Consulting) but they refused.

Does anyone have any ideas on how to fix this server so it can be used?
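In case it helps anyone triaging the same symptom: booting a Linux live image that carries the Smart Storage Administrator CLI and dumping the controller configuration will show whether the H240 itself sees the physical drives and the recreated logical drive. A sketch, assuming ssacli is available on the live image:

# show controllers, arrays, logical and physical drives
ssacli ctrl all show config detail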

HP Proliant DL580 G4 memory extremely slow


Hi! The system is very, very slow, and I found that the memory speed (read or write) periodically drops by a factor of 10.

Command to test:

 

mkdir RAM_test && sudo mount -t tmpfs tmpfs RAM_test && cd RAM_test
dd if=/dev/zero of=data_tmp bs=128k count=1024

 

gives strange results:

 

alex@node6:~/RAM_test$ dd if=/dev/zero of=data_tmp bs=128k count=1024
1024+0 records in
1024+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 4.89601 s, 27.4 MB/s
alex@node6:~/RAM_test$ dd if=/dev/zero of=data_tmp bs=128k count=1024
1024+0 records in
1024+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 5.05714 s, 26.5 MB/s
alex@node6:~/RAM_test$ dd if=/dev/zero of=data_tmp bs=128k count=1024
1024+0 records in
1024+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 4.39217 s, 30.6 MB/s
alex@node6:~/RAM_test$ dd if=/dev/zero of=data_tmp bs=128k count=1024
1024+0 records in
1024+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.669536 s, 200 MB/s
alex@node6:~/RAM_test$ dd if=/dev/zero of=data_tmp bs=128k count=1024
1024+0 records in
1024+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.717854 s, 187 MB/s
alex@node6:~/RAM_test$ dd if=/dev/zero of=data_tmp bs=128k count=1024
1024+0 records in
1024+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 4.90812 s, 27.3 MB/s
alex@node6:~/RAM_test$

 

So sometimes we get about 200 MB/s, and sometimes only about 30 MB/s.

OS: Ubuntu 16.04

I've tried different memory in different configurations - 1 block, 4 blocks, 2 DIMMs, 4 DIMMs - with no change. The memory is configured in the default Advanced ECC mode, with no hot-add.

The memory is original HP (1GB DDR2 ECC), all the same part number. I tried different modules, with the same result. It looks like something in the hardware? Resetting NVRAM to defaults also had no effect.

I have a couple of DL380s - they show about 910 MB/s.

Does anybody have an idea what is wrong with the memory?
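If anyone wants to reproduce this without the tmpfs/dd layer in the way, a memory-only benchmark may help isolate it; a sketch, assuming a recent sysbench (1.0+) is installed:

sysbench memory --memory-block-size=1M --memory-total-size=10G run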

Download BIOS


Hi all, I need to download a new BIOS for the HP ProLiant DL360 G6. This server is out of warranty; how can I do this?

Thanks

 

HP SPP-2016.10 fails to boot in legacy mode on DL180 Gen9


Dear Community.

 

We're currently experiencing unusual behavior on some of our DL180 Gen9 machines, which were recently ordered from HPE. If the HP SPP (for upgrading the firmware) is booted in legacy mode (UEFI works fine!), kernel messages are displayed stating multiple "CPU soft lockups", and the system remains in an unresponsive state.

 

What could be the reason for the image not booting properly on some machines while others with the same hardware configuration load it perfectly fine? We've tried this with several servers, all on factory defaults except for the Legacy/UEFI boot setting.

 

Any help or hints would be appreciated.

Thanks in advance.

Rene

HP Proliant DL560 health status tool


I have a ProLiant DL560 running Windows Server 2012 R2.

What software can I install on it to show the health and status of the hardware? Ideally a tool with an illustrative GUI.

Thanks

HP Network Configuration Utility: The service control manager database is locked


We had problems with a BL460c G7 and suspected a hardware issue, so we patched it with the HP SPP from October 2016, just in case it was a firmware/driver problem.

Initially the server restarted and was back on the network using a single configured NIC. The HP Network Configuration Utility, however, gave the following error: "WARNING: The version of the miniport driver for the following adapters is not compatible with the HP Network Configuration Utility". It listed that the drivers for the HP NC553i card needed updating (2.102.517.0 < 10.7.245.3).

I removed the teaming software and the NICs, restarted, and eventually managed to get an earlier version of the drivers loaded.

Since then, whenever I start the HP Network Configuration Utility, it says "The service control manager database is locked. HP Network Configuration Utility cannot be launched".

I have seen some reports about this on the web but no solutions.  Does anyone know how to resolve this?
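One diagnostic worth trying before the next reboot, a sketch using the built-in sc utility from an elevated prompt; it reports whether (and for how long) the service control manager database has been locked, and a reboot normally clears a stale lock:

sc querylock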

 

 

Only one hard disk is being detected when there are clearly two


I have an HP ProLiant DL380 G7 (2 x 2.93GHz, 64GB RAM) and have tried both Ubuntu 16.10 and CentOS 7.

According to the indicator lights the hard drives are OK, but the operating system checks (e.g. fdisk, dir /dev/sd*, lsblk) do not show two disks of roughly 450GB.

The specs clearly say 2 x 450GB 6G SAS HDDs. There are smaller partitions, but only one disk of that size.

I can't see how to use the ROM-Based Setup Utility to resolve this.

Please advise.

-john 

P.S. Result of lsblk with Ubuntu 16.10:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 1 29.2G 0 disk
└─sdb1 8:17 1 29.2G 0 part /cdrom
sr0 11:0 1 1024M 0 rom
loop0 7:0 0 1.4G 1 loop /rofs
sda 8:0 0 419.2G 0 disk
├─sda2 8:2 0 418.2G 0 part
│ ├─cl-home 252:1 0 336.7G 0 lvm
│ ├─cl-root 252:2 0 50G 0 lvm /media/ubuntu/2995a8e4-cb56-474b-a6a8-cdbda4a4acf3
│ └─cl-swap 252:0 0 31.5G 0 lvm
└─sda1 8:1 0 1G 0 part /media/ubuntu/76f97246-306e-4e56-9bc0-fcbc5d6ca22c
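Worth noting: on a DL380 G7 the drives sit behind the P410i Smart Array controller, so the OS only ever sees logical drives, never the raw disks. A single ~419 GiB sda is exactly what two 450 GB (decimal) drives configured as one RAID 1 logical drive would look like. A sketch to confirm, assuming the hpacucli (or newer ssacli) package is installed:

# list arrays, logical drives and the physical drives behind them
hpacucli ctrl all show config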


Phantom HDD overheating


Good day!

I have two HP ProLiant DL360 G7 servers.

Each has 4 disks installed, SAS and SATA.

The controller sees that the SATA disks are at a temperature of ~18 degrees.

But iLO shows them as supposedly overheated, and the server spins the fans up to maximum.

It gets very noisy. How can I prove to iLO that the disks are not overheated?

Or is there some way to reduce the fan speed?

Upgrade DL180 G6 RAID HDDs


I currently have a DL180 G6 server with 2 x SAS (OS) drives, 6 x 1TB SATA (data) drives in RAID 5, and a P410 Smart Array.

I'd like to upgrade the storage partition by replacing the 1TB drives with 2TB drives. This server is off-site, and the ACU was uninstalled from the OS (Windows 2008 R2), so I can't interrogate it remotely.

How do I go about replacing the HDDs and rebuilding the RAID? Hopefully I can keep the OS partition and just delete the current data partition, replace the drives, and rebuild the RAID 5 partition?
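If reinstalling the full ACU isn't an option, the CLI can do this remotely; a sketch assuming hpacucli is (re)installed on the Windows 2008 R2 host, the P410 is in slot 0, and the data logical drive is number 2 (all placeholders to verify first):

hpacucli ctrl slot=0 show config            # confirm array/LD numbering first
hpacucli ctrl slot=0 ld 2 delete forced     # drop the old 6x1TB RAID 5 data LD
# physically swap the 1TB drives for 2TB drives, then recreate:
hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=5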

Cheers

Ben

ssascripting unable to set Smart Array configuration when used in SCCM on G9 servers


The SCCM task sequence steps are as follows:

1. Erase the current array config -> ssascripting.exe -i erase.ini

2. Set New Array Config -> ssascripting.exe -i custom.ini -internal -reset -e error.log

The first step executes successfully, but step 2 throws ERROR: 2828 New Array ID already exists. This happens only on G9 servers; G7 and G8 servers work fine without any issue.

To troubleshoot the issue I logged into SSA and found that the array config had been removed by step 1, and I was able to run step 2 successfully after rebooting manually.

Is there any way to achieve this without rebooting the server after step 1?
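One possible workaround, strictly an untested sketch: fold the erase into the configure pass so the controller never holds a stale array ID between task-sequence steps. Legacy ACU scripting documented a ClearConfigurationWithDataLoss option in the input file; assuming ssascripting on Gen9 still honors it, a combined input file might look like this, with the array layout below as placeholders:

; combined.ini - erase and recreate in a single ssascripting pass (untested)
Action= Configure
Method= Custom
Controller= First
ClearConfigurationWithDataLoss= Yes
Array= A
Drive= *
RAID= 1

invoked as: ssascripting.exe -i combined.ini -internal -reset -e error.log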

Extend a 4-disk RAID 10 array on an ML350p Gen8/P420i with 2 disks


Is it possible to extend a RAID 10 array on an ML350p Gen8 with a P420i by adding 2 disks?

The array currently consists of 4 disks.
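If the controller allows it (online array expansion on these Smart Array controllers generally requires the FBWC cache module), the two new disks can be added to the existing array from the OS; a sketch assuming ssacli, the P420i in slot 0, array A, and placeholder bay IDs:

ssacli ctrl slot=0 array A add drives=1I:1:5,1I:1:6

After the expansion completes, the extra space appears as unused capacity in the array, from which the logical drive can then be extended.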

