Channel: ProLiant Servers (ML,DL,SL) topics
Viewing all 10362 articles

no 12 volt power on pcie power cable- DL380p GEN8 w/PCIE x16 2 slot riser card and Quadro K5000 gpu

I am working with a DL380p Gen8 where I am trying to install a single NVIDIA Quadro K5000 in a card cage with two PCIe x16 slots. When the server comes up, I get an on-screen error message telling me to plug in the PCIe power cable for the GPU. The power cable is plugged in, but when I measure it with a multimeter, I get 0 volts. I have two 1200 W/110 V power supplies installed and properly plugged in. I would be grateful for suggestions.


HP Proliant 580 - GPU error code 43


Hi All,

I have an HP DL580 Gen8 server and am having trouble with the GPU (Quadro K6000, officially supported). After installing the proper drivers I get a warning about error code 43 in Device Manager.

I tested with two different Quadro drivers too; neither solved it.

While booting, the GPU works and I can see output right up until Windows Server 2016 starts. So the card looks to be working until I boot into Windows.

The power cable is OK; I checked it with a voltmeter and all pins give the proper voltage.

On the motherboard there is a small display showing code "d4". Where in the manual can I find more specific info on the error codes of this display?

fails to boot in legacy mode

$
0
0

Hello,
We're currently experiencing unusual behavior on some of our machines. If the HP SPP (used for upgrading firmware) is booted in legacy mode (UEFI works fine!), kernel messages reporting multiple "CPU soft lockups" are displayed and the system remains unresponsive.
What could cause the image to fail to boot properly on some machines, while others with the same hardware configuration load it perfectly fine?
Thanks in advance.

 

 

 

HPE DL 360 Server connection with HPE G3 KVM Switch through Tripp-lite USB interface (B078-101-USB2)

$
0
0

Hi all,

We have four HPE DL360 servers and one HPE G3 KVM switch in our office, and we are using the Tripp Lite B078-101-USB2 USB server interface to connect the two.

We observed that only the green LED is lit on the RJ-45 connector of the Tripp Lite USB interface. We connected it to the HPE G3 KVM switch using the supplied CAT5 cable, but nothing is shown on the KVM LCD console.

Can anyone confirm whether the HP server or the HPE G3 KVM switch supports the Tripp Lite brand USB interface?

Thanks and Regards,

M.Kannan

ProLiant DL380 Gen9 with SmartArray H240ar not flagging HDs as defective even with critical failure


Hi everyone,

My issue is related to a possible defect with the H240ar (latest firmware, of course), which manifests itself in the H240ar not flagging a drive as failed even when the drive has surface defects and a critical drive failure error is logged in AHS.

Obviously the failure is real, as support immediately opens an RMA and replaces the drive. However, the question of why the H240ar didn't flag it as bad remains open, and support isn't willing to address it. There's an underlying cause here left uncovered, and I'm not being guided through the escalation process, which would hopefully lead to either a coherent answer or an internal HPE product design defect analysis.

This isn't one of those anecdotal issues, it is quite real and I am stumped as to why or how the design of this controller is different from the earlier versions which I've used for decades.

AHS> Active Health Event>  Critical,3094,21758,Smart Array,Critical System Event, ,0x00,12/21/2019 05:49:33,Event Code: 48, [2019-12-20 21:49:17] Fatal drive error, Port=1I Box=3 Bay=2

With the above failure logged into AHS the controller still showed the drive as OK and there were no alarms. I've had drives go bad for decades on pre-Gen9 HPE servers and every time the controller will flag a drive as bad even using predictive failure status which is less stringent than fatal failure.
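For reference, the state the controller itself reports for that drive can be pulled with ssacli (a sketch; the controller slot number 0 is an assumption, and on older tool versions the binary is hpssacli or hpacucli):

```shell
# One-line state per physical drive, as the controller sees it
ssacli ctrl slot=0 pd all show status

# Full detail for the drive from the AHS entry (Port=1I Box=3 Bay=2)
ssacli ctrl slot=0 pd 1I:3:2 show detail
```

Capturing this output at the moment AHS logs the fatal error would document the discrepancy for the support case.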

What happened with Gen9? I find this pretty odd. The issue for me is that, since the controller doesn't flag the drive and remove it from the array, the failure condition is passed on to the OS. Guess what happens there: unpredictable behavior due to disk errors, OS lockups, BSODs.

Wasn't that exactly what HPE promised to protect us from by having Smart Array and all the elements of a redundant disk system: hiding disk failures behind the Smart Array layer, which manages the underlying hardware?

If an HPE staff stumbles upon this I would very much need your guidance and help in escalating my case (will be happy to IM the case #)

Thank you

~B

 

I have two HP DL380 G7s with a dying internal controller; I would like to replace it with a PCIe card


Hello,

I'm asking another question about controller replacement.

I need to replace the controllers in two HP DL380 G7 servers.

I could put in another P410i (refurbished), but I would like to know if I can install a newer controller that:

- lets me keep the existing arrays

- can be connected to the existing drive cables

- is compatible (I know that if you install incompatible cards, the fans go crazy)

Thanks again for any hint!

Mario

DL20 Gen10 iLO5 AHCI - Fans at 43% with AMS


I configured SATA AHCI support because I need to install VMware ESXi. As expected, the fans went up to 45%. I installed ESXi 6.7 U3 (using the HPE Custom Image).

From the HP forums (based on similar topics) I noticed comments by HPE employees that AMS is designed (among other things) to supply iLO with temperature readings that are otherwise unavailable when the SW RAID controller is off, so that iLO does not have to fall back to more aggressive fan speed management. It is also reported that installing AMS has indeed helped on some occasions when running Linux.

I verified that:

- In System Information, Agentless Management Service status is OK (it actually came with HPE Custom Image);

- In Power & Thermal, the highest temperature reading is 08-BMC System 13 14 OK 66C; other temperature readings do not exceed 44C; ambient temperature is 23C;

- Verified the newest iLO 5 FW: 2.10 Oct 30 2019 (updated the server with SPP 2019.12.0, so other FW should be up-to-date);

- When SATA is configured in Smart Array SW RAID Support, fans are below 9%.

What I tried to do:

1) When using HPE Custom Image ESXi 6.7 Update 3, switching Workload Profile between "General Power Efficient Compute" and "Virtualization - Power Efficient" - makes no difference;

2) Installed from scratch VMWare ESXi 6.7 Update 3 non-HPE image (boot from image, not using Intelligent Provisioning or Rapid Setup), additionally installed packages "HPE Offline Bundle for ESXi 6.7 3.4.5" and "HPE Utilities Offline Bundle for ESXi 6.7 3.4.5". Verified that amsd version 670.11.4.5-18.7535516 is showing in ESXi packages list. Verified that AMS is showing OK in iLO System Info - fans remain at 45%;

3) Just in case, installed on ESXi package "Agentless Management Service Offline Bundle for VMware vSphere 6.7" - fans remain at 45%;

4) With this new ESXi installation, switching Workload Profile between "General Power Efficient Compute" and "Virtualization - Power Efficient" - makes no difference.
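For what it's worth, the AMS side can also be double-checked from the ESXi shell. A sketch using commands I believe exist on 6.7 (output formats may differ by build):

```shell
# The amsd VIB should appear in the installed-packages list
esxcli software vib list | grep -i ams

# Shows whether the CIM/WBEM service (which AMS relies on) is enabled
esxcli system wbem get
```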

It seems that either I am missing some settings (BIOS? iLO?), or AMS is not working as expected (with VMware? on the 6.7 U3 release?).

The server is intended for use in an environment where excessive noise causes significant inconvenience, and it is intended to run the VMware ESXi hypervisor. With all that in mind, I am really motivated to cut down this excessive fan spinning.

Guys, really counting on your suggestions!

The server is not in production and no VMs have been created so far, so I am open to any proposals, including reinstalling ESXi or installing alternative ESXi releases.

Thanks in advance,
Andrey


DL360p Gen8 and numanode


Hello. I have a DL360p Gen8 server with two E5-2680 processors. There is a 10 Gb network card in the PCIe riser on NUMA node 0. Is it possible on the DL360p Gen8 to distribute the network card's interrupts to the second processor on NUMA node 1 as well? Or is it better to use another HP DL380p Gen8 server with two network cards in different PCIe risers on different NUMA nodes, and pin the interrupts of the card on node 0 to the first processor and those of the card on node 1 to the second processor?
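In case it helps: on Linux you can pin a NIC's interrupts to the CPUs of a given NUMA node by hand. A minimal sketch, where the interface name `ens1f0` and the node-1 CPU range 8-15 are assumptions (check yours with `lscpu` and `cat /sys/class/net/ens1f0/device/numa_node`):

```shell
# Stop irqbalance first so it doesn't undo the manual pinning
systemctl stop irqbalance

# Pin every interrupt vector belonging to the NIC to the node-1 CPUs
for irq in $(awk -F: '/ens1f0/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
    echo 8-15 > /proc/irq/$irq/smp_affinity_list
done
```

Note that a NIC generally performs best with its interrupts on the NUMA node its PCIe riser is attached to, so pinning to the remote node trades memory locality for spreading CPU load.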

VMWare 6.0 u3 passthrough Nvidia RTX Quadro 4000 fail by shutdown the VM


Hi dear community,

After passing an Nvidia RTX Quadro 4000 through to a VM running Windows 10, my HP DL380p Gen8 server crashes when the VM is shut down, then restarts with errors.

Does anybody have any idea?

VMware 6.0 U3

 

Best regards,

Kirito

 

 

swap server with smartarray - not same slots


Hi,

due to a server failure (the server environment was causing the issue) I had to move my RAID 1 configuration to a new server:

ProLiant DL380 G7, P410 Smart Array, same model, most likely different firmware levels.

When moving the RAID set I did not place the drives in the same slots; instead of 1+2 I placed them in 5+6.

Error message:

1785 - Slot 4 Drive Array Not Configured.
Drive positions appear to have been changed.
Run Array Diagnostics Utility (ADU) if previous positions are unknown.
Then turn system power OFF and move drives to their original positions.

Now the RAID information and even the data on the disks seem to be gone?

What can I do to recover the data, or even better, to recover the RAID configuration?

Thanks

HP DL380 G9 can't reboot until the CMOS battery is discharged


Hello,

I have two HP DL380 G9 servers. 

The first server went down abnormally while I was running MySQL data synchronization on December 24, 2019 at 2 a.m.

The second server went down abnormally while I was running MySQL on December 24, 2019 at 11 a.m.

The common symptom is that the power LED flashes green (1 Hz) when I try to reboot the two machines. I tried a lot of methods, including a discharge procedure on the motherboard, but it did not work.

In the end I removed the CMOS battery to discharge it and re-installed it, and then they started.

The serviceman thought it was a motherboard issue, but I don't believe in such a coincidence.

Within three years, these HP DL380 G9 servers with Smart Array P840 have broken down many times. Therefore, I think it is caused by an abnormality of the Smart Array P840.

What is the cause of the two servers' breakdowns?

Thanks,

Memory population on Proliant DL380


Hi everyone

I have a brand-new DL380 Gen10 server, which arrived with one CPU (10-core Xeon) and one 32 GB DIMM (located in slot 8). I have to add two 16 GB DIMMs, and I've followed the schema under the chassis (putting the other two DIMMs in slots 10 and 12), but regardless of the combination of slots, during POST the server always says that I'm using a non-optimal RAM configuration, asking me to press F1 to continue or F2 to see the logs (after 30 seconds it goes on, so it's not a big problem). I've found no documentation about mixing different-size DIMMs.

HP PROLIANT ML110 G6 BEEPING


Hello, I have a ProLiant ML110 G6, and when I turn it on it gives 4-3-3-1 short beeps. Has anyone solved this? I have tried many different memory modules, changed the PSU, and changed the processor, but none of those seem to be the problem. I should mention that I noticed it one morning when I woke up and found my two servers down; one was OK but powered off (HP MicroServer Gen8), but the ProLiant ML110 G6 does not boot.

I have access to the LO100 management interface, but I have no video, and the fan runs at full speed all the time.

dracut-initqueue timeout


After a reboot, the 460c machine is stuck at the following:

[  OK  ] Started udev Kernel Device Manager.
         Starting udev Coldplug all Devices...
         Mounting Configuration File System...
[  OK  ] Mounted Configuration File System.
[  OK  ] Started udev Coldplug all Devices.
         Starting Show Plymouth Boot Screen...
[  OK  ] Reached target System Initialization.
         Starting dracut initqueue hook...
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Basic System.
[  126.538177] dracut-initqueue[542]: Warning: dracut-initqueue timeout - starting timeout scripts
    (the same warning repeats roughly every 0.5 seconds)
[  187.028486] dracut-initqueue[542]: Warning: dracut-initqueue timeout - starting timeout scripts
Warning: /dev/mapper/vg00-root does not exist
Warning: /dev/vg00/root does not exist
Warning: /dev/vg00/swap does not exist

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
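The warnings indicate the initramfs cannot find the LVM volume group vg00 that holds root and swap. A first check worth trying from the emergency shell (a sketch; the device and VG names are taken from the warnings in the log):

```shell
# Does the initramfs see the volume group at all?
lvm vgscan

# If vg00 was found, activate its logical volumes
lvm vgchange -ay vg00

# vg00-root and vg00-swap should now appear here
ls /dev/mapper/

# Leave the shell so the boot can continue
exit
```

If vgscan finds nothing, the underlying disk or controller driver is likely missing from the initramfs, and rebuilding it (e.g. with dracut from a rescue environment) would be the next step.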


ProLiant DL380 Gen9 abnormally shuts down


I want to share two similar incidents that happened within a week, with the server reacting the same way each time.

Incident 1) On 28th Dec 2019 our server room AC had an issue for some time; this server shut down and would not start for around 3 hours, after which it started without us doing anything.

Incident 2) The same thing happened today: the server room temperature increased for a period, the server shut down abnormally, and in the same way, after around two and a half hours, we were able to start it without doing anything.

Now we need to confirm the root cause: why is this happening with this server only? We have multiple servers installed in the same rack and the same area which are working fine. Please help identify the exact reason so we can overcome this issue as soon as possible, as we are running critical business services on this server and this is impacting our business.

HP ProLiant ML10 Gen9 Initial Setup


I bought this server without an OS, I believe a year and a half ago. Due to a busy schedule I didn't get time to check or do anything with it. Since Christmas vacation I have been trying to install FreeNAS, but I am not able to see the display, so I cannot check the boot sequence. There are two DisplayPort outputs on the back; I tried both, using a DisplayPort-to-HDMI adapter and a DisplayPort-to-VGA adapter, but there is no display. I don't have a monitor with a DisplayPort input, so I need to use an adapter. Any idea on the following would be of great help:

1. Is there a different way I should be accessing the boot sequence?

2. Since I am not able to see any display, is there another way to check on what's going on?

This is just for home use, and I have never used HP servers before.

 

 

HPE Gen 10 DL360 P408i-a


The server in question is an HPE DL360 Gen10 with a P408i-a SR controller.

I was able to install Windows Server 2016 with Intelligent Provisioning, and this is the only way I can install the OS. I have two 600 GB SAS disks; I created a RAID 1 array and a logical drive. I was able to boot from USB (UEFI, GPT), but after a successful boot, at the step where I have to choose the disk, I couldn't see the logical drive which I created with RAID. I even tried legacy mode, and also tried Norton Ghost and Acronis, but got the same result: no matter what I use to boot, the OS LUN is not visible to the installer.

No-Battery Write Cache enable after smart array battery fail


Hello all,

We have a lot of DL380 gen9, and now their batteries are failing.

I know the risk with No-Battery Write Cache from document as follow:

No-Battery Write Cache:
If you have no battery installed or it is failed you can enable this feature to use the on-board cache of the raid controller.
But if the server is not protected by a UPS and loses power, data in the cache will be lost, causing data corruption on the hard drive.

I got the command to enable it:

Enable smart array write cache when no battery is present (No-Battery Write Cache option). Values are Enable or Disable.
ssacli ctrl slot=0 modify nbwc=enable

I would like to know whether any other parameters need to change.
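As far as I know, no other parameter is strictly required, but it is worth confirming the setting took effect afterwards (a sketch, assuming slot 0 as in your output below):

```shell
# Enable write-back caching with no battery present, then verify
ssacli ctrl slot=0 modify nbwc=enable
ssacli ctrl slot=0 show | grep -i "no-battery"
# Expect: No-Battery Write Cache: Enabled
```

Separately, note that "Drive Write Cache: Enabled" in your output is the disks' own volatile cache, which carries the same power-loss risk; whether to leave it on is a judgment call for your environment.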

# hpssacli
>ctrl slot=0 show

Smart Array P440ar in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
Serial Number: XXXXXXXXXXXX
Cache Serial Number: XXXXXXXXXXXX
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 6.88
Rebuild Priority: High
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: Yes
Current Parallel Surface Scan Count: 1
Max Parallel Surface Scan Count: 16
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disable
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Enabled
Total Cache Size: 2.0 GB
Total Cache Memory Available: 1.8 GB
No-Battery Write Cache: Enabled
SSD Caching RAID5 WriteBack Enabled: True
SSD Caching Version: 2
Cache Backup Power Source: Batteries
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True

 

ML110 ProLiant G7 RAID drivers

$
0
0

I'm very new to this forum, but I'm having trouble installing Windows Server Essentials 2019 onto my rig in RAID 0. I am aware that HP doesn't support this OS on this machine, but since SBS Essentials 2011 support ends this month, I felt an upgrade was necessary. It only controls a home network and is merely used for remote access and as one of the working backups for my photographs.

I can easily access the RAID array config on boot and can easily convert the array to RAID 0 with 4 x 1 TB drives. But I'm unable to locate the necessary drivers to install WSE 2019 onto the RAIDed drives. There is scant to no information on HP's website and little else from third parties. Drivers and setup software are nowhere to be found, so I'm at a loss as to what to do.

Maybe I could downgrade to WSE 2016, but that appears to be very similar to the 2019 version and has the same install problems. WSE 2019 installs perfectly well in AHCI mode rather than RAID, but I could really make use of the speed bump and single volume of RAID 0. Any ideas, help, or even drivers? Thanks for your time.
