As a result of Hurricane Matthew, our business shut down all servers for two days.

Among the servers was an ESXi host with an attached HP StorageWorks MSA60.

When we logged into the vSphere client, we noticed that none of our guest VMs are available (they’re all listed as “inaccessible”). When I go through the hardware status in vSphere, the array controller and all attached drives show as “Normal”, but the drives all show up as “unconfigured disk”.
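For reference, these are the standard checks from the ESXi shell that show the same picture (commands are illustrative of what we looked at, not a transcript):

```shell
# List all registered VMs and their state (all of ours show as inaccessible)
vim-cmd vmsvc/getallvms

# Confirm whether the MSA's LUN is still visible to the host at the device level
esxcli storage core device list

# Check whether the VMFS datastore backed by the MSA actually mounted
esxcli storage filesystem list
```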

We rebooted the host and tried going into the RAID config utility to see what things look like from there, but we received the following message:

“An invalid drive movement was reported during POST. Modifications to the array configuration following an invalid drive movement may result in loss of old configuration information and contents of the original logical drives.”

Of course, we are extremely confused by this because absolutely nothing was “moved”; nothing changed. We simply powered up the MSA and the host, and have been having this issue ever since.

We have two main questions/concerns:

Since we did nothing more than power the devices off and back on, what could’ve caused this to happen? We of course have the option to rebuild the array and start over, but I’m leery about the chance of this happening again (especially since I have no idea what caused it).

Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

  1. Since we did nothing more than power the devices off and back on, what could’ve caused this to happen? I of course have the option to rebuild the array and start over, but I’m leery about the potential for this happening again (especially since I have no idea what caused it).

A variety of things. Do you schedule reboots on all your gear? If not, you really should, just for this reason. The one host we have, XS decided the array wasn’t ready in time and didn’t mount the main storage volume on boot. Always good to learn these things in advance, right?

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

Possibly, but I’ve never seen that particular error. We’re talking really limited experience here. Depending on which RAID controller the MSA is attached to, you might be able to read the array information from the drives on Linux using the md utilities, but at that point it’s faster to just restore from backups.
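If you do try the Linux route, the rough shape of it is below. Device names are placeholders, and note this only helps if the disks carry md-style metadata; an HP Smart Array controller writes its own proprietary metadata, in which case mdadm won’t see anything. Keep everything read-only until you know what you have:

```shell
# Look for RAID superblocks on each member disk (read-only, safe)
mdadm --examine /dev/sd[b-e]

# If superblocks are found, attempt a read-only assemble so nothing is written
mdadm --assemble --scan --readonly --verbose

# If the array comes up, mount the filesystem read-only and copy the data off
mount -o ro /dev/md0 /mnt/recovery
```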

I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.

Does your normal reboot routine of the server include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; likely that’s where the issue is.

I’d call HPE support. The MSA is a flaky unit, but HPE support is very good.

We unfortunately don’t have a “normal” reboot routine for any of our servers :-/.

I’m not really sure what the correct order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If this is correct, we’ve already tried doing that since we first discovered this issue today, and the problem persists :(.

We don’t have a support contract on this server or the attached MSA, and they’re most likely way out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I’m not sure how much we’d have to spend to get HP to “help” us :-S.
