Categories
Outages

[Resolved] xn10 down

xn10 has suffered a primary drive failure in its RAID10 array and is currently unbootable; it looks like everything in /boot is damaged.

We are currently waiting on remote hands to attach rescue media and will work to repair this and restore service as quickly as possible.

Update: The operating system is too badly damaged and will require a reinstall. Unfortunately, it also looks like two drives are failing in the same RAID1 set. We will reinstall, and to ensure complete recovery of customer data we may need to wait for the rebuild to complete before putting the server back in service. Further updates will follow.

Update 2 (17:45 GMT): The server has been reinstalled and we have been able to recover the RAID array; it is now rebuilding and currently at 60%. Once the rebuild finishes we will reboot the server into Xen and run an fsck against each VM individually, booting each one as soon as its check completes. When all VMs have been booted, SolusVM will be reconnected and we will make arrangements to hot-swap out the other bad drive.
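For anyone curious where rebuild percentages like the ones in the updates below come from: if a node uses Linux md software RAID (an assumption on our part; a hardware controller would report progress through its own vendor CLI instead), the kernel exposes progress in /proc/mdstat. A minimal sketch of polling it:

```python
import re
import time

# Hypothetical sketch: poll /proc/mdstat for Linux md software RAID
# rebuild progress. Assumes the node uses md RAID; a hardware RAID
# controller would report through its own vendor tooling instead.

PROGRESS = re.compile(r"(recovery|resync)\s*=\s*([0-9.]+)%")

def rebuild_progress(path="/proc/mdstat"):
    """Return the first recovery/resync percentage found, or None."""
    with open(path) as f:
        match = PROGRESS.search(f.read())
    return float(match.group(2)) if match else None

if __name__ == "__main__":
    while True:
        pct = rebuild_progress()
        if pct is None:
            print("No rebuild in progress")
            break
        print(f"RAID rebuild at {pct:.1f}%")
        time.sleep(60)  # re-check once a minute
```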

Update 3 (18:05 GMT): RAID Rebuild at 72%

Update 4 (18:30 GMT): RAID Rebuild at 85%

Update 5 (18:50 GMT): RAID Rebuild at 95%

Update 6 (19:17 GMT): VMs are now being checked and booted one by one.

Update 7 (21:29 GMT): 50% of VMs are now online.

Categories
Planned Maintenance

Xen Scheduled Maintenance

Hi,

This is a notification of upcoming maintenance to our Xen host nodes. Unfortunately, due to recently discovered exploits, we will have to update the host nodes in order to patch them. We take security very seriously, and as such these updates will be treated as a priority.

When are the updates happening?

We will begin Friday 18th December 2015 at 20:00 GMT

How long will it take?

We are aiming to have each machine completed within an hour.

Is there going to be any downtime?

Unfortunately, yes. As we will need to update the virtualisation system itself, downtime will be required while each node reboots.

We will of course do our utmost to complete these updates as quickly as possible, with as little disruption to services as possible.

Categories
Planned Maintenance

[Phase One Complete] Emergency Xen Maintenance

We are currently performing some emergency Xen maintenance, which started at 19:30 GMT on Thursday 29th October; downtime should be less than 60 minutes per server.

More in-depth details will be posted once all systems have been patched.

Completed: xn10, xn22, xn23, xn17, xn13, xn12, xn9, xn3

Phase one has now been completed.

Categories
Outages

[Resolved] XN17 Issues

We are currently experiencing an issue with this server, which we have narrowed down to a faulty Power Supply Unit; it is now in the process of being replaced. All updates will be provided here.

Update @ 15:51 GMT: This is proving to be more than a simple PSU swap. The onsite team are still working on the machine and hope to have it back online within the next 60 minutes.

Update @ 17:07 GMT: An update from the onsite staff: it seems there has been a short inside the machine which may have destroyed multiple hardware components. We are still investigating, and as of now there is no ETA for when this machine will be back online; once we have reached a conclusion we will of course update here.

Update @ 02:00 GMT: This has now been resolved after multiple hardware failures caused by a faulty Power Supply Unit.

Categories
Outages

[Resolved] XN17 Unplanned Downtime

We are currently facing an issue with XN17 which is affecting connectivity to all VPS on that node. We are working with on-site staff to get this resolved as quickly as possible, and all updates will be posted here as soon as we have them.

Update: This has now been resolved.

Categories
Outages

[Resolved] xn4 RAID Issues

We can confirm that two drives in one half of the RAID10 array are failing. SolusVM access to this server has been disabled, and we ask that you please avoid doing anything on your server which requires heavy IO, e.g. backups.

The server is now online, but some VPS filesystems are having issues. For this reason we are stopping each VPS individually, running a manual fsck over its filesystem, and then starting the VPS again. Once this has finished we will take live backups of all VPS to another machine.
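As a rough illustration of that stop, check, start cycle: the sketch below assumes LVM-backed Xen guests driven by the legacy xm toolstack (common on SolusVM Xen nodes of this era, though not confirmed here); the domain names and volume paths are purely illustrative.

```python
import subprocess

# Hypothetical sketch of the per-VPS check cycle described above.
# Assumes LVM-backed Xen guests managed with the legacy "xm"
# toolstack; names and paths are illustrative, not from the host.

GUESTS = {
    "vm101": "/dev/vg0/vm101_img",  # domain name -> backing volume
    "vm102": "/dev/vg0/vm102_img",
}

def check_and_boot(domain, volume):
    # Make sure the guest is stopped before touching its filesystem.
    subprocess.run(["xm", "shutdown", "-w", domain], check=False)
    # Repair the filesystem non-interactively; -y answers yes to fixes.
    # fsck exits non-zero when it corrects errors, so don't raise.
    subprocess.run(["fsck", "-y", volume], check=False)
    # Boot the guest from its Xen config file.
    subprocess.run(["xm", "create", f"/etc/xen/{domain}.cfg"], check=True)

for name, vol in GUESTS.items():
    check_and_boot(name, vol)
```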

Assuming this completes without a further crash of the host, we will replace one drive at a time. Should the RAID array fail again, we will replace all drives, reinstall the host and restore the backups we created.
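For reference, a staged single-drive swap on Linux md RAID typically follows a fail, remove, physical swap, add sequence. A hedged sketch, again assuming md software RAID rather than a hardware controller, with hypothetical device names:

```python
import subprocess

# Hypothetical sketch of replacing one failing member of an md
# RAID10 array at a time. Device names are illustrative only.

ARRAY = "/dev/md2"
OLD_DISK = "/dev/sdb1"   # failing member
NEW_DISK = "/dev/sdb1"   # same slot after the physical hot-swap

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mark the failing member as faulty, then remove it from the array.
run("mdadm", "--manage", ARRAY, "--fail", OLD_DISK)
run("mdadm", "--manage", ARRAY, "--remove", OLD_DISK)

# ... physically hot-swap the drive and partition it to match ...

# Add the replacement; the array rebuilds onto it automatically.
run("mdadm", "--manage", ARRAY, "--add", NEW_DISK)
```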

We appreciate your patience during this time, rest assured we are doing all we can to ensure the integrity of your data.

Update: This has now been resolved. All customer data is intact, and we have fitted new drives to each half of the RAID10 array to restore full performance and redundancy.

Categories
Planned Maintenance

[Completed] Xen node reboots

Starting at 7PM GMT this evening we will be rebooting all Xen host machines to apply important security updates, as per the emails sent out yesterday.

Update (7:28 PM): All Xen host machines have been updated and rebooted successfully.

Categories
Outages

[Resolved] xn9 down

xn9.pcsmarthosting.co.uk is currently down and we’ve requested a KVM be attached so we can investigate further.

Update: A technician has apparently now been sent to the rack; we are still waiting for further updates!

Update 2: The system is hanging on startup due to block device (possibly RAID-related) issues. We are waiting on further information from the datacenter.

Update 3: Sorry for the delay, everyone. The datacenter is being extremely slow to act on anything at the moment. We are doing all we can.

Update 4: The server is now up and VPS are booting. Again, sorry for the delay on this.

Categories
Outages

[Resolved] Support Ticket Emails

We are aware of an issue that is causing emails sent to our ticket system to bounce back. We are currently investigating and hope to have it resolved shortly.

Update: This has now been resolved.

Categories
Outages

[Resolved] Web1 Issues

We are currently aware of an issue with our Web1 server and are investigating.

Updates will be provided here when we have them.

Update: This has now been resolved.