[Resolved] XN1 Outage

We are aware of an issue with our XN1 node. The cause has been identified and we are working to resolve it as quickly as possible.

Updates will be provided here.

Update @ 16:52: Data centre staff are progressing slowly. This is frustrating, but unfortunately there is nothing we can do to speed it up.

Update: This is now resolved.

[Resolved] xn3 Outage

xn3.pcsmarthosting.co.uk is currently having issues with its RAID controller, which we are working to resolve. Further updates will follow.

Update: We've traced this to a bug in the upstream Linux 4.9 kernel. The RAID controller and array appear to be healthy. We'll bring the server back up on a secondary kernel to restore service. Further investigations will be carried out in our test environment.
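
For illustration only, here is a rough sketch of how a host could be checked for the affected kernel series. This is a hypothetical Python helper, not our actual tooling; only the 4.9 series itself comes from the update above.

```python
#!/usr/bin/env python3
"""Warn if this host is running a kernel in the affected series.

Hypothetical helper: the 4.9 series is taken from the incident update,
but the check itself is only an illustrative sketch.
"""
import platform
import sys

AFFECTED_SERIES = ("4.9.",)  # kernel series reported as problematic

def main() -> int:
    release = platform.release()  # e.g. "4.9.65-xen"
    if any(release.startswith(series) for series in AFFECTED_SERIES):
        print(f"WARNING: running affected kernel {release}; "
              "consider booting the secondary kernel")
        return 1
    print(f"OK: kernel {release} is not in the affected series")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```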

[Resolved] XN17 – Planned Maintenance 2017/01/27 – 22:00 GMT

Update @ 21:55 – This has now begun.

Update @ 22:28 – This has now been completed and VPS are booting back up.

This is advance notification of upcoming maintenance on hostnode XN17; please see below for further details.

When is this happening?
It will begin on Friday, 27th January 2017 at 22:00 GMT.

How long will it take?
We hope this will take no longer than 60 minutes.

Is any downtime expected?
Yes. The server will need to be powered down, so downtime should be expected for the duration of the maintenance.

Updates will be posted here.

[Resolved] XN10 Outage

Our monitoring has made us aware of an issue with XN10 and we are currently investigating. Further updates will be provided here when we have them.

Update: This has now been resolved.

[Resolved] Web1 Read Only

Our web1 server has gone into a read-only state and our technicians are currently investigating.

Updates will be provided here when they’re available.

Update: Onsite staff are currently hooking up a crash cart to this machine.

Update: It appears a hard disk in the RAID10 array has failed and caused the controller to hang. This is rare, but it can happen. There are some filesystem inconsistencies we are working to repair, after which the server will be brought back online.

Update: The damage has been repaired and was not too severe. We are running a second pass now to be absolutely sure, and will then boot the server.

Update: Server is now back online and the RAID array is rebuilding.
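
For illustration, the read-only condition behind this incident is the sort of thing a simple scan of /proc/mounts can spot. Below is a rough Python sketch of such a check; it is a generic example for a Linux host, not our actual monitoring, and the list of filesystem types it ignores is an assumption.

```python
#!/usr/bin/env python3
"""Flag filesystems that have been remounted read-only.

Minimal monitoring sketch for a Linux host: parse /proc/mounts and
report any non-virtual filesystem mounted with the 'ro' option.
The set of filesystem types to skip is an assumption.
"""

SKIP_FSTYPES = {"proc", "sysfs", "tmpfs", "devtmpfs", "devpts", "cgroup",
                "cgroup2", "squashfs", "overlay", "debugfs", "securityfs"}

def read_only_mounts(path="/proc/mounts"):
    ro = []
    with open(path) as mounts:
        for line in mounts:
            device, mountpoint, fstype, options, *_ = line.split()
            if fstype in SKIP_FSTYPES:
                continue
            if "ro" in options.split(","):
                ro.append((device, mountpoint, fstype))
    return ro

if __name__ == "__main__":
    for device, mountpoint, fstype in read_only_mounts():
        print(f"READ-ONLY: {device} on {mountpoint} ({fstype})")
```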

[Resolved] VZ1 Outage

We are aware of an issue with our VZ1 node. We are investigating and hope to have service restored within the next 30 minutes.

Update: This has now been resolved and the machine is back online.

[Resolved] Billing and Ticketing DDoS Outage

We are currently facing a large DDoS attack on the offsite data centre we use for our billing and ticketing systems. The incident is being treated as priority 1 and we will post updates here as we get them.

We host our billing and ticketing systems in a separate data centre for redundancy, so that if our primary data centre were unavailable, access to billing and ticketing would be unaffected.

Update: This has now been mitigated and the issue resolved.

[Resolved] XN10 Unresponsive

XN10 has gone unresponsive and is being investigated. Given the history of recent outages on this machine, it is likely we will be replacing it in the next few minutes.

Updates will be provided here as usual.

Update: XN10 is now back online and all VPS are up and running. The move to new hardware has been postponed for a couple of days.

[Resolved] xn10 Down

xn10 has suffered a primary drive failure in the RAID10 array and is currently unbootable; it looks like everything in /boot is damaged.

We are currently waiting on remote hands to attach rescue media and will work to repair this and restore service as quickly as possible.

Update: The operating system is too badly damaged and will require a reinstall. Unfortunately, it also looks like two drives are failing in the same RAID1 set. We will reinstall and, to ensure complete recovery of customer data, may need to wait for the rebuild to complete before putting the server back in service. Further updates will follow.

Update 2 (17:45 GMT): The server has been reinstalled and we have been able to recover the RAID array; it is now rebuilding and currently at 60%. Once the rebuild finishes, we will reboot the server into Xen and run fsck against each VM individually, booting each one as soon as its check completes. When all VMs have been booted, SolusVM will be reconnected and we will make arrangements to hot-swap out the other failing drive.
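
For illustration, "run fsck against each VM individually" can look roughly like the sketch below on an LVM-backed Xen host. This is a hypothetical Python example, not our actual procedure; the volume group name vg_xen and the e2fsck options are assumptions, and checks must only be run on volumes whose guests are shut down.

```python
#!/usr/bin/env python3
"""Run a filesystem check over each guest volume before booting it.

Rough sketch only: assumes LVM-backed Xen guests with ext filesystems.
The volume group name and e2fsck flags are assumptions.
"""
import subprocess

VOLUME_GROUP = "vg_xen"  # hypothetical volume group holding guest disks

def guest_volumes(vg):
    """List logical volume paths in the given volume group via lvs."""
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_path", vg],
        check=True, capture_output=True, text=True,
    ).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

def check_volume(path):
    """Run a forced preen-mode e2fsck; exit codes 0/1 mean clean or fixed."""
    result = subprocess.run(["e2fsck", "-f", "-p", path])
    return result.returncode in (0, 1)

if __name__ == "__main__":
    for lv in guest_volumes(VOLUME_GROUP):
        status = "OK" if check_volume(lv) else "NEEDS ATTENTION"
        print(f"{lv}: {status}")
```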

Update 3 (18:05 GMT): RAID Rebuild at 72%

Update 4 (18:30 GMT): RAID Rebuild at 85%

Update 5 (18:50 GMT): RAID Rebuild at 95%

Update 6 (19:17 GMT): VMs are now being checked and booted one by one.

Update 7 (21:29 GMT): 50% of VMs are now online.
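
For anyone curious how rebuild percentages like those above can be read off, here is a minimal sketch assuming Linux software RAID (mdadm) exposed via /proc/mdstat; if the array sits behind a hardware controller instead, the vendor's own CLI is needed and this example does not apply.

```python
#!/usr/bin/env python3
"""Report software-RAID rebuild progress from /proc/mdstat.

Minimal sketch for Linux md arrays; hardware RAID controllers
expose progress through their own tools instead.
"""
import re

REBUILD_RE = re.compile(r"(recovery|resync)\s*=\s*([\d.]+)%")

def rebuild_progress(path="/proc/mdstat"):
    """Yield (percent, line) for any array currently recovering or resyncing."""
    with open(path) as mdstat:
        for line in mdstat:
            match = REBUILD_RE.search(line)
            if match:
                yield float(match.group(2)), line.strip()

if __name__ == "__main__":
    progress = list(rebuild_progress())
    if not progress:
        print("No rebuild in progress")
    for percent, detail in progress:
        print(f"{percent:.1f}% - {detail}")
```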

Because Uptime Matters