We are aware of an issue with our XN1 node; we have identified the cause and are working to resolve it as quickly as possible.
Updates will be provided here.
Update @ 16:52: The data centre staff are working through this slowly. Frustrating, but unfortunately there is nothing we can do to speed it up.
Update: This is now resolved.
xn3.pcsmarthosting.co.uk currently has an issue with its RAID controller, which we are working to resolve. Further updates will follow.
Update: We’ve traced this to a bug in the upstream Linux 4.9 kernel; the RAID controller and array appear to be healthy. We’ll bring the server back up on a secondary kernel to restore service, and further investigation will be carried out in our test environment.
We have been alerted to an issue with XN10 by our monitoring and are currently investigating. Further updates will be provided here when we have them.
Update: This has now been resolved.
Our web1 server has gone into a read-only state and our technicians are currently investigating.
Updates will be provided here when they’re available.
Update: Onsite staff are currently hooking up a crash cart to this machine.
Update: It appears a hard disk in the RAID10 array has failed and caused the controller to hang. This is rare, but it can happen. There are some filesystem inconsistencies we are working to repair, after which the server will be brought back online.
Update: The damage has been repaired and was not too severe; we are running a second pass now to be absolutely sure, and will then boot the server.
Update: Server is now back online and the RAID array is rebuilding.
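For anyone curious how rebuild progress is tracked: on a Linux software (md) RAID array, the kernel exposes it in /proc/mdstat, and it can be pulled out of that text with a one-line pattern match. This is an illustrative sketch only; the device names and figures below are invented, and a hardware RAID controller (as on web1) reports progress through its own vendor tools instead.

```python
import re

# Invented /proc/mdstat snapshot for illustration; real content varies
# by array layout and kernel version.
SAMPLE_MDSTAT = """\
md0 : active raid10 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      1953382400 blocks 64K chunks 2 near-copies [4/4] [UUUU]
      [=====>...............]  recovery = 28.5% (278409728/976691200) finish=95.1min speed=122368K/sec
"""

def rebuild_progress(mdstat_text):
    """Return the recovery percentage, or None if no rebuild is in progress."""
    match = re.search(r"recovery\s*=\s*([\d.]+)%", mdstat_text)
    return float(match.group(1)) if match else None

print(rebuild_progress(SAMPLE_MDSTAT))  # 28.5
```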
We are currently aware of an issue with our VZ1 node; we are investigating and hope to have service restored in the next 30 minutes.
Update: This has now been resolved and the machine is back online.
We are currently facing a large DDoS attack on the offsite DataCentre we use for our billing and ticketing systems. The incident is being treated as priority 1, and we will post updates here as we get them.
We host our billing and ticketing systems in a separate DataCentre for redundancy, so should our primary DataCentre be unavailable, access to your billing and ticketing systems would be unaffected.
Update: This has now been mitigated and the issue resolved.
XN10 has gone unresponsive and is being investigated. Given the history of recent outages on this machine, it is likely we will be replacing it in the next few minutes.
Updates will be provided here as usual.
Update: XN10 is now back online and all VPS up and running, the move to new hardware has been postponed for a couple of days.
xn10 has suffered a primary drive failure in the RAID10 array and is currently unbootable; it looks like everything in /boot is damaged.
We are currently waiting on remote hands to attach rescue media and will work to repair this and restore service as quickly as possible.
Update: The operating system is too badly damaged and will require a reinstall. Unfortunately, it also looks like two drives are failing in the same RAID1 set. We will reinstall, and may need to wait for the rebuild to complete before putting the server back into service, to ensure complete recovery of customer data. Further updates will follow.
Update 2 (17:45 GMT): The server has been reinstalled and we have been able to recover the RAID array; it is now rebuilding and currently at 60%. Once it finishes, we will reboot the server into Xen and run an fsck against each VM individually; as each check completes, that VM will be booted. When all VMs have been booted, SolusVM will be reconnected and we will make arrangements to hot-swap the other failing drive.
Update 3 (18:05 GMT): RAID Rebuild at 72%
Update 4 (18:30 GMT): RAID Rebuild at 85%
Update 5 (18:50 GMT): RAID Rebuild at 95%
Update 6 (19:17 GMT): VMs are now being checked and booted one by one.
Update 7 (21:29 GMT): 50% of VMs are now online.
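The check-then-boot loop described in update 2 can be sketched roughly as below. This is an illustrative outline only: the "vg0" volume group, the "<vm>-disk" naming scheme, and the idea of skipping VMs whose fsck fails are all invented assumptions, not our actual configuration or procedure.

```python
import subprocess

def fsck_command(vm_name, volume_group="vg0"):
    """Build the fsck command for one VM's disk volume.
    The volume group and naming scheme are hypothetical."""
    return ["fsck", "-y", f"/dev/{volume_group}/{vm_name}-disk"]

def check_and_boot(vm_names, run=subprocess.run):
    """Run fsck on each VM volume in turn; collect only the ones that pass."""
    booted = []
    for vm in vm_names:
        result = run(fsck_command(vm))
        if result.returncode == 0:  # filesystem clean (or repaired)
            booted.append(vm)       # real code would start the VM here
    return booted
```

Passing the `run` callable in as a parameter keeps the sketch dry-runnable: the loop can be exercised against a stub without touching any real block devices.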
We are currently experiencing an issue with this server, which we have narrowed down to a faulty Power Supply Unit (PSU); it is in the process of being replaced now. All updates will be provided here.
Update @ 15:51 GMT: This is proving to be more than a simple PSU swap; the onsite team are still working on the machine and hope to have it back online within the next 60 minutes.
Update @ 17:07 GMT: An update from the onsite staff: it seems there has been a short inside the machine which may have destroyed multiple hardware components. We are still investigating, and there is currently no ETA for when this machine will be back online, but once we reach a conclusion we will of course update here.
Update @ 02:00 GMT: This has now been resolved after multiple hardware failures caused by a faulty Power Supply Unit.
We are currently facing an issue with XN17 which is affecting connectivity to all VPS on that node. We are working with onsite staff to resolve this as quickly as possible, and updates will be posted here as soon as we have them.
Update: This has now been resolved.