[Resolved] Node1 – Drive Replacement

We are currently experiencing a drive failure in Node 1; the failed drive will be replaced shortly.

No downtime was experienced.

Update: The RAID array has been rebuilt and performance is back to normal.

Update: The drive has been replaced and the RAID array is rebuilding; you may experience slower than normal disk I/O during this time.
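For anyone wanting to watch a rebuild like this, progress on a Linux software RAID (md) array is reported in /proc/mdstat; a minimal sketch of pulling the completion percentage out of that output (the sample line below is illustrative, not from Node 1, and a hardware RAID controller would instead be queried with its own CLI tool):

```shell
# Illustrative /proc/mdstat recovery line (hypothetical values, not from Node 1);
# on a live host you would read it with: cat /proc/mdstat
line='[=======>.............]  recovery = 42.5% (312345600/734003200) finish=98.3min speed=71400K/sec'

# Extract the completion percentage with awk
pct=$(printf '%s\n' "$line" | awk -F'recovery = ' '{split($2, a, " "); print a[1]}')
echo "Rebuild progress: $pct"   # prints: Rebuild progress: 42.5%
```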

[Resolved] node1 down

We are aware that node1.pcsmartgroup.com is currently down. The server is showing no link on its network interfaces, so we suspect a switch/cable issue. We are contacting the datacentre and will provide updates ASAP.

Update: We are waiting for remote hands to check the server.

Update 2: This was caused by a faulty network cable, which has been replaced.

[COMPLETE] Planned VPS Maintenance

As per the emails sent earlier this week, we are carrying out maintenance on our Xen/OpenVZ VPS infrastructure on 28/06/2019, starting at 20:00 UTC. During this work your VPS will be rebooted.

We will update this page if we encounter any issues, and when this work is completed.

Update 20:40 UTC – This work is now complete across all servers.

Thank You

[Resolved] Support system down – license issue

We are currently facing an issue with the license for our support system. We are in contact with their support team; however, things are taking longer to resolve than we would like.

We can only apologise for this; as we hope you can appreciate, it is completely outside our control.

All updates will be posted here.

This has now been resolved.

Update – You can now submit tickets via our billing system: simply log in to your account and click Open Ticket in the menu bar.

Update – We are currently setting up a temporary support system as we’re not getting a response regarding the license issue.

[Resolved] Xen VPS Performance Issues

We are aware of performance issues with our Xen VPS, introduced by security patches applied yesterday. You may see increased load averages and general sluggishness inside your VPS, caused by CPU steal.
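CPU steal of this kind is visible from inside each guest; a minimal sketch, reading the aggregate "cpu" line that /proc/stat exposes (the sample values below are made up for illustration):

```shell
# Hypothetical /proc/stat "cpu" line (fields after "cpu": user nice system idle
# iowait irq softirq steal); on a live VPS: grep '^cpu ' /proc/stat
cpuline='cpu  10132153 290696 3084719 46828483 16683 0 25195 175688'

# Steal (field 9) as a share of all jiffies; a high value means the hypervisor
# is giving this VPS's physical CPU time to other guests
st=$(printf '%s\n' "$cpuline" | awk '{t=0; for(i=2;i<=NF;i++) t+=$i; printf "%.2f", $9*100/t}')
echo "steal: ${st}%"   # prints: steal: 0.29%
```

The same figure appears as `%st` in `top`.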

We have a fix for this and will be rebooting each VPS node shortly to apply it. A further update will be posted here when the work is complete.

Update: As of 19:00 UTC, this is completed and all VPS are operating normally.

[Complete] Essential VPS Maintenance 23/02/2019

We are currently carrying out essential unplanned maintenance on our VPS infrastructure, to address recently released security vulnerabilities. This will involve a reboot of all VPS host machines.

We apologise for the lack of notice, however this is for the benefit of all customers to ensure the ongoing security and stability of our infrastructure.

This is completed.

[Resolved] xn9 outage

xn9 appears to have gone unresponsive; we are currently waiting on remote hands to check the server.

Update 1: It appears the underlying partition of the LVM volume group containing VPS filesystems has gone away, despite the RAID controller, drives and array being healthy. We are currently investigating recovery opportunities.

Update 2: We have been able to manually reconstruct the underlying partition and LVM metadata; however, after several attempts we have been unable to assemble it in a way that makes the VM filesystems accessible. The root cause of the partition's disappearance is unclear; we suspect the size of the volume may have changed due to a bug/defect within the RAID controller. With further examination we may be able to recover complete or partial data, but we cannot make any guarantees, and at this time no data is available. If any data is of particular importance to you and you can provide the complete filename, we will do our best to recover it through alternative methods.
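For context, recovery of this kind typically leans on the metadata backups LVM keeps automatically under /etc/lvm/archive/. A hedged sketch of the usual sequence; the device name, VG name, and archive file below are placeholders for illustration, not the actual values from xn9:

```shell
# Sketch only (placeholder names, not xn9's real devices): recreate the lost PV
# with its original UUID, then restore the VG metadata from LVM's archive copy.
pvcreate --uuid "<PV-UUID-from-archive-file>" \
         --restorefile /etc/lvm/archive/vg_vps_00042.vg /dev/sda3
vgcfgrestore -f /etc/lvm/archive/vg_vps_00042.vg vg_vps
vgchange -ay vg_vps   # activate the logical volumes if the restore succeeds
```

This only helps when the LVM metadata itself is intact in the archive; it cannot repair filesystem damage above it, which is consistent with the outcome described here.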

We will now begin a recovery operation to re-create the VPS hosted on XN9 on alternative host machines. For managed servers this will include restoration of backups where available.

We sincerely apologise for this inconvenience and will continue to work with our customers to restore service as quickly as possible.

Update 3: All VPS have been migrated to alternate hardware.

[Resolved] XN10 Packet loss

Starting at approximately 22:20 GMT+1, xn10 was experiencing high packet loss. Due to limited access to the server, we gracefully shut down all VMs and brought them back up to make some configuration changes. Apologies for the reboot.

We have now identified that one VM was the target of a low-bandwidth, high-concurrency DoS attack. The target address has been null routed and we continue to monitor.
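Null routing a DoS target like this is normally a one-line routing change on the host or an upstream router; a sketch, using a documentation-range IP rather than the real VM's address:

```shell
# Sketch only: blackhole all traffic to the attacked address. 203.0.113.10 is a
# placeholder from the TEST-NET-3 documentation range, not the real VM's IP.
# Requires root on the routing host.
ip route add blackhole 203.0.113.10/32

# Confirm the blackhole route is installed
ip route show 203.0.113.10/32
```

The trade-off is that the attacked address goes fully offline, but the rest of the host's VMs stop absorbing the attack traffic.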

Because Uptime Matters