PBX / Trunk Status
Affected PBX: PBX2 – Hardware and software upgrade
May 26, 2020, 01:00 PDT
PBX2 was offline from 01:00 to 02:00 to run a capture. A new server was set up and the capture was applied to it.
June 6, 2020. 05:00 PDT
05:00 – The old server was decommissioned and replaced by a new server. All recordings, including greetings, voicemails, and announcements, were transferred to the new server.
06:00 – All extensions are online and registered, but only extensions configured for chan_sip are operational.
11:00 – The issue with chan_pjsip was resolved. All extensions are operational (see the registration check sketched after this timeline).
14:00 – UCP stopped working due to a conflict. The issue was resolved by 16:00.
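For reference, registration for both channel drivers can be confirmed from the Asterisk CLI. The sketch below is a minimal, hypothetical illustration assuming a standard Asterisk/FreePBX install with the asterisk binary on the PATH; it is not necessarily the exact procedure used during this maintenance.

    #!/usr/bin/env python3
    # Minimal sketch: list chan_sip and chan_pjsip registrations after a cutover.
    # Assumes a local Asterisk/FreePBX install; run as a user allowed to reach
    # the Asterisk control socket (typically root or the asterisk user).
    import subprocess

    def asterisk_cli(command: str) -> str:
        # Run a single Asterisk CLI command and return its text output.
        return subprocess.run(
            ["asterisk", "-rx", command],
            capture_output=True, text=True, check=True,
        ).stdout

    if __name__ == "__main__":
        # chan_sip extensions (the ones that stayed operational at 06:00).
        print(asterisk_cli("sip show peers"))
        # chan_pjsip extensions (the ones restored by 11:00).
        print(asterisk_cli("pjsip show endpoints"))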
June 8, 2020. 08:00 PDT
FOP support was able to transfer the license by 08:30. FOP2 was configured and online by 10:00. New passwords were created for all users.
Affected PBX: PBX4 – Hosting Issue
July 30, 2019. 09:57 CDT
The hosting engineers observed an audible server alarm. Our initial assessment determined that a member of this server's 8-member RAID-10 array had been marked "Missing" by the controller. Initial efforts were made to physically identify the slot containing the missing drive; however, this proved inconclusive, and the decision was made to perform emergency maintenance to positively identify and replace the drive while the node was powered off, to avoid the risk of data loss. To facilitate this, starting at 10:29 CDT our engineers suspended the virtual machines as a protective measure, both to force an ACPI shutdown and to prevent them from being rebooted during the node shutdown, which could cause data corruption. The server was shut down, the drive was identified and replaced with a new drive, and the node was back online with all VMs by 11:24 CDT. The entire maintenance window was 55 minutes, though actual downtime was likely less than that.
Update July 30, 2019. 18:22 CDT
The server operating system re-mounted the file system as read-only and the rebuild process abruptly stopped. An engineer was dispatched to assess the situation and found the RAID device was reporting multiple missing members and showing offline. The RAID was brought online with all missing members replaced, and the node booted cleanly. The rebuild was verified and monitored until it once again stopped at 00:02 on 31 July 2019. At this point, the decision was made to attempt to rebuild the RAID without the overhead of the host operating system and virtual machines. The rebuild appeared more successful; however, it would fail at approximately the 2.5-hour mark.
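For context, a rebuild that stalls like this is usually caught by polling the array status. The sketch below is a hypothetical monitor assuming a Linux software-RAID (md) array; the node in this incident may have used a hardware controller, in which case the vendor's CLI would replace the /proc/mdstat read.

    #!/usr/bin/env python3
    # Minimal sketch: poll a software-RAID rebuild and flag it if it stalls.
    # Assumption: Linux md RAID exposing progress in /proc/mdstat; a hardware
    # controller would require the vendor's CLI instead.
    import re
    import time

    def rebuild_progress():
        # /proc/mdstat reports lines such as: "recovery = 41.3% (...)".
        with open("/proc/mdstat") as f:
            match = re.search(r"(recovery|resync)\s*=\s*([\d.]+)%", f.read())
        return float(match.group(2)) if match else None

    if __name__ == "__main__":
        last = None
        while True:
            pct = rebuild_progress()
            if pct is None:
                print("No rebuild in progress (finished or stopped).")
                break
            print(f"Rebuild at {pct}%" + (" (stalled)" if pct == last else ""))
            last = pct
            time.sleep(300)  # re-check every five minutes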
Update July 31, 2019. 10:10 CDT
The decision was made to switch to recovery efforts, and the first step was moving the node's IPs so they could be provisioned on new VMs.
Update Aug. 2, 2019, 16:48 EST
PBX4 has been online and stable for more than 48 hours. Since the server was fully restored, we anticipated that some features might not be operational. Please bear with us while we continue troubleshooting. FOP2 is back online; the issue was due to the license requiring the old MAC address. The Address Book for Cisco 79xx phones is fixed; the issue was due to the new MySQL installation having new credentials.
Update Aug 9, 2019. (VA Ticket 176)
A client reported that Phone Apps (Transfer to VM) is not working. Phone Apps are only available on Sangoma phones. After further troubleshooting, we discovered that all of the Phone Apps are unavailable. The issue was escalated because it requires developer support, but unfortunately developers are not available on the weekend.
Update Aug 13, 2019.
The developer resolved the issue with the Phone Apps (VA #176 / Sangoma #915209) and the User Control Panel. The issue was due to MySQL tables with mixed versions resulting from the data restore. All known issues are now resolved.
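For reference, mixed-version tables left behind by a restore can be spot-checked directly against the database. The sketch below is a hypothetical check assuming a stock FreePBX-style install with the default "asterisk" database; the specific tables behind VA #176 are not documented here.

    #!/usr/bin/env python3
    # Minimal sketch: run mysqlcheck against the restored database to surface
    # corrupt or inconsistent tables. Assumes credentials are available in
    # ~/.my.cnf (or supplied interactively) and the database is named "asterisk".
    import subprocess

    def check_database(database: str = "asterisk") -> None:
        # mysqlcheck --check walks every table in the database and reports problems.
        subprocess.run(["mysqlcheck", "--check", "--verbose", database], check=True)

    if __name__ == "__main__":
        check_database()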
Affected PBX: PBX2 and PBX3
Feb 1, 2018, 07:00 PST
We are performing a hardware refresh that requires PBX2 and PBX3 to be migrated to a new host node. Replacing servers and other critical hardware allows us to deploy updated equipment intended to improve reliability, enable new and anticipated capabilities, and minimize potential downtime. The migration process does not require any action on your part; however, the PBX will be inaccessible during the migration.
Schedule:
- PBX2: vps1468029953 [Est. Migration Time: 90 Minutes on Feb 18, 2018 between 5am and 8am Central US Time]
- PBX3: vps1455915753 [Est. Migration Time: 90 Minutes on Feb 25, 2018 between 5am and 8am Central US Time.]
Update Feb 18, 2018, 06:55 EST
The migration of PBX2: vps1468029953 has been completed and the server is back online.
Update Feb 25, 2018, 07:18 EST
The migration of PBX3: vps1455915753 has been completed and the server is back online.
Affected PBX: PBX4
October 4, 2017, 15:52 EST
VoipMS Chicago Trunk experienced issues that caused the Chicago servers to be unreachable for a brief period of time. The Chicago data center and servers are now reachable; however, our team is working to resolve several service issues that surfaced due to this event. You may experience slow response from the customer portal and/or problems connecting calls. The engineering team is putting all effort into resolving this matter as soon as possible. We sincerely apologize for any inconvenience this may cause.
Update Oct. 4, 2017, 16:02 EST
The current issue with calls and registration is affecting more than just the Chicago Servers.
Update Oct. 4, 2017, 17:03 EST
Calls are connecting successfully now. Our team is still working on the overall performance and responsiveness of the customer portal and servers.
Update Oct. 4, 2017, 17:58 EST
All services are now 100% functional. Our team is working to apply an additional set of backup measures that will mitigate and prevent issues should an event like this happen again.
Affected Server: PBX2 Incoming and Outgoing. PBX4 (Backup Trunk Only)
August 11, 2017, 0:00 EST
On August 11th, VOIPMS (Trunk) will perform a short maintenance on its Los-Angeles server, from 00:00 to 00:30 EDT. During this maintenance window, the Los-Angeles server will be offline for no more than 30 minutes. The other Los-Angeles 2 server is unaffected by this maintenance. We apologize for any inconvenience this maintenance may cause.
Update Aug. 11, 2017, 0:18 EST
This maintenance has been performed as scheduled. Thank you for your understanding and patience.
Affected PBX: PBX4
June 14, 2017, 15:45 CDT
On Wednesday, June 14th, 2017, CyberLynk's Milwaukee Datacenter (MKE1) suffered a large-scale distributed denial-of-service (DDoS) attack. This attack's sole purpose was to block legitimate traffic to our network. There were no breaches of data or customer information. The attack saturated all five (5) bandwidth providers we have in our Milwaukee Datacenter (MKE1): ATT, NTT, Cogent, Level3, and Spectrum.
At approximately 3:45 PM CDT (GMT-5), CyberLynk's monitoring systems detected the attacks and within minutes notified our NOC engineers. The attack seemed to affect only certain racks and servers within our Milwaukee Datacenter. Engineers began working on mitigation and blocking incoming DDoS traffic.
At 4:04 PM CDT (GMT-5), connectivity to certain network segments in our Milwaukee Datacenter (MKE1) appeared to be restored. At this time, certain customers were able to reach their servers again. Certain parts of the Milwaukee Datacenter still had intermittent connectivity issues while our engineers continued to mitigate additional DDoS attacks and high traffic loads across parts of the network.
Between 4:12 PM and 6:08 PM CDT (GMT-5), the DDoS attacks escalated to over 2,240 unique IP addresses from all over the world attacking a handful of servers throughout our Milwaukee Datacenter. CyberLynk NOC engineers engaged all of our upstream bandwidth providers to assist in blocking the additional DDoS traffic.
As of 6:26 PM CDT (GMT-5), connectivity to our Milwaukee Datacenter (MKE1) was stable again. Depending on the path your traffic takes across the Internet to our Milwaukee Datacenter, you may have seen your service restored around 6:48 PM CDT.