We've confirmed that all systems are back to normal with no customer impact as of 05/04, 01:30 UTC. Our logs show the incident started on 05/03, 22:00 UTC, and that during the 3 hours and 30 minutes it took to resolve the issue, customers in the West Central US region experienced issues loading maps in the Azure and OMS portals.
- Root Cause: The root cause has been isolated to a broader Azure network and storage issue in the West Central US region.
- Incident Timeline: 3 Hours & 30 minutes - 05/03, 22:00 UTC through 05/04, 01:30 UTC.
We understand that customers rely on Service Map as a critical service and apologize for any impact this incident caused.
The root cause has been isolated to an Azure network issue that was impacting maps in the Azure and OMS portals. The network issue has been mitigated in the West Central US region, and the data queued up during the impact is now being processed. Some customers may experience errors while loading maps in the portals; we estimate another 3 hours before all data is available in the portals.
- Work Around: None
- Next Update: Before 05/04 04:30 UTC
We are aware of issues within Service Map and are actively investigating. Customers might not be able to load maps in the Azure and OMS portals and may see the error message 'Oops... looks like we couldn't get the data'. Impact is limited to the West Central US region.
- Work Around: None. Network metrics data will not be lost. Once the network issue is resolved, the queued-up data will be accessible.
- Next Update: Before 05/04 01:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
This post first appeared on MSDN Blogs.