Instant Cloud Connectivity Issues in AWS Regions
Incident Report for Gurobi Optimization
Postmortem

Here are the updates from AWS:

[9:37 AM PST] We are seeing impact to multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. We have identified the root cause and are actively working towards recovery.

[10:12 AM PST] We are seeing impact to multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. We have identified the root cause of the issue causing service API and console issues in the US-EAST-1 Region, and are starting to see some signs of recovery. We do not have an ETA for full recovery at this time.

[11:26 AM PST] We are seeing impact to multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. Services impacted include EC2, Connect, DynamoDB, Glue, Athena, Timestream, Chime, and other AWS services in US-EAST-1. The root cause of this issue is an impairment of several network devices in the US-EAST-1 Region. We are pursuing multiple mitigation paths in parallel, and have seen some signs of recovery, but we do not have an ETA for full recovery at this time. Root logins for consoles in all AWS regions are affected by this issue; however, customers can log in to consoles other than US-EAST-1 by using an IAM role for authentication.

[12:34 PM PST] We continue to experience increased API error rates for multiple AWS Services in the US-EAST-1 Region. The root cause of this issue is an impairment of several network devices. We are actively working on a number of different mitigation and resolution actions. While we have observed some early signs of recovery, we do not have an ETA for full recovery. For customers experiencing issues signing in to the AWS Management Console in US-EAST-1, we recommend retrying using a separate Management Console endpoint (such as https://us-west-2.console.aws.amazon.com/). Additionally, if you are attempting to log in using root credentials you may be unable to do so, even via console endpoints not in US-EAST-1. If you are impacted by this, we recommend using IAM Users or Roles for authentication. We will continue to provide updates here as we have more information to share.

[2:04 PM PST] We have executed a mitigation which is showing significant recovery in the US-EAST-1 Region. We are continuing to closely monitor the health of the network devices and we expect to continue to make progress towards full recovery. We still do not have an ETA for full recovery at this time.

[2:43 PM PST] We have mitigated the underlying issue that caused some network devices in the US-EAST-1 Region to be impaired. We are seeing improvement in availability across most AWS services. All services are now independently working through service-by-service recovery. We continue to work toward full recovery for all impacted AWS Services and API operations. To expedite overall recovery, we have temporarily disabled Event Deliveries for Amazon EventBridge in the US-EAST-1 Region. These events will still be received, accepted, and queued for later delivery.

[3:03 PM PST] Many services have already recovered; however, we are working towards full recovery across services. Services like SSO, Connect, API Gateway, ECS/Fargate, and EventBridge are still experiencing impact. Engineers are actively working on resolving impact to these services.
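
For reference, the sign-in workaround AWS describes in the 12:34 PM update (authenticate with an IAM role rather than root credentials, and target an endpoint outside US-EAST-1) can also be exercised programmatically. A minimal boto3 sketch; the account ID and role name are placeholders, not values from this incident:

    import boto3

    # Pin the STS client to a region other than the impaired us-east-1.
    sts = boto3.client("sts", region_name="us-west-2")

    # Assume an IAM role instead of relying on root sign-in, which was impaired globally.
    # The account ID and role name are placeholders.
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ExampleOpsRole",
        RoleSessionName="us-east-1-incident-workaround",
    )["Credentials"]

    # Use the temporary credentials for subsequent API calls in an unaffected region.
    ec2 = boto3.client(
        "ec2",
        region_name="us-west-2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(ec2.describe_regions()["Regions"][:1])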

Posted Dec 07, 2021 - 16:18 PST

Resolved
AWS has resolved the underlying issue.
Posted Dec 07, 2021 - 16:17 PST
Update
AWS has identified impact to multiple AWS APIs in the US-EAST-1 Region. A root cause has been identified, and AWS is actively working towards recovery.
Posted Dec 07, 2021 - 11:47 PST
Update
AWS has identified elevated error rates for EC2 APIs and console issues in the US-EAST-1 Region. A root cause has been identified, and AWS is actively working towards recovery.
Posted Dec 07, 2021 - 09:00 PST
Update
AWS has identified elevated error rates for EC2 APIs and console issues in the US-EAST-1 Region. A root cause has been identified, and AWS is actively working towards recovery.
Posted Dec 07, 2021 - 08:56 PST
Update
We are currently experiencing connectivity issues in the AWS us-east-1 Region, starting at 7:30 AM PST. We strongly recommend that customers use Azure until this issue is resolved. We appreciate your patience.
Posted Dec 07, 2021 - 08:27 PST
Investigating
We are currently experiencing connectivity issues in the AWS us-east-1 Region, starting at 7:30 AM PST. We strongly recommend that customers use Azure until this issue is resolved. We appreciate your patience.
Posted Dec 07, 2021 - 08:25 PST
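
For Instant Cloud users acting on the recommendation above, moving off the affected AWS region generally means pointing the client at a pool that is configured for a different region (for example, an Azure region) in the Instant Cloud Manager. A minimal gurobipy sketch under that assumption; the access ID, secret key, and pool name below are placeholders:

    import gurobipy as gp

    # Start an environment that targets a specific Instant Cloud pool.
    # The pool's region (e.g. an Azure region) is chosen in the Instant Cloud Manager,
    # not in code; the credentials and pool name here are placeholders.
    env = gp.Env(empty=True)
    env.setParam("CloudAccessID", "YOUR-ACCESS-ID")
    env.setParam("CloudSecretKey", "YOUR-SECRET-KEY")
    env.setParam("CloudPool", "azure-fallback-pool")  # a pool not hosted in AWS us-east-1
    env.start()

    # Build and solve a small model on the machine launched from that pool.
    model = gp.Model("smoke-test", env=env)
    x = model.addVar(name="x")
    model.setObjective(x, gp.GRB.MAXIMIZE)
    model.addConstr(x <= 1)
    model.optimize()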
This incident affected: Regions (Gurobi AWS us-east-1) and Instant Cloud Provisioning - Global.