Around 1930Z we began to see task launches recover and have been able to return to normal operating compute levels. We expect backlog processing to continue over the next several hours to bring reporting and check reopening back to real-time. We are not forcing this any faster because some AWS-imposed API rate limits on initiating tasks remain in place.
Thank you for your continued diligence in limiting non-essential platform use so that we can preserve capacity for critical business operations until this event is fully recovered and closed.
Expect an additional update around 0130Z on 2025-10-21, provided things continue to progress smoothly into the evening.
We are continuing to see platform impact from the inability to scale tasks during the AWS event, and we anticipate that reporting delays will continue to grow. There is currently about a 1.5-hour backlog in message processing before check close events become available in the reporting API and through the POS check search.
Generally, all other services are processing normally. We appreciate your patience and continue to ask that you limit major configuration or menu changes while scaling capacity is constrained.
We will continue to provide status updates as new information becomes available.
As expected, without the ability to scale during the midday peak, we are beginning to see latency between real-time check processing and reporting. The delay is currently about 25 minutes; all queues are still processing, though at a reduced rate.
Qu customers can expect a lag in real-time data availability from our APIs and an impact on check reopening via the POS terminal check search as this event continues to unfold.
We are continuing to monitor systems as this becomes a multi-hour event. We continue to see good platform availability even though the ability to launch tasks is still impaired.
Again, we ask for your patience and diligence, and that you refrain from any major configuration or menu changes while this event is underway.
New task and capacity launches are still impacted; however, our platform remains available. We are still experiencing reduced capacity in some services.
Because task launches are impacted, there may be some delay in DSP store status updates, out-of-stock updates, and menu updates; however, order processing should remain unaffected, at least from a Qu perspective. We ask that you refrain from any major menu updates during this time to simplify operations.
We continue to experience issues launching tasks. If the situation at AWS is not resolved within the next 1-2 hours, Qu customers may begin to see delays in reporting data as daily volume picks up and we remain unable to scale due to AWS control plane issues.
We will continue to update hourly as this event progresses.
We continue to see recovery in our platform, as AWS has reported restoration of most major services. We are still seeing issues launching some tasks, but this should not be affecting overall platform function at this time; we will continue to monitor the situation.
Expect another update within 1 hour.
We are currently monitoring a service event at AWS and its recovery. While our services remain generally available, we are seeing some degradation in real-time message processing and in dynamically launched tasks.
Expect another status update within 1 hour.