Rate limits
Pluggy's API implements a rate limiter to maximize its stability when dealing with large bursts of incoming requests. The rate limiter counts the requests made from the same IP to a particular endpoint within one minute; if that count exceeds the maximum allowed, the API returns a 429 error.
Rate limits per endpoint
The following rate limits apply in Pluggy API:
| Endpoint | Max requests per minute per IP |
| --- | --- |
| `POST /auth` | 360 |
| `GET /transactions` or `GET /transactions/{id}` | 360 |
| `GET /investments` or `GET /investments/{id}` | 360 |
| `GET /investments/{id}/transactions` | 360 |
| `PATCH /items` | 20. This limit is meant for user-triggered updates; if you need item updates on a daily basis, you must use our auto-sync feature. |
Each limit is applied independently of the others; for example, you can reach the limit on `POST /auth` and still be able to call `GET /transactions`. If a limit spans more than one endpoint, requests to either endpoint count towards it.
Handling rate limiting errors
When you exceed the max requests per minute for an endpoint, you will get a `429 Too Many Requests` error:
```json
{
  "message": "Too many requests. Please try again later (see Retry-After header in seconds)",
  "code": 429
}
```
Any subsequent requests will fail with the same error until the rate limiter counter resets after one minute.
More precise information is given in the response headers:
```
{
  "RateLimit-Limit": "360",  // The max requests per minute for this endpoint
  "RateLimit-Reset": "45",   // How many seconds remain until the limit resets and you can request the endpoint again
  "Retry-After": "60"        // Standard field for HTTP client retry behaviour. Always returns 60.
}
```
By reading these headers, you can handle this scenario by waiting `RateLimit-Reset` seconds and then retrying the request.
Some HTTP clients, such as got, come with standard retry behavior that reads the `Retry-After` header when they receive a `429` response and waits that many seconds before trying again, which also works.
I keep hitting the limit!
If you are repeatedly hitting the rate limit for an endpoint, make sure to check the following:
- If you are doing some sort of batch process, make sure that you don't have too many parallel invocations of the same endpoint, and try adding waits between each call to avoid flooding the API too fast.
- If you reach the limit during normal operation of your application, make sure you are not duplicating requests by mistake and that you are properly reusing API keys between calls, to avoid hitting the `/auth` limit (see the sketch after this list).
- If you still require a higher rate of invocation than what we allow for your application to work, please contact our support team to discuss your particular use case.
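As a sketch of the API key reuse mentioned in the second point, you can cache the key and only authenticate again when it is missing or rejected, instead of calling `POST /auth` before every request. The environment variable names and the `apiKey` response field below are assumptions; adjust them to your own credentials handling.

```typescript
// Minimal sketch of reusing the API key between calls instead of hitting
// POST /auth before every request. The env var names and the apiKey
// response field are assumptions; adjust to your own setup.
let cachedApiKey: string | null = null;

async function getApiKey(): Promise<string> {
  if (cachedApiKey) {
    return cachedApiKey;
  }

  const response = await fetch("https://api.pluggy.ai/auth", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      clientId: process.env.PLUGGY_CLIENT_ID,
      clientSecret: process.env.PLUGGY_CLIENT_SECRET,
    }),
  });

  const { apiKey } = (await response.json()) as { apiKey: string };
  cachedApiKey = apiKey;
  return apiKey;
}

// If a request is rejected because the key expired, clear the cache so the
// next call to getApiKey() authenticates again.
function invalidateApiKey(): void {
  cachedApiKey = null;
}
```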