Rate Limits
Gainly uses rate limits to ensure stable performance and fair usage of the API. Rate limits set the maximum number of API requests you can make over a given time period.
If you exceed the rate limit, you will receive a `429` error response.
Default Rate Limits
Rate limits are applied per API key. The default rate limits are as follows:
| Plan | Mode | Requests per minute (per API key) |
| --- | --- | --- |
| Free | Test | 12 |
| Free | Live | 12 |
| Paid | Test | 12 |
| Paid | Live | 1,200 to 12,000 |
If you need higher rate limits, please contact us with an explanation of your use case. We're able to increase them on a case-by-case basis.
Common Causes
Common reasons you might hit rate limits:
- Making too many requests in quick succession, such as through a loop
- Running multiple concurrent processes that share the same API key
- Testing or debugging scripts without rate limiting controls
- Infinite loops or bugs in code causing unintended API calls
To avoid rate limits, consider:
- Adding delays between requests when processing large datasets through a loop (see the sketch after this list)
- Following best practices for handling rate limits, as described below
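For example, here is a minimal sketch of adding a delay between requests in a loop. The base URL, endpoint path, API key, and delay value are placeholders for illustration, not documented Gainly values:

```python
import time

import requests

API_KEY = "YOUR_API_KEY"                 # placeholder
BASE_URL = "https://api.example.com/v1"  # placeholder base URL, not a documented Gainly endpoint

def fetch_all(record_ids, delay_seconds=1.0):
    """Fetch records one at a time, pausing between requests to stay under the per-minute limit."""
    results = []
    for record_id in record_ids:
        response = requests.get(
            f"{BASE_URL}/records/{record_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        results.append(response.json())
        time.sleep(delay_seconds)  # spread requests out instead of sending them in a burst
    return results
```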
Rate Limit Headers
The following headers are returned along with a `429` error response:
| Header | Description |
| --- | --- |
| `retry-after` | The number of seconds to wait before making a new API request. |
| `x-ratelimit-limit` | The maximum number of requests allowed per minute. |
| `x-ratelimit-remaining` | The number of requests remaining in the current rate limit window. |
| `x-ratelimit-reset` | The number of seconds until the rate limit resets. |
| `x-ratelimit-window` | The length of the rate limit window. |
How to Handle Errors
If you receive a `429` error response, you should wait for the `retry-after` period before making a new API request.
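As a minimal sketch of this basic handling (the base URL, endpoint path, and API key are placeholders, not documented Gainly values):

```python
import time

import requests

API_KEY = "YOUR_API_KEY"                 # placeholder
BASE_URL = "https://api.example.com/v1"  # placeholder base URL, not a documented Gainly endpoint

response = requests.get(f"{BASE_URL}/records/123", headers={"Authorization": f"Bearer {API_KEY}"})

if response.status_code == 429:
    # Wait for the number of seconds the API asked for, then make the request again.
    time.sleep(int(response.headers["retry-after"]))
    response = requests.get(f"{BASE_URL}/records/123", headers={"Authorization": f"Bearer {API_KEY}"})
```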
Best Practices
- Watch for `429` errors and build in retry logic
- Use the `retry-after` header to determine the minimum wait time before retrying
- Implement exponential backoff for retries to handle high-traffic situations (see the sketch after this list):
  - Start with the `retry-after` value
  - Double the wait time for each subsequent retry
  - Set a maximum number of retry attempts
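Here is a minimal sketch of this retry pattern with exponential backoff. The base URL, endpoint path, API key, function name, and retry count are placeholders chosen for illustration:

```python
import time

import requests

API_KEY = "YOUR_API_KEY"                 # placeholder
BASE_URL = "https://api.example.com/v1"  # placeholder base URL, not a documented Gainly endpoint

def get_with_retries(path, max_retries=5):
    """GET a resource, retrying on 429 with exponential backoff seeded by the retry-after header."""
    wait_seconds = None
    for attempt in range(max_retries + 1):
        if wait_seconds is not None:
            time.sleep(wait_seconds)
        response = requests.get(
            f"{BASE_URL}{path}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        if response.status_code != 429:
            return response
        if wait_seconds is None:
            # Start with the retry-after value...
            wait_seconds = int(response.headers.get("retry-after", "1"))
        else:
            # ...then double the wait time for each subsequent retry.
            wait_seconds *= 2
    raise RuntimeError(f"Still rate limited after {max_retries} retries")
```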
In addition to implementing these best practices at the client level, you can also implement global rate limiting at the server level using approaches like the token bucket algorithm. Token bucket implementations are available in most major programming languages.
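As an illustration, here is a minimal single-process token bucket sketch (not thread-safe; the capacity and refill rate are example values, not Gainly-specific):

```python
import time

class TokenBucket:
    """Simple token bucket: allows bursts up to `capacity`, refills at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)  # wait until the next token is due

# Example: roughly 12 requests per minute shared by all callers in this process.
bucket = TokenBucket(capacity=1, rate=12 / 60)
# bucket.acquire()  # call before each API request
```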