Requests are managed on a fixed quota system with different limits based on your server configuration.

Standard Rate Limits

All servers receive a base allocation of API requests with standard throttling policies.
Header                | Value | Meaning
X-RateLimit-Limit     | 1000  | Maximum requests allowed per hour
X-RateLimit-Remaining | 847   | Requests remaining in current window
X-RateLimit-Used      | 153   | Requests consumed in current window
X-RateLimit-Reset     | 3600  | Seconds until rate limit window resets
Retry-After           | 1     | Seconds to wait before next request (when rate limited)
Rate limits are calculated per server on every request. Each server operates independently with its own quota allocation.

Premium Rate Limits

Servers with premium subscriptions receive enhanced rate limits and priority processing.
Header                | Value | Meaning
X-RateLimit-Limit     | 5000  | Maximum requests allowed per hour
X-RateLimit-Remaining | 4723  | Requests remaining in current window
X-RateLimit-Used      | 277   | Requests consumed in current window
X-RateLimit-Reset     | 2890  | Seconds until rate limit window resets
X-RateLimit-Window    | 3600  | Duration of rate limit window (always 1 hour)
Retry-After           | 1     | Seconds to wait before next request (when rate limited)
Premium limits are subject to fair usage policies. Sustained high-volume usage may require a discussion about adjusted rate limits.

Rate Limit Headers

All API responses include rate limiting headers to help you manage request flow:
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Used: 153
X-RateLimit-Reset: 3600
Content-Type: application/json

{
  "id": "server_12345",
  "name": "Example Server"
}
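As a sketch, these headers can be read into a plain object after each request. The `parseRateLimits` helper below is hypothetical (not part of the API); it accepts anything with a `get(name)` method, such as the `Headers` object on a fetch `Response`:

```javascript
// Hypothetical helper: collect the rate limit headers from a response
// into a plain object of numbers. Missing headers become null.
function parseRateLimits(headers) {
  const read = (name) => {
    const value = headers.get(name);
    return value == null ? null : Number(value);
  };
  return {
    limit: read('X-RateLimit-Limit'),
    remaining: read('X-RateLimit-Remaining'),
    used: read('X-RateLimit-Used'),
    resetSeconds: read('X-RateLimit-Reset'),
  };
}
```

A `Map` works in place of a real `Headers` object for quick testing, since both expose `get`.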

Handling Rate Limits

When a request returns HTTP 429, read the Retry-After header, wait the indicated number of seconds, and retry:

async function apiRequest(url, options = {}) {
  const response = await fetch(url, {
    ...options,
    headers: {
      'Authorization': 'Bearer your_token',
      ...options.headers
    }
  });

  if (response.status === 429) {
    // Retry-After arrives as a string; default to 1 second if it is missing
    const retryAfter = Number(response.headers.get('Retry-After') ?? 1);
    console.log(`Rate limited. Retrying after ${retryAfter} seconds`);

    // Wait out the delay, then retry the original request
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return apiRequest(url, options);
  }

  return response;
}

Best Practices

Monitor Headers

Always check rate limit headers in responses to proactively manage request flow
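One way to act on those headers is a proactive check before sending the next request. This sketch is illustrative; the `shouldPause` helper and the threshold of 10 are assumptions, not part of the API:

```javascript
// Hypothetical proactive throttle: report whether to pause before the
// next request, based on how many requests remain in the current window.
// `headers` is any object with a get(name) method (e.g. response.headers).
function shouldPause(headers, threshold = 10) {
  const remaining = Number(headers.get('X-RateLimit-Remaining'));
  // If the header is absent or unparseable, don't pause.
  return Number.isFinite(remaining) && remaining < threshold;
}
```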

Implement Backoff

Use exponential backoff strategies for handling 429 responses
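A minimal sketch of exponential backoff with jitter, assuming a `doRequest` callback that performs the actual fetch; the attempt cap and `baseDelayMs` parameter are illustrative choices:

```javascript
// Retry a request with exponential backoff plus jitter on 429 responses.
// doRequest: async function returning a response-like object with a status.
async function withBackoff(doRequest, maxAttempts = 5, baseDelayMs = 1000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await doRequest();
    if (response.status !== 429) return response;
    // Wait 2^attempt units plus up to one unit of random jitter.
    const delayMs = (2 ** attempt + Math.random()) * baseDelayMs;
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error('Rate limited: max retry attempts exceeded');
}
```

Jitter spreads retries out so that many clients rate limited at the same moment don't all retry in lockstep.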

Batch Operations

Group related requests and use pagination efficiently to minimize API calls

Cache Responses

Cache frequently accessed data to reduce unnecessary API requests
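A simple in-memory cache with a time-to-live is often enough to avoid re-fetching unchanged data. This is a sketch; the `ResponseCache` class and its 60-second default TTL are assumptions, not something the API provides:

```javascript
// Minimal in-memory cache with a time-to-live (TTL), keyed by string.
// Entries older than ttlMs are treated as missing.
class ResponseCache {
  constructor(ttlMs = 60_000) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }

  get(key) {
    const entry = this.entries.get(key);
    if (!entry || Date.now() - entry.storedAt > this.ttlMs) return undefined;
    return entry.value;
  }

  set(key, value) {
    this.entries.set(key, { value, storedAt: Date.now() });
  }
}
```

Check the cache before calling the API, and store the parsed response body after a successful request.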
Rate Limit Planning: For high-volume integrations, monitor your usage patterns and consider implementing request queuing to stay within limits consistently.