Rate Limiting

The Agentix API enforces rate limits to protect the platform and ensure fair usage. Limits are applied per IP address and persist across server restarts.

Rate Limit Tiers

The API has three tiers of rate limiting, applied from most restrictive to least:

Sensitive Endpoints

High-security endpoints that handle password resets and credential operations:
Endpoint                         Limit       Window
POST /api/auth/forget-password   3 requests  1 hour
POST /api/auth/reset-password    3 requests  1 hour

Authentication Endpoints

All endpoints under /api/auth/*:
Endpoint                        Limit        Window
POST /api/auth/sign-in/email    5 requests   60 seconds
POST /api/auth/sign-up/email    3 requests   5 minutes
All other /api/auth/*           10 requests  60 seconds

Sign-in has additional brute-force protection: 5 failed attempts per email+IP pair trigger a 15-minute lockout, independent of the rate limit counter. See Authentication for details.

General API

All endpoints under /api/* (applied after auth-specific limits):
Scope                  Limit         Window
All /api/* endpoints   100 requests  60 seconds
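To stay under the 100 requests per 60 seconds budget, a client can track its own send times and refuse to fire a request once the window is full. A minimal sketch of such a sliding-window throttle (the `ClientRateLimiter` name and API are illustrative, not part of any Agentix SDK):

```javascript
// Hypothetical client-side throttle matching the documented
// 100 requests / 60 seconds budget for /api/*.
class ClientRateLimiter {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // send times within the current window
  }

  // Returns true if a request may be sent now, false if the
  // local budget for the sliding window is exhausted.
  tryAcquire(now = Date.now()) {
    // Drop send times that have aged out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the server counts per IP address, a local throttle like this is only an approximation when multiple clients share an IP; treat the server's response headers as the source of truth.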

Response Headers

When rate limits are active, responses include standard rate limit headers (draft-7):
Header               Description                                Example
RateLimit-Limit      Maximum requests allowed in the window     100
RateLimit-Remaining  Requests remaining in the current window   87
RateLimit-Reset      Seconds until the window resets            42
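These headers can be read off any `fetch` Response to decide whether to slow down before a 429 ever occurs. A small sketch, where the header names match the table above and the `minRemaining` threshold is an arbitrary example:

```javascript
// Parse the draft-7 rate limit headers from a fetch Response's
// Headers object into plain numbers.
function readRateLimitInfo(headers) {
  return {
    limit: Number(headers.get('RateLimit-Limit')),
    remaining: Number(headers.get('RateLimit-Remaining')),
    resetSeconds: Number(headers.get('RateLimit-Reset')),
  };
}

// Example policy: back off once fewer than minRemaining requests
// are left in the current window.
function shouldThrottle(info, minRemaining = 10) {
  return info.remaining <= minRemaining;
}
```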

When You Exceed a Limit

If you exceed a rate limit, the API returns a 429 Too Many Requests response:
{
  "error": "Too many requests, please try again later"
}
The response includes a Retry-After header indicating how many seconds to wait before retrying:
HTTP/1.1 429 Too Many Requests
Retry-After: 30
RateLimit-Limit: 100
RateLimit-Remaining: 0
RateLimit-Reset: 30

{"error": "Too many requests, please try again later"}

Best Practices

  1. Respect Retry-After — When you receive a 429, wait the specified number of seconds before retrying
  2. Implement exponential backoff — For automated scripts, use exponential backoff with jitter to avoid thundering herd effects
  3. Cache responses — Reduce API calls by caching GET responses on your end
  4. Use pagination efficiently — Fetch only the data you need with appropriate limit values
  5. Monitor headers — Track RateLimit-Remaining to proactively slow down before hitting limits
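Practice 2 above can be sketched with a "full jitter" delay function: the delay ceiling doubles with each attempt up to a cap, and the actual wait is a random point below that ceiling so many clients do not retry in lockstep. The function name and default values here are illustrative:

```javascript
// Exponential backoff with full jitter: delay ceiling doubles per
// attempt (capped at capMs), then a uniform random delay below the
// ceiling is chosen. `random` is injectable for testing.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 30_000, random = Math.random) {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(random() * ceiling);
}
```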

Example: Handling Rate Limits in Node.js

async function fetchWithRateLimit(url, options, maxRetries = 5) {
  const response = await fetch(url, options);

  if (response.status === 429 && maxRetries > 0) {
    // Honor the server's Retry-After header; fall back to 60 seconds.
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
    console.log(`Rate limited. Retrying in ${retryAfter} seconds...`);
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
    return fetchWithRateLimit(url, options, maxRetries - 1); // Retry with one fewer attempt left
  }

  return response;
}