The Lunch Money API implements rate limiting to ensure fair usage and prevent abuse of API resources. This guide explains how rate limiting works and how to monitor your rate limit status using response headers.
Rate limiting is applied to all requests to the /v1/ and /v2/ endpoints and limits requests from a single source to 100 requests per minute.
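Given the 100-requests-per-minute window, a simple client-side throttle can keep you under the limit before the server ever returns a 429. The sketch below is illustrative only (the class and method names are not part of the Lunch Money API); it tracks request timestamps in a sliding one-minute window:

```javascript
// Minimal client-side throttle: tracks request timestamps in a sliding
// window and reports whether another request may be sent right now.
class ClientThrottle {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true (and records the request) if one may be sent now.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have fallen out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}
```

Call `tryAcquire()` before each request and back off when it returns false; this is a local guard only, since the server remains the source of truth.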
When you exceed a rate limit, the API returns a 429 Too Many Requests response:
429 Too Many Requests
The response body follows the standard v2 error format:
{
  "message": "Too Many Requests",
  "errors": [
    {
      "errMsg": "Too many requests, please try again later."
    }
  ]
}
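If you want to surface this error in logs or to users, the body can be flattened into a single readable message. This is a minimal sketch assuming the v2 error shape shown above; `formatRateLimitError` is a hypothetical helper, not part of any SDK:

```javascript
// Flatten the v2 error body ({ message, errors: [{ errMsg }] })
// into one human-readable string.
function formatRateLimitError(body) {
  const details = (body.errors || [])
    .map(e => e.errMsg)
    .filter(Boolean)
    .join('; ');
  return details ? `${body.message}: ${details}` : body.message;
}
```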
The API includes rate limit information in response headers for all requests (both successful and rate-limited). You can inspect these headers to monitor your current rate limit status and determine when you can make additional requests.
The API includes standard rate limit headers in all responses:
RateLimit-Limit: The maximum number of requests allowed per window
RateLimit-Remaining: The number of requests remaining in the current window
RateLimit-Reset: The time (in seconds since Unix epoch) when the rate limit window resets

Example headers:
RateLimit-Limit: 100
RateLimit-Remaining: 3
RateLimit-Reset: 1704067200
For backward compatibility, the API also includes legacy X-RateLimit-* headers:
X-RateLimit-Limit: The maximum number of requests allowed per window
X-RateLimit-Remaining: The number of requests remaining in the current window
X-RateLimit-Reset: The time (in seconds since Unix epoch) when the rate limit window resets

Example headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1704067200
When you receive a 429 Too Many Requests response, the API includes a Retry-After header indicating how many seconds you should wait before retrying:
Retry-After: 45
This value represents the time until the rate limit window resets (rounded up to the nearest second).
Always respect the Retry-After header value when implementing retry logic. Waiting for the specified duration ensures you don't waste requests on premature retries.

You can monitor your rate limit status by inspecting the headers in every API response, even successful ones. This allows you to proactively slow down your request rate before hitting the limit.
async function makeRequest(url, options) {
  const response = await fetch(url, options);

  // Check rate limit status from headers
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
  const limit = parseInt(response.headers.get('X-RateLimit-Limit') || '0');
  const resetTime = parseInt(response.headers.get('X-RateLimit-Reset') || '0');

  if (remaining < 5) {
    console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
    const waitTime = resetTime - Math.floor(Date.now() / 1000);
    if (waitTime > 0) {
      console.log(`Rate limit resets in ${waitTime} seconds`);
    }
  }

  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
    console.error(`Rate limited! Retry after ${retryAfter} seconds`);
    throw new Error(`Rate limited: retry after ${retryAfter}s`);
  }

  return response;
}
import requests
import time

def make_request(url, headers):
    response = requests.get(url, headers=headers)

    # Check rate limit status
    remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
    limit = int(response.headers.get('X-RateLimit-Limit', 0))
    reset_time = int(response.headers.get('X-RateLimit-Reset', 0))

    if remaining < 5:
        print(f"Rate limit warning: {remaining}/{limit} requests remaining")
        wait_time = reset_time - int(time.time())
        if wait_time > 0:
            print(f"Rate limit resets in {wait_time} seconds")

    if response.status_code == 429:
        retry_after = int(response.headers.get('Retry-After', 60))
        print(f"Rate limited! Retry after {retry_after} seconds")
        raise Exception(f"Rate limited: retry after {retry_after}s")

    return response
# Make a request and capture headers
response=$(curl -s -D /tmp/headers.txt -o /tmp/body.txt -w "%{http_code}" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  "https://api.lunchmoney.dev/v2/me")
# Extract rate limit headers
remaining=$(grep -i "^X-RateLimit-Remaining" /tmp/headers.txt | cut -d' ' -f2 | tr -d '\r')
limit=$(grep -i "^X-RateLimit-Limit" /tmp/headers.txt | cut -d' ' -f2 | tr -d '\r')
reset=$(grep -i "^X-RateLimit-Reset" /tmp/headers.txt | cut -d' ' -f2 | tr -d '\r')
echo "Status: $response"
echo "Rate Limit: $remaining/$limit remaining"
echo "Resets at: $reset"
# Check if rate limited
if [ "$response" = "429" ]; then
  retry_after=$(grep -i "^Retry-After" /tmp/headers.txt | cut -d' ' -f2 | tr -d '\r')
  echo "Rate limited! Wait $retry_after seconds before retrying"
fi
When you receive a 429 response, implement exponential backoff with jitter to avoid overwhelming the API:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status !== 429) {
      return response;
    }

    // Get Retry-After header or fall back to exponential backoff with jitter
    const retryAfter = parseInt(response.headers.get('Retry-After') || '0');
    const waitTime = retryAfter > 0
      ? retryAfter * 1000 // Convert seconds to milliseconds
      : Math.min(1000 * Math.pow(2, attempt) + Math.random() * 1000, 30000);

    console.log(`Rate limited. Waiting ${waitTime}ms before retry ${attempt + 1}/${maxRetries}`);
    await new Promise(resolve => setTimeout(resolve, waitTime));
  }

  throw new Error('Max retries exceeded due to rate limiting');
}
Don't wait for a 429 response. Check rate limit headers on every request and adjust your request rate accordingly:
// Pseudo-code example
if (remaining < threshold) {
  const waitTime = (resetTime - currentTime) * 1000;
  await sleep(waitTime);
}
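A concrete version of this check might look like the following. It assumes the legacy X-RateLimit-* headers and uses an arbitrary threshold of 5 remaining requests; `computeWaitMs` and `throttledFetch` are illustrative names, not part of the API:

```javascript
// Pure helper: how long to pause (ms) given the header values.
// remaining/resetEpochSec come from X-RateLimit-Remaining / X-RateLimit-Reset.
function computeWaitMs(remaining, resetEpochSec, threshold, nowMs = Date.now()) {
  if (remaining >= threshold) return 0;
  return Math.max(0, resetEpochSec * 1000 - nowMs);
}

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function throttledFetch(url, options = {}, threshold = 5) {
  const response = await fetch(url, options);

  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0', 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0', 10);

  const waitMs = computeWaitMs(remaining, reset, threshold);
  if (waitMs > 0) {
    console.log(`Approaching rate limit; pausing ${waitMs}ms until window resets`);
    await sleep(waitMs);
  }
  return response;
}
```

Keeping the wait calculation in a pure helper makes the throttling decision easy to unit-test without making real requests.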
For applications that need to make many requests, implement a queue system that respects rate limits:
class RateLimitedQueue {
  constructor() {
    this.queue = [];
    this.remaining = 100; // Start with max
    this.resetTime = Date.now() + 60 * 1000; // One-minute window
  }

  async enqueue(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.queue.length === 0 || this.remaining <= 0) {
      if (this.remaining <= 0 && this.resetTime > Date.now()) {
        // Wait until reset
        setTimeout(() => this.processQueue(), this.resetTime - Date.now());
      }
      return;
    }

    const { requestFn, resolve, reject } = this.queue.shift();

    try {
      const response = await requestFn();

      // Update rate limit status from headers
      this.remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
      this.resetTime = parseInt(response.headers.get('X-RateLimit-Reset') || '0') * 1000;

      resolve(response);
      this.processQueue(); // Process next item
    } catch (error) {
      reject(error);
    }
  }
}
Reduce the number of API calls by caching responses locally:
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getCachedRequest(url, options) {
  const cacheKey = `${url}:${JSON.stringify(options)}`;
  const cached = cache.get(cacheKey);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetch(url, options);
  const data = await response.json();

  cache.set(cacheKey, {
    data,
    timestamp: Date.now()
  });

  return data;
}
Use bulk endpoints when available to reduce the number of requests:
// ❌ Inefficient: 5 separate requests
for (const category_id of categories_in_transactions) {
  await fetch(`/v2/categories/${category_id}`, {
    method: 'GET',
  });
}

// ✅ Efficient: Single request
await fetch('/v2/categories', {
  method: 'GET',
});
// Process the response, then map ids to categories
Issue: Rate limits resetting unexpectedly
Solution: Rate limits are tracked per IP address. If you're behind a proxy or load balancer, multiple clients may share the same IP and exhaust the shared quota.
If you're experiencing rate limiting issues that can't be resolved through the techniques described above, reach out to the Lunch Money team for assistance.