Rate Limits & Tokens

The JsonCut API uses both rate limiting and token-based usage tracking to ensure fair usage and maintain service quality. This page explains how both systems work and how to optimize your usage.

Current Limits (Free Plan)

Since JsonCut is currently available only on the Free Plan, here are the current limits:

Rate Limits

| Rate Limit Type | Free Plan Limit | Window |
| --- | --- | --- |
| API Requests | 1,000 requests | 15 minutes |
| Authentication | 5 attempts | 15 minutes |
| File Uploads | 10 uploads | 1 minute |
| Job Creation | 10 jobs | 1 minute |
| API Key Creation | 5 keys | 1 hour |
| Concurrent Jobs | 5 jobs | - |

File & Processing Limits

| Limit Type | Free Plan Limit | Description |
| --- | --- | --- |
| File Size | 100 MB | Maximum size per uploaded file |
| File TTL (Time to Live) | 1-48 hours | Files are automatically deleted after TTL expires (default: 1 hour) |
| Image Dimensions | 4096 × 4096 px | Maximum width and height for images |
| Image Layers | 50 layers | Maximum layers per image job |
| Video Dimensions | 3840 × 2160 px | Maximum width and height (4K) |
| Video Duration | 300 seconds | Maximum video length (5 minutes) |
| Video FPS | 60 fps | Maximum frames per second |
| Video Clips | 100 clips | Maximum clips per video job |
| Layers per Clip | 20 layers | Maximum layers per video clip |

Supported File Types

| Category | Supported Formats |
| --- | --- |
| Images | PNG, JPEG, JPG, GIF, WebP |
| Videos | MP4, WebM, MOV |
| Audio | MP3, WAV, M4A, AAC |
| Fonts | TTF, OTF, WOFF, WOFF2 |

File Storage & TTL (Time to Live)

JsonCut uses a temporary file storage system to ensure efficient resource usage:

  • Default TTL: All uploaded files expire after 1 hour by default
  • Configurable TTL: You can extend the TTL up to 48 hours during upload
  • Automatic Cleanup: Files are automatically deleted once their TTL expires
  • Use Case: Perfect for processing workflows where files are only needed temporarily

Specifying TTL during upload:

```bash
curl -X POST "https://api.jsoncut.com/api/v1/files/upload" \
  -H "x-api-key: YOUR_API_KEY" \
  -F "file=@image.png" \
  -F "category=image" \
  -F "ttlHours=24"
```

File Expiration

Once a file expires and is deleted, it cannot be recovered. Plan your workflows accordingly and ensure you download results before expiration.
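Since expired files cannot be recovered, it can help to track each file's expiry client-side. A minimal sketch based on the TTL rules above (clamping out-of-range values is an illustrative choice here; the API may instead reject them):

```javascript
// Compute when an uploaded file will expire, given its TTL in hours.
// The valid 1-48 hour range comes from the documented limits; clamping
// out-of-range values is an assumption for illustration, not confirmed
// API behavior.
function fileExpiresAt(uploadedAt, ttlHours = 1) {
  const clamped = Math.min(Math.max(ttlHours, 1), 48);
  return new Date(uploadedAt.getTime() + clamped * 60 * 60 * 1000);
}
```

Store the returned timestamp alongside the file ID so your workflow can download results before the deadline.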

Custom Plans Available

Need higher limits or custom quotas? Contact us through the dashboard contact form to discuss custom plans tailored to your needs.

Token System

JsonCut uses a token-based system to track and bill for processing resources. Tokens are consumed for various operations:

Token Pricing

| Operation | Token Cost | Calculation Method |
| --- | --- | --- |
| File Upload | 1 token | Fixed cost per file |
| Image Processing | 2 + layers | Base cost + 1 token per layer |
| Video Processing | Variable | 10 base + 10 tokens per MB of output |

Token Allocation

| Plan | Monthly Tokens | Pricing |
| --- | --- | --- |
| Free Plan | 5,000 tokens | Free |
| Custom Plans | Variable | Contact sales |

Detailed Token Calculation

Image Jobs

  • Base cost: 2 tokens (BASE_IMAGE_TOKENS)
  • Per layer: +1 token (TOKENS_PER_IMAGE_LAYER)
  • Formula: BASE_IMAGE_TOKENS + (layer_count × TOKENS_PER_IMAGE_LAYER)
  • Example: Image with 5 layers = 2 + (5 × 1) = 7 tokens

Video Jobs

  • Pre-estimation: 100 tokens reserved during job creation (RESERVED_VIDEO_TOKENS)
  • Final cost: Based on output file size
  • Formula: VIDEO_TOKEN_BASE_COST + (output_size_MB × VIDEO_TOKEN_PER_MB)
  • Minimum cost: 20 tokens (VIDEO_TOKEN_MIN_COST)
  • Examples:
    • Small video (1MB): 10 + (1 × 10) = 20 tokens (minimum applied)
    • Medium video (5MB): 10 + (5 × 10) = 60 tokens
    • Large video (20MB): 10 + (20 × 10) = 210 tokens
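The formulas above can be captured in a small cost estimator (a sketch; the constants mirror the documented defaults listed under Environment Configuration):

```javascript
// Token-cost estimator for the formulas above.
// Constants mirror the documented defaults.
const BASE_IMAGE_TOKENS = 2;
const TOKENS_PER_IMAGE_LAYER = 1;
const VIDEO_TOKEN_BASE_COST = 10;
const VIDEO_TOKEN_PER_MB = 10;
const VIDEO_TOKEN_MIN_COST = 20;

function imageJobTokens(layerCount) {
  return BASE_IMAGE_TOKENS + layerCount * TOKENS_PER_IMAGE_LAYER;
}

function videoJobTokens(outputSizeMB) {
  const cost = VIDEO_TOKEN_BASE_COST + outputSizeMB * VIDEO_TOKEN_PER_MB;
  return Math.max(cost, VIDEO_TOKEN_MIN_COST); // minimum cost applies
}
```

For example, `imageJobTokens(5)` returns 7 and `videoJobTokens(1)` returns 20 (minimum applied), matching the worked examples above.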
Token Charging

Tokens are only charged upon successful job completion. If a job fails, no processing tokens are consumed; tokens already spent on file uploads (1 token each) are not refunded.

How Limits Work

Rate Limiting

Rate limits are applied per API key and use a sliding window algorithm:

  • Request Rate Limit: Maximum number of API requests per time window (RATE_LIMIT_MAX per RATE_LIMIT_WINDOW)
  • Concurrent Jobs: Maximum number of jobs processing simultaneously (MAX_CONCURRENT_JOBS_PER_USER)
  • Time-based Windows: Different limits for different operations with configurable windows
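The sliding-window behavior can be approximated with an in-memory counter (an illustrative model of the server-side algorithm, not JsonCut's actual implementation):

```javascript
// Illustrative sliding-window rate limiter: allows at most `max`
// requests within any `windowMs` interval, tracked per API key.
class SlidingWindowLimiter {
  constructor(max, windowMs) {
    this.max = max;
    this.windowMs = windowMs;
    this.hits = new Map(); // key -> timestamps of recent requests
  }

  allow(key, now = Date.now()) {
    const cutoff = now - this.windowMs;
    // Drop requests that have aged out of the window
    const recent = (this.hits.get(key) || []).filter(t => t > cutoff);
    if (recent.length >= this.max) {
      this.hits.set(key, recent);
      return false; // would exceed the limit -> 429
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Unlike a fixed window, requests age out continuously, so a burst at the end of one window cannot be immediately followed by a full burst at the start of the next.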

Processing Limits

Processing limits are enforced during job validation to ensure system stability:

  • File Size Validation: Checked during upload (MAX_FILE_SIZE)
  • Dimension Limits: Applied to prevent excessive resource usage
  • Layer Limits: Prevent overly complex jobs that could impact performance
  • Duration Limits: Ensure reasonable processing times for video content

Rate Limit Headers

Every API response includes headers showing your current rate limit status:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1642584000
X-RateLimit-Window: 900
```

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Number of requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
| X-RateLimit-Window | Rate limit window duration in seconds |

Handling Rate Limits

429 Too Many Requests

When you exceed the rate limit, the API returns a 429 status code:

```json
{
  "success": false,
  "error": "Too many requests, please try again later"
}
```

Monitoring Your Usage

Token Usage Tracking

Monitor your token consumption through the API:

```bash
curl -H "x-api-key: YOUR_API_KEY" \
  https://api.jsoncut.com/api/v1/auth/stats
```

Response includes current token usage:

```json
{
  "success": true,
  "data": {
    "tokensUsed": 1250,
    "tokensRemaining": 3750,
    "resetDate": "2024-02-01T00:00:00Z"
  }
}
```
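A small helper can turn this response into a low-balance warning (a sketch; the field names match the response shape above, and the 10% threshold is an arbitrary choice):

```javascript
// Returns true when less than `threshold` (a fraction) of the monthly
// token allocation remains, based on the stats response shape above.
function tokensRunningLow(stats, threshold = 0.1) {
  const { tokensUsed, tokensRemaining } = stats;
  const total = tokensUsed + tokensRemaining;
  return total > 0 && tokensRemaining / total < threshold;
}
```

Calling this after each stats poll lets you alert or throttle job creation before the allocation is exhausted.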

Best Practices

1. Implement Exponential Backoff

```javascript
async function apiRequest(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        // Honor Retry-After if present, otherwise back off exponentially
        const retryAfter = Number(response.headers.get('Retry-After')) || Math.pow(2, i);
        await sleep(retryAfter * 1000);
        continue;
      }

      return response;
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      await sleep(Math.pow(2, i) * 1000);
    }
  }
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

2. Monitor Rate Limit Headers

```javascript
function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

  if (remaining < 10) {
    const waitTime = (reset * 1000) - Date.now();
    console.warn(`Rate limit low. ${remaining} requests remaining. Reset in ${waitTime}ms`);
  }
}
```

3. Optimize File Uploads

Group file uploads when possible to reduce API calls:

```javascript
// Upload files efficiently
const uploadFiles = async (files) => {
  const uploads = files.map(file => uploadFile(file));

  // Wait for all uploads with proper error handling
  const results = await Promise.allSettled(uploads);
  return results;
};
```

4. Efficient Job Status Monitoring

Use smart polling with exponential backoff:

```javascript
const pollJobStatus = async (jobId) => {
  let delay = 2000; // Start with 2 seconds
  const maxDelay = 30000; // Cap at 30 seconds

  while (true) {
    const job = await getJob(jobId);

    if (job.status === 'COMPLETED' || job.status === 'FAILED') {
      return job;
    }

    await sleep(delay);
    delay = Math.min(delay * 1.5, maxDelay); // Exponential backoff
  }
};
```

Optimization Strategies

1. Optimize Job Configuration

Create efficient jobs to minimize token usage:

```javascript
// Inefficient: Multiple jobs for related content
await createJob({ type: 'image', config: { layers: [textLayer1] } });
await createJob({ type: 'image', config: { layers: [textLayer2] } });
await createJob({ type: 'image', config: { layers: [textLayer3] } });

// Efficient: Single job with multiple outputs
await createJob({
  type: 'image',
  config: {
    layers: [textLayer1, textLayer2, textLayer3]
  }
});
```

2. Right-size Your Media

Choose appropriate dimensions to avoid wasting tokens:

```javascript
// For social media posts
const socialConfig = {
  width: 1080, // Standard Instagram size
  height: 1080,
  layers: [...]
};

// For web banners
const bannerConfig = {
  width: 1920, // Full HD width
  height: 600,
  layers: [...]
};
```

3. Validate Before Processing

Use the validation endpoint to catch errors early and avoid hitting limits:

```javascript
// Validate job configuration first
const validation = await fetch('/api/v1/jobs/validate', {
  method: 'POST',
  headers: {
    'x-api-key': API_KEY,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ type: 'image', config: jobConfig })
});

const validationResult = await validation.json();

if (validationResult.data.isValid) {
  // Only create job if validation passes
  await createJob({ type: 'image', config: jobConfig });
} else {
  console.error('Job validation failed:', validationResult.data.errors);
  // Handle specific limit violations
  validationResult.data.errors.forEach(error => {
    if (error.code === 'DIMENSION_LIMIT_EXCEEDED') {
      console.log('Reduce image dimensions to 4096x4096 or less');
    }
    if (error.code === 'LAYER_LIMIT_EXCEEDED') {
      console.log('Reduce layers to 50 or less for images');
    }
  });
}
```

Custom Plans & Enterprise

For production applications requiring higher limits or custom configurations, contact us through the dashboard contact form.

Available Customizations:

  • Higher rate limits for all endpoints
  • Increased concurrent job limits
  • Custom token allocations
  • Priority processing queues
  • Dedicated support channels

Troubleshooting Common Issues

Rate Limit Exceeded

  1. Check your current usage in the dashboard
  2. Implement exponential backoff in your code
  3. Consider upgrading to a custom plan

Token Depletion

  1. Monitor your monthly token usage
  2. Optimize job configurations to use fewer tokens
  3. Consider smaller output dimensions for appropriate use cases

Job Processing Delays

  1. Check concurrent job limits (currently 5 jobs)
  2. Monitor current queue status
  3. Implement efficient polling strategies

File Upload Failures

  1. File too large: Ensure files are under 100MB (MAX_FILE_SIZE)
  2. Unsupported format: Check supported file types above
  3. Invalid dimensions: Reduce image size to 4096×4096px or less
  4. TTL out of range: Ensure ttlHours parameter is between 1 and 48
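Most of these failures can be caught before calling the API with a local pre-check (a sketch; the limits and format lists come from the tables above, and the error strings are illustrative):

```javascript
// Local pre-upload validation against the documented Free Plan limits.
const MAX_FILE_SIZE = 100 * 1024 * 1024; // 100 MB in bytes
const SUPPORTED_EXTENSIONS = new Set([
  'png', 'jpeg', 'jpg', 'gif', 'webp', // images
  'mp4', 'webm', 'mov',                // videos
  'mp3', 'wav', 'm4a', 'aac',          // audio
  'ttf', 'otf', 'woff', 'woff2',       // fonts
]);

function preUploadErrors({ name, sizeBytes, ttlHours = 1 }) {
  const errors = [];
  const ext = name.split('.').pop().toLowerCase();
  if (sizeBytes > MAX_FILE_SIZE) errors.push('file exceeds 100 MB');
  if (!SUPPORTED_EXTENSIONS.has(ext)) errors.push(`unsupported format: ${ext}`);
  if (ttlHours < 1 || ttlHours > 48) errors.push('ttlHours must be between 1 and 48');
  return errors;
}
```

Running this before the upload request saves a rate-limited API call for files that would be rejected anyway.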

File Access Issues

  1. File not found (410 Gone): File has expired and been automatically deleted
  2. Invalid file reference: Check that file IDs are correct and files haven't expired
  3. Job configuration errors: Ensure referenced files in job configs haven't expired

Job Validation Errors

  1. Too many layers: Reduce to 50 layers for images, 20 per video clip
  2. Video too long: Keep videos under 5 minutes (300 seconds)
  3. Invalid dimensions: Ensure video dimensions don't exceed 3840×2160px (4K)
  4. High frame rate: Keep video FPS at 60 or below
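These limits can also be checked locally before job submission (a sketch; the config field names `width`, `height`, `fps`, `durationSeconds`, and `clips` are assumptions for illustration, not the confirmed job schema):

```javascript
// Local pre-validation of a video job config against the documented limits.
// Field names here are illustrative, not the confirmed API schema.
function videoConfigErrors(config) {
  const errors = [];
  if (config.width > 3840 || config.height > 2160) {
    errors.push('dimensions exceed 3840x2160 (4K)');
  }
  if (config.durationSeconds > 300) errors.push('duration exceeds 300 seconds');
  if (config.fps > 60) errors.push('fps exceeds 60');
  const clips = config.clips || [];
  if (clips.length > 100) errors.push('more than 100 clips');
  for (const [i, clip] of clips.entries()) {
    if ((clip.layers || []).length > 20) {
      errors.push(`clip ${i} has more than 20 layers`);
    }
  }
  return errors;
}
```

An empty array means the config is within the documented limits; the server-side validation endpoint remains the authoritative check.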

Environment Configuration

All limits are configurable via environment variables for custom deployments:

Rate Limiting Variables

```bash
RATE_LIMIT_WINDOW=900000            # 15 minutes in milliseconds
RATE_LIMIT_MAX=1000                 # Max requests per window
AUTH_RATE_LIMIT_WINDOW=900000       # 15 minutes in milliseconds
AUTH_RATE_LIMIT_MAX=5               # Max authentication attempts per window
UPLOAD_RATE_LIMIT_WINDOW=60000      # 1 minute in milliseconds
UPLOAD_RATE_LIMIT_MAX=10            # Max uploads per minute
JOB_RATE_LIMIT_WINDOW=60000         # 1 minute in milliseconds
JOB_RATE_LIMIT_MAX=10               # Max jobs per minute
API_KEY_RATE_LIMIT_WINDOW=3600000   # 1 hour in milliseconds
API_KEY_RATE_LIMIT_MAX=5            # Max API keys per hour
EMAIL_RATE_LIMIT_MINUTES=1          # Email cooldown in minutes
MAX_CONCURRENT_JOBS_PER_USER=5      # Max concurrent jobs
```

File & Processing Limits

```bash
MAX_FILE_SIZE=104857600   # 100 MB in bytes
MAX_IMAGE_WIDTH=4096      # Max image width
MAX_IMAGE_HEIGHT=4096     # Max image height
MAX_IMAGE_LAYERS=50       # Max layers per image
MAX_VIDEO_WIDTH=3840      # Max video width (4K)
MAX_VIDEO_HEIGHT=2160     # Max video height (4K)
MAX_VIDEO_FPS=60          # Max frames per second
MAX_VIDEO_CLIPS=100       # Max clips per video
MAX_VIDEO_DURATION=300    # Max duration in seconds
MAX_LAYERS_PER_CLIP=20    # Max layers per video clip
```

Token Calculation Variables

```bash
BASE_IMAGE_TOKENS=2         # Base cost for image processing
TOKENS_PER_IMAGE_LAYER=1    # Cost per image layer
RESERVED_VIDEO_TOKENS=100   # Pre-reserved tokens for video jobs
VIDEO_TOKEN_BASE_COST=10    # Base cost for video processing
VIDEO_TOKEN_PER_MB=10       # Cost per MB of output video
VIDEO_TOKEN_MIN_COST=20     # Minimum cost for any video job
```