What is auto-retry?
Auto-retry automatically retries failed API calls to the same provider before switching to a backup provider. This reduces false failovers caused by temporary network issues, transient errors, or brief provider hiccups. Use auto-retry when you need:
- Resilience against temporary network issues
- Reduced failover frequency
- Better handling of transient provider errors
- Optimized provider usage
Auto-retry is available on Pro plans and above. View pricing →
How it works
When an API call fails, auto-retry follows this process:
- Initial request fails - Timeout, 5xx error, or network issue
- Wait interval - Configurable delay before retry (default: 1 second)
- Retry attempt - Same request to same provider
- Repeat if needed - Up to configured max retries (default: 3)
- Failover - If all retries fail, switch to next provider in priority list
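The loop above can be sketched in a few lines of code. The TypeScript below is a minimal illustration of the retry-then-failover behavior under the default settings, not Uniblock's actual implementation; the `Provider` type, `callProvider` helper, and `requestWithAutoRetry` function are hypothetical names used only for this example.

```typescript
// Illustrative sketch only: retry the same provider, then fail over.

type Provider = { name: string; url: string };

// Hypothetical helper that sends a JSON-RPC request to one provider.
async function callProvider(provider: Provider, payload: unknown): Promise<unknown> {
  const res = await fetch(provider.url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

async function requestWithAutoRetry(
  providers: Provider[],   // priority-ordered: primary first, then backups
  payload: unknown,
  maxRetries = 3,          // default retry count
  retryIntervalMs = 1000   // default wait before each retry (fixed here for brevity)
): Promise<unknown> {
  for (const provider of providers) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return await callProvider(provider, payload); // success: no failover needed
      } catch {
        if (attempt === maxRetries) break;             // retries exhausted: fail over
        await new Promise((r) => setTimeout(r, retryIntervalMs)); // wait interval
      }
    }
  }
  throw new Error("All providers failed after retries");
}
```

A backoff strategy (the product default is exponential) would replace the fixed `retryIntervalMs` wait; a sketch of the strategy math follows in the configuration section below.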
Why auto-retry matters
Without auto-retry
A single transient error immediately triggers failover:
- Wastes backup provider quota
- Increases costs (backup may be more expensive)
- The primary provider might have succeeded on retry
- Unnecessary provider switching
With auto-retry
Transient errors are handled gracefully:
- Uses primary provider successfully
- No unnecessary failover
- Lower costs
- Better provider utilization
Configuration
Auto-retry behavior can be customized in the Uniblock dashboard:
Retry count
Number of retry attempts before failover.
| Setting | Description | Use case |
|---|---|---|
| 1 retry | Quick failover | Time-sensitive applications |
| 3 retries (default) | Balanced approach | Most applications |
| 5 retries | Maximum resilience | Critical operations |
Retry interval
Delay between retry attempts.
| Setting | Description | Use case |
|---|---|---|
| 500ms | Fast retry | Low-latency requirements |
| 1 second (default) | Balanced approach | Most applications |
| 2 seconds | Slower retry | Rate-limit sensitive |
Retry strategy
How retry intervals are calculated.
| Strategy | Description | Example (3 retries) |
|---|---|---|
| Fixed | Same interval each time | 1s, 1s, 1s |
| Linear | Increases linearly | 1s, 2s, 3s |
| Exponential (default) | Doubles each time | 1s, 2s, 4s |
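To make the three strategies concrete, the hedged sketch below computes the delay before the n-th retry for each one; `retryDelayMs` is an illustrative helper, not part of Uniblock's API.

```typescript
type RetryStrategy = "fixed" | "linear" | "exponential";

// Wait time before retry number `attempt` (1-based), given a base interval.
function retryDelayMs(strategy: RetryStrategy, baseMs: number, attempt: number): number {
  switch (strategy) {
    case "fixed":
      return baseMs;                      // 1s, 1s, 1s
    case "linear":
      return baseMs * attempt;            // 1s, 2s, 3s
    case "exponential":
      return baseMs * 2 ** (attempt - 1); // 1s, 2s, 4s
  }
}

// Reproduce the three schedules from the table (1-second base interval, 3 retries).
for (const strategy of ["fixed", "linear", "exponential"] as const) {
  const schedule = [1, 2, 3].map((n) => retryDelayMs(strategy, 1000, n));
  console.log(strategy, schedule);
  // fixed [1000, 1000, 1000], linear [1000, 2000, 3000], exponential [1000, 2000, 4000]
}
```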
Error types and retry behavior
Auto-retry handles different error types intelligently:
Retryable errors
These errors trigger auto-retry:
| Error type | HTTP code | Retry? | Reason |
|---|---|---|---|
| Timeout | - | ✅ | Temporary network issue |
| Server error | 500-599 | ✅ | Provider internal error |
| Rate limit | 429 | ✅ | May succeed after delay |
| Service unavailable | 503 | ✅ | Temporary outage |
| Gateway timeout | 504 | ✅ | Temporary network issue |
Non-retryable errors
These errors skip retry and fail over immediately:
| Error type | HTTP code | Retry? | Reason |
|---|---|---|---|
| Bad request | 400 | ❌ | Invalid parameters |
| Unauthorized | 401 | ❌ | Invalid API key |
| Forbidden | 403 | ❌ | Permission denied |
| Not found | 404 | ❌ | Resource doesn't exist |
| Invalid method | 405 | ❌ | Wrong HTTP method |
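One way to picture this classification is a predicate on the HTTP status, as in the sketch below; the exact rules applied by Uniblock's routing layer may differ, and `isRetryable` is an illustrative name, not a documented function.

```typescript
// Illustrative only: decide whether a failed request should be retried on the
// same provider or fail over immediately, following the two tables above.
function isRetryable(httpStatus: number | null): boolean {
  if (httpStatus === null) return true;                    // timeout / network error: retry
  if (httpStatus === 429) return true;                     // rate limit: may succeed after a delay
  if (httpStatus >= 500 && httpStatus <= 599) return true; // provider internal error (incl. 503, 504)
  return false;                                            // 4xx client errors: skip retry, fail over
}

console.log(isRetryable(null)); // true  (timeout)
console.log(isRetryable(503));  // true  (service unavailable)
console.log(isRetryable(429));  // true  (rate limit)
console.log(isRetryable(401));  // false (invalid API key)
```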
Real-world examples
Example 1: Transient timeout
Scenario: Network hiccup causes temporary timeout.
Request flow:
- Request to Alchemy times out (5s)
- Wait 1 second
- Retry Alchemy → Success ✅
Example 2: Provider server error
Scenario: Provider returns 500 internal server error.
What happens:
- Request to Alchemy returns 500 error
- Wait 1 second
- Retry Alchemy → 500 error
- Wait 2 seconds (exponential backoff)
- Retry Alchemy → 500 error
- Wait 4 seconds
- Retry Alchemy → 500 error
- Switch to Infura → Success ✅
Example 3: Rate limit
Scenario: Provider rate limit exceeded.
What happens:
- Request to Alchemy returns 429 (rate limit)
- Wait 1 second
- Retry Alchemy → 429 (still rate limited)
- Wait 2 seconds
- Retry Alchemy → 429 (still rate limited)
- Switch to Infura → Success ✅
Example 4: Invalid API key
Scenario: Provider API key is invalid.
What happens:
- Request to Alchemy returns 401 (unauthorized)
- Skip retry (non-retryable error)
- Switch to Infura immediately → Success ✅
Key benefits
Enhanced reliability
Increases success rate by handling transient errors gracefully.
Reduced failovers
Fewer unnecessary provider switches save costs and quota.
Better provider utilization
Maximizes use of primary (often cheaper) providers.
Seamless operation
Handles errors automatically without manual intervention.
Monitoring
Track auto-retry metrics in the Uniblock dashboard:
- Retry count - Total number of retries across all requests
- Retry success rate - Percentage of retries that succeeded
- Average retries per request - How many retries per failed request
- Retry latency - Additional time added by retries
Use these metrics to:
- Optimize retry configuration
- Identify problematic providers
- Understand error patterns
- Tune retry intervals
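As a rough illustration of how these figures relate, the snippet below derives the success rate and average retries per failed request from hypothetical raw counters; the field names and numbers are assumptions for the example, not the dashboard's actual schema.

```typescript
// Hypothetical raw counters, e.g. aggregated over a day of traffic.
const metrics = {
  totalRetries: 1200,      // retry count across all requests
  successfulRetries: 1080, // retries that eventually returned a result
  failedRequests: 500,     // requests that needed at least one retry
};

const retrySuccessRate = (metrics.successfulRetries / metrics.totalRetries) * 100; // 90%
const avgRetriesPerFailedRequest = metrics.totalRetries / metrics.failedRequests;  // 2.4

console.log(`Retry success rate: ${retrySuccessRate.toFixed(1)}%`);
console.log(`Average retries per failed request: ${avgRetriesPerFailedRequest.toFixed(1)}`);
```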
Best practices
Use exponential backoff - Gives providers time to recover from temporary issues without excessive delays.
Set reasonable retry counts - 3 retries is optimal for most use cases. More retries add latency.
Monitor retry rates - High retry rates indicate provider issues. Consider switching primary provider.
Combine with backup providers - Auto-retry + backup providers = maximum reliability.
Latency considerations
Auto-retry adds latency when requests fail:
| Retry count | Strategy | Total added latency |
|---|---|---|
| 1 retry | Fixed (1s) | 1 second |
| 3 retries | Fixed (1s) | 3 seconds |
| 3 retries | Exponential (1s) | 7 seconds (1+2+4) |
| 5 retries | Exponential (1s) | 31 seconds (1+2+4+8+16) |
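The totals in the table are simply the sum of the individual wait intervals; the short sketch below reproduces them, using the same assumed strategy semantics as the earlier delay example.

```typescript
// Worst-case added latency when every retry is exhausted:
// sum the wait before each retry attempt.
function totalAddedLatencyMs(
  strategy: "fixed" | "exponential",
  baseMs: number,
  retries: number
): number {
  let total = 0;
  for (let attempt = 1; attempt <= retries; attempt++) {
    total += strategy === "fixed" ? baseMs : baseMs * 2 ** (attempt - 1);
  }
  return total;
}

console.log(totalAddedLatencyMs("fixed", 1000, 3));       // 3000 ms
console.log(totalAddedLatencyMs("exponential", 1000, 3)); // 7000 ms (1+2+4)
console.log(totalAddedLatencyMs("exponential", 1000, 5)); // 31000 ms (1+2+4+8+16)
```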
Latency only applies to failed requests. Successful requests have zero retry overhead.
To minimize latency impact:
- Use lower retry counts for time-sensitive operations
- Use fixed intervals for predictable latency
- Configure shorter intervals for low-latency requirements
Next steps
Backup providers
Learn how auto-retry works with backup provider failover.
Routing optimization
Optimize provider selection for cost and performance.
Data consensus
Verify data accuracy across multiple providers.
Dashboard
Configure retry settings and monitor retry metrics.