What is auto-retry?

Auto-retry automatically retries failed API calls to the same provider before switching to a backup provider. This reduces false failovers caused by temporary network issues, transient errors, or brief provider hiccups. Use auto-retry when you need:
  • Resilience against temporary network issues
  • Reduced failover frequency
  • Better handling of transient provider errors
  • Optimized provider usage
Auto-retry is available on Pro plans and above. View pricing →

How it works

When an API call fails, auto-retry follows this process:
  1. Initial request fails - Timeout, 5xx error, or network issue
  2. Wait interval - Configurable delay before retry (default: 1 second)
  3. Retry attempt - Same request to same provider
  4. Repeat if needed - Up to configured max retries (default: 3)
  5. Failover - If all retries fail, switch to next provider in priority list
Request → Provider A (fail)
       → Wait 1s
       → Provider A (fail)
       → Wait 1s
       → Provider A (fail)
       → Wait 1s
       → Provider A (fail)
       → Switch to Provider B (success)
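
The flow above, as a minimal TypeScript sketch for illustration only. The retry loop runs inside Uniblock, not in your application, and the names (callWithRetry, maxRetries, retryIntervalMs) are assumptions, not part of the Uniblock API:

// Sketch of retry-then-failover (assumed names; not Uniblock's actual code).
type ProviderCall = (provider: string) => Promise<Response>;

// Simplified retry test; see "Error types and retry behavior" below.
const isRetryable = (status: number) => status === 429 || status >= 500;

async function callWithRetry(
  call: ProviderCall,
  providers: string[],      // priority list, e.g. ['Alchemy', 'Infura']
  maxRetries = 3,           // default retry count
  retryIntervalMs = 1000    // default wait interval
): Promise<Response> {
  for (const provider of providers) {
    // Initial attempt plus up to maxRetries retries against the same provider.
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        const res = await call(provider);
        if (res.ok) return res;                // success: no failover needed
        if (!isRetryable(res.status)) break;   // non-retryable: fail over immediately
      } catch {
        // Timeout or network error: treated as retryable.
      }
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, retryIntervalMs));
      }
    }
    // All attempts against this provider failed: switch to the next provider.
  }
  throw new Error('All providers exhausted');
}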

Why auto-retry matters

Without auto-retry

A single transient error immediately triggers failover:
Request → Alchemy (timeout) → Infura (success)
Problems:
  • Wastes backup provider quota
  • Increases costs (backup may be more expensive)
  • Alchemy might have succeeded on retry
  • Unnecessary provider switching

With auto-retry

Transient errors are handled gracefully:
Request → Alchemy (timeout)
       → Retry Alchemy (success)
Benefits:
  • Uses primary provider successfully
  • No unnecessary failover
  • Lower costs
  • Better provider utilization

Configuration

Auto-retry behavior can be customized in the Uniblock dashboard:

Retry count

Number of retry attempts before failover.
| Setting | Description | Use case |
|---|---|---|
| 1 retry | Quick failover | Time-sensitive applications |
| 3 retries (default) | Balanced approach | Most applications |
| 5 retries | Maximum resilience | Critical operations |

Retry interval

Delay between retry attempts.
| Setting | Description | Use case |
|---|---|---|
| 500ms | Fast retry | Low-latency requirements |
| 1 second (default) | Balanced approach | Most applications |
| 2 seconds | Slower retry | Rate-limit sensitive |

Retry strategy

How retry intervals are calculated.
| Strategy | Description | Example (3 retries) |
|---|---|---|
| Fixed | Same interval each time | 1s, 1s, 1s |
| Linear | Increases linearly | 1s, 2s, 3s |
| Exponential (default) | Doubles each time | 1s, 2s, 4s |
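
As a sketch, the delay before the n-th retry under each strategy can be computed like this (function and parameter names are illustrative, not part of the Uniblock API):

// Delay before retry number `attempt` (1-based), for a given base interval.
// Fixed:       base                 -> 1s, 1s, 1s
// Linear:      base * attempt       -> 1s, 2s, 3s
// Exponential: base * 2^(attempt-1) -> 1s, 2s, 4s
type RetryStrategy = 'fixed' | 'linear' | 'exponential';

function retryDelayMs(strategy: RetryStrategy, attempt: number, baseMs = 1000): number {
  switch (strategy) {
    case 'fixed':       return baseMs;
    case 'linear':      return baseMs * attempt;
    case 'exponential': return baseMs * 2 ** (attempt - 1);
  }
}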

Error types and retry behavior

Auto-retry handles different error types intelligently:

Retryable errors

These errors trigger auto-retry:
| Error type | HTTP code | Retry? | Reason |
|---|---|---|---|
| Timeout | - | ✅ | Temporary network issue |
| Server error | 500-599 | ✅ | Provider internal error |
| Rate limit | 429 | ✅ | May succeed after delay |
| Service unavailable | 503 | ✅ | Temporary outage |
| Gateway timeout | 504 | ✅ | Temporary network issue |

Non-retryable errors

These errors skip retry and failover immediately:
| Error type | HTTP code | Retry? | Reason |
|---|---|---|---|
| Bad request | 400 | ❌ | Invalid parameters |
| Unauthorized | 401 | ❌ | Invalid API key |
| Forbidden | 403 | ❌ | Permission denied |
| Not found | 404 | ❌ | Resource doesn't exist |
| Invalid method | 405 | ❌ | Wrong HTTP method |
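
The two tables reduce to a simple status check. The sketch below mirrors them but is not Uniblock's exact internal rule set; timeouts and network errors carry no HTTP status and are also treated as retryable:

// Retry decision by HTTP status, mirroring the tables above.
function isRetryableStatus(status: number): boolean {
  if (status === 429) return true;   // rate limit: may succeed after a delay
  if (status >= 500) return true;    // 5xx (incl. 503/504): provider-side issue
  return false;                      // 400/401/403/404/405: skip retry, fail over immediately
}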

Real-world examples

Example 1: Transient timeout

Scenario: Network hiccup causes temporary timeout.
Request flow
curl --location \
'https://api.uniblock.dev/uni/v1/token/balances?chainId=1&address=0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb' \
--header 'x-api-key: YOUR_API_KEY'
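
The same request from TypeScript with fetch, for reference (same endpoint, query parameters, and x-api-key header as the curl command above):

// Same request as the curl above; any retries happen on Uniblock's side.
const res = await fetch(
  'https://api.uniblock.dev/uni/v1/token/balances?chainId=1&address=0x742d35Cc6634C0532925a3b844Bc9e7595f0bEb',
  { headers: { 'x-api-key': 'YOUR_API_KEY' } }
);
const balances = await res.json();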
What happens:
  1. Request to Alchemy times out (5s)
  2. Wait 1 second
  3. Retry Alchemy → Success ✅
Result: Request succeeds without failover. Total time: ~6 seconds.

Example 2: Provider server error

Scenario: Provider returns a 500 internal server error. What happens:
  1. Request to Alchemy returns a 500 error
  2. Wait 1 second
  3. Retry Alchemy → 500 error
  4. Wait 2 seconds (exponential backoff)
  5. Retry Alchemy → 500 error
  6. Wait 4 seconds
  7. Retry Alchemy → 500 error
  8. Switch to Infura → Success ✅
Result: After 3 retries fail, failover to backup provider succeeds.

Example 3: Rate limit

Scenario: Provider rate limit exceeded. What happens:
  1. Request to Alchemy returns 429 (rate limit)
  2. Wait 1 second
  3. Retry Alchemy → 429 (still rate limited)
  4. Wait 2 seconds
  5. Retry Alchemy → 429 (still rate limited)
  6. Switch to Infura → Success ✅
Result: Retries give the rate limit time to reset; in this case the request eventually fails over.

Example 4: Invalid API key

Scenario: Provider API key is invalid. What happens:
  1. Request to Alchemy returns 401 (unauthorized)
  2. Skip retry (non-retryable error)
  3. Switch to Infura immediately → Success ✅
Result: No retry for authentication errors. Immediate failover.

Key benefits

Enhanced reliability

Increases success rate by handling transient errors gracefully.

Reduced failovers

Fewer unnecessary provider switches save costs and quota.

Better provider utilization

Maximizes use of primary (often cheaper) providers.

Seamless operation

Handles errors automatically without manual intervention.

Monitoring

Track auto-retry metrics in the Uniblock dashboard:
  • Retry count - Total number of retries across all requests
  • Retry success rate - Percentage of retries that succeeded
  • Average retries per request - How many retries per failed request
  • Retry latency - Additional time added by retries
Use these metrics to:
  • Optimize retry configuration
  • Identify problematic providers
  • Understand error patterns
  • Tune retry intervals

Best practices

Use exponential backoff - Gives providers time to recover from temporary issues without excessive delays.
Set reasonable retry counts - 3 retries is optimal for most use cases. More retries add latency.
Monitor retry rates - High retry rates indicate provider issues. Consider switching primary provider.
Combine with backup providers - Auto-retry + backup providers = maximum reliability.

Latency considerations

Auto-retry adds latency when requests fail:
| Retry count | Strategy | Total added latency |
|---|---|---|
| 1 retry | Fixed (1s) | 1 second |
| 3 retries | Fixed (1s) | 3 seconds |
| 3 retries | Exponential (1s) | 7 seconds (1+2+4) |
| 5 retries | Exponential (1s) | 31 seconds (1+2+4+8+16) |
Latency only applies to failed requests. Successful requests have zero retry overhead.
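
The figures in the table are the sum of the wait intervals when every retry fails. A small helper reproduces them (a sketch; names are illustrative):

// Worst-case latency added before failover: the sum of all waits when every retry fails.
function totalAddedLatencyMs(
  strategy: 'fixed' | 'linear' | 'exponential',
  retries: number,
  baseMs = 1000
): number {
  let total = 0;
  for (let attempt = 1; attempt <= retries; attempt++) {
    total +=
      strategy === 'fixed' ? baseMs :
      strategy === 'linear' ? baseMs * attempt :
      baseMs * 2 ** (attempt - 1);               // exponential
  }
  return total;
}

// totalAddedLatencyMs('fixed', 3)       -> 3000  ms (1+1+1 s)
// totalAddedLatencyMs('exponential', 3) -> 7000  ms (1+2+4 s)
// totalAddedLatencyMs('exponential', 5) -> 31000 ms (1+2+4+8+16 s)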
Optimization tips:
  • Use lower retry counts for time-sensitive operations
  • Use fixed intervals for predictable latency
  • Configure shorter intervals for low-latency requirements

Common pitfalls

Too many retries - Excessive retries add significant latency. Use 3 retries for most cases.
Retrying non-retryable errors - Auto-retry skips errors like 400/401/403. Don't expect retries for invalid requests.
Ignoring retry metrics - High retry rates indicate provider problems. Monitor and adjust your configuration.
Not accounting for latency - Retries add latency to failed requests. Factor this into timeout settings.