Cycles Management
Canisters on ICP pay for compute and storage using cycles. Cycles are paid by the canister, not the caller: developers fund their own canisters, and users interact for free. See Cycles for a full explanation of the billing model.
This guide covers everything you need to manage cycles in production: acquiring them, monitoring balances, setting thresholds, and deploying to mainnet.
Local vs mainnet cycles
Local development uses fabricated cycles: canisters on a local network start with a large balance and never actually run out. Code that works locally can fail on mainnet if the canister is underfunded. Always test with realistic cycle amounts before deploying.
Acquiring cycles
To run canisters on mainnet you need ICP tokens, which you convert to cycles via the cycles minting canister (CMC).
Step 1: Create a mainnet identity
```shell
icp identity new mainnet-deployer
icp identity default mainnet-deployer
icp identity principal
# Output: xxxxx-xxxxx-xxxxx-xxxxx-xxx
```

Save your seed phrase: it is shown only once. Without it, you permanently lose access to the identity and any funds it controls.
Step 2: Get ICP tokens
Purchase ICP on an exchange. When withdrawing, use your principal as the destination address (or icp identity account-id if the exchange requires an account identifier).
Verify arrival:
```shell
icp token balance -n ic
```

Step 3: Convert ICP to cycles

```shell
# Convert 5 ICP to cycles
icp cycles mint --icp 5 -n ic

# Or request a specific cycle amount (ICP is calculated automatically)
icp cycles mint --cycles 5T -n ic
```

Verify your cycles balance:

```shell
icp cycles balance -n ic
# Output: ~5T cycles
```

Budget guidance: Plan for 1–2T cycles per canister as a starting balance. A simple backend canister with moderate traffic costs roughly 0.1–0.5T cycles per month, though this varies with storage and call volume. See the cycles costs reference for per-operation pricing.
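To turn the budget guidance into a concrete check, a back-of-the-envelope runway calculation helps. The sketch below is illustrative: the burn rate is a hypothetical figure, so measure your own canister's actual consumption before relying on the estimate.

```rust
// Rough runway estimate: how long a cycle balance lasts at a given burn rate.
// The burn-rate figure used in main() is hypothetical, not a measured cost.
const TRILLION: u128 = 1_000_000_000_000;

/// Days of runway left, given a balance and an average daily burn in cycles.
fn runway_days(balance: u128, burn_per_day: u128) -> u128 {
    if burn_per_day == 0 {
        return u128::MAX; // no burn: effectively unlimited runway
    }
    balance / burn_per_day
}

fn main() {
    // 5T starting balance, ~0.3T/month burn (~0.01T/day per the guidance above)
    let balance = 5 * TRILLION;
    let burn_per_day = TRILLION / 100; // 0.01T per day (hypothetical)
    println!("runway: ~{} days", runway_days(balance, burn_per_day));
}
```

If the estimate comes out shorter than your top-up cadence, raise the starting balance or the freezing threshold accordingly.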
Checking canister cycle balances
Only controllers can view a canister’s cycle balance via icp canister status.
Via icp-cli
```shell
# Check a canister in your project
icp canister status backend -e ic

# Check any canister by ID
icp canister status ryjl3-tyaaa-aaaaa-aaaba-cai -n ic
```

Example output:

```
Status: Running
Controllers: xxxxx-xxxxx-xxxxx-xxxxx-xxx
Memory allocation: 0
Compute allocation: 0
Freezing threshold: 2_592_000
Balance: 9_811_813_913_485 Cycles
```

The Balance line shows the current cycle balance. The Freezing threshold shows how many seconds' worth of idle cycles the canister must retain before it freezes (see Freezing threshold below).
Programmatically
Canisters can check their own balance at runtime:
```motoko
import Cycles "mo:core/Cycles";

persistent actor {
  public query func getBalance() : async Nat {
    Cycles.balance()
  };
}
```

```rust
use ic_cdk::query;
use candid::Nat;

#[query]
fn get_balance() -> Nat {
    Nat::from(ic_cdk::api::canister_cycle_balance())
}
```

Topping up canisters
Anyone can top up any canister: you do not need to be its controller.
```shell
# Top up by canister name (in your project environment)
icp canister top-up backend --amount 1T -e ic

# Top up by canister ID (no project context required)
icp canister top-up --amount 1T ryjl3-tyaaa-aaaaa-aaaba-cai -n ic
```

Amounts use human-readable suffixes: T = trillion, b = billion, m = million, k = thousand.
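The suffix convention is easy to mirror in your own tooling. The helper below is illustrative parsing logic written for this guide, not part of icp-cli (it handles whole numbers only, not fractional amounts like 1.5T):

```rust
// Hypothetical helper mirroring the CLI's amount suffixes (T, b, m, k).
// Returns the amount in raw cycles, or None if the input doesn't parse.
fn parse_cycles(s: &str) -> Option<u128> {
    let s = s.trim();
    let (num, mult): (&str, u128) = match s.chars().last()? {
        'T' => (&s[..s.len() - 1], 1_000_000_000_000),
        'b' => (&s[..s.len() - 1], 1_000_000_000),
        'm' => (&s[..s.len() - 1], 1_000_000),
        'k' => (&s[..s.len() - 1], 1_000),
        _ => (s, 1), // no suffix: raw cycle count
    };
    num.parse::<u128>().ok().map(|n| n * mult)
}

fn main() {
    assert_eq!(parse_cycles("1T"), Some(1_000_000_000_000));
    assert_eq!(parse_cycles("500b"), Some(500_000_000_000));
    assert_eq!(parse_cycles("42"), Some(42));
    println!("ok");
}
```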
To convert ICP and top up in sequence:
```shell
icp cycles mint --icp 1.0 -n ic
icp canister top-up backend --amount 1T -e ic
```

Accepting cycles in your canister
Canisters can also accept cycles sent with an inter-canister call. This pattern is used for “tip jar” flows and payment routing:
```motoko
import Cycles "mo:core/Cycles";
import Runtime "mo:core/Runtime";

persistent actor {
  public func deposit() : async Nat {
    let available = Cycles.available();
    if (available == 0) {
      Runtime.trap("No cycles sent with this call")
    };
    Cycles.accept<system>(available)
  };
}
```

```rust
use ic_cdk::update;
use candid::Nat;

#[update]
fn deposit() -> Nat {
    let available = ic_cdk::api::msg_cycles_available();
    if available == 0 {
        ic_cdk::trap("No cycles sent with this call");
    }
    let accepted = ic_cdk::api::msg_cycles_accept(available);
    Nat::from(accepted)
}
```

Freezing threshold
The freezing threshold is a canister setting that defines how long (in seconds) a canister can survive on its current balance while idle. When the canister’s balance would fall below the estimated cost of running for that many seconds, the canister is frozen: it stops processing update calls but still serves query calls.
The default is 2,592,000 seconds (30 days). Increase it for production canisters or those with large stable memory:
```shell
# Set freezing threshold to 90 days (7,776,000 seconds)
icp canister settings update backend --freezing-threshold 7776000 -e ic

# Or use icp.yaml to apply it per-environment
```

In icp.yaml:

```yaml
environments:
  - name: production
    network: ic
    canisters: [backend]
    settings:
      backend:
        freezing_threshold: 90d
```

See Canister settings for all available settings and their syntax.
When a canister is frozen:
- Update calls return an error immediately
- Query calls still succeed (read-only)
- The canister is not deleted yet: top it up to unfreeze
When a frozen canister runs out of cycles entirely:
- The canister is deleted along with all its state
- This is irreversible
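The freezing rule above amounts to simple arithmetic: the canister freezes once its balance can no longer cover the estimated idle burn for the full threshold window. A simplified sketch follows; the real protocol estimate also accounts for memory use and allocations, and the burn rate here is a made-up illustrative figure:

```rust
// Simplified model of the freezing rule: a canister freezes when its balance
// drops below (idle burn per second) * (freezing threshold in seconds).
fn is_frozen(balance: u128, idle_burn_per_sec: u128, threshold_secs: u128) -> bool {
    balance < idle_burn_per_sec * threshold_secs
}

fn main() {
    let threshold = 2_592_000; // 30 days, the default
    let idle_burn = 100_000;   // cycles per second (hypothetical)
    // At this burn rate, the canister must hold at least 259.2b cycles
    // in reserve to stay unfrozen.
    println!("{}", is_frozen(200_000_000_000, idle_burn, threshold)); // true: frozen
    println!("{}", is_frozen(300_000_000_000, idle_burn, threshold)); // false: running
}
```

This is also why canisters with large stable memory need a higher threshold: their idle burn is larger, so the same threshold reserves a bigger slice of the balance.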
Creating and funding canisters programmatically
You can also create canisters, set their freezing threshold, and top them up from canister code:
```motoko
import Principal "mo:core/Principal";

persistent actor Self {

  type CreateCanisterSettings = {
    controllers : ?[Principal];
    compute_allocation : ?Nat;
    memory_allocation : ?Nat;
    freezing_threshold : ?Nat;
  };

  type CanisterId = { canister_id : Principal };

  let ic = actor ("aaaaa-aa") : actor {
    create_canister : shared { settings : ?CreateCanisterSettings } -> async CanisterId;
    deposit_cycles : shared { canister_id : Principal } -> async ();
  };

  // Create a new canister with 1T cycles and a 30-day freezing threshold
  public func createWithThreshold() : async Principal {
    let result = await (with cycles = 1_000_000_000_000) ic.create_canister({
      settings = ?{
        controllers = ?[Principal.fromActor(Self)];
        compute_allocation = null;
        memory_allocation = null;
        freezing_threshold = ?2_592_000; // 30 days
      };
    });
    result.canister_id
  };

  // Top up another canister programmatically
  public func topUp(canisterId : Principal, amount : Nat) : async () {
    await (with cycles = amount) ic.deposit_cycles({ canister_id = canisterId });
  };
}
```

```rust
use candid::{Nat, Principal};
use ic_cdk::update;
use ic_cdk::management_canister::{
    create_canister_with_extra_cycles, deposit_cycles, CreateCanisterArgs,
    DepositCyclesArgs, CanisterSettings,
};

#[update]
async fn create_with_threshold() -> Principal {
    let caller_principal = ic_cdk::api::canister_self();

    let settings = CanisterSettings {
        controllers: Some(vec![caller_principal]),
        compute_allocation: None,
        memory_allocation: None,
        freezing_threshold: Some(Nat::from(2_592_000u64)), // 30 days
        reserved_cycles_limit: None,
        log_visibility: None,
        wasm_memory_limit: None,
        wasm_memory_threshold: None,
        environment_variables: None,
    };

    let result = create_canister_with_extra_cycles(
        &CreateCanisterArgs { settings: Some(settings) },
        1_000_000_000_000u128, // 1T cycles
    )
    .await
    .expect("Failed to create canister");

    result.canister_id
}

#[update]
async fn top_up_canister(canister_id: Principal, amount: u128) {
    deposit_cycles(&DepositCyclesArgs { canister_id }, amount)
        .await
        .expect("Failed to deposit cycles");
}
```

Multi-environment deployment
For production, use separate environments for staging and production to avoid accidentally affecting live canisters. Configure environments in icp.yaml:
```yaml
environments:
  - name: staging
    network: ic
    canisters: [frontend, backend]
    settings:
      backend:
        freezing_threshold: 30d
        environment_variables:
          LOG_LEVEL: "debug"

  - name: production
    network: ic
    canisters: [frontend, backend]
    settings:
      backend:
        freezing_threshold: 90d
        environment_variables:
          LOG_LEVEL: "error"
```

Deploy to each environment independently:
```shell
# Deploy to staging first
icp deploy -e staging

# Verify, then deploy to production
icp deploy -e production
```

Each environment maintains separate canister IDs. Mainnet IDs are stored in .icp/data/mappings/<environment>.ids.json and should be committed to version control. See Managing environments for full configuration options.
Production deployment checklist
Before deploying to mainnet, verify each of the following:
- Fund canisters: Top up all canisters with at least 2–5T cycles each before deploying
- Set a freezing threshold: Use 90 days (7,776,000 seconds) or more for production
- Add a backup controller: Without a backup, losing your identity means losing the canister permanently:

```shell
icp canister settings update backend --add-controller BACKUP_PRINCIPAL -e ic
```

- Verify cycle balance after deploy: Check immediately after icp deploy -e ic:

```shell
icp canister status backend -e ic
```

- Enable reproducible builds: See Reproducible builds to ensure your WASM is verifiable
- Review canister settings: See Canister settings for memory allocation, compute allocation, and access controls
- Review security: See Canister upgrades security for safe upgrade patterns
Monitoring cycle balances
There is no built-in alerting for low balances: monitoring is your responsibility. Options:
Manual monitoring: Check regularly via icp-cli:
```shell
# Check all canisters in an environment at once
icp canister status -e ic
```

Automated monitoring services: Third-party services can monitor balances and alert or auto-top-up:
- CycleOps: Onchain monitoring with automated top-ups and email notifications
- Canistergeek: Cycles, memory, and log monitoring in one place
Automated top-up libraries:
- Rust: canfund: DFINITY-maintained library for automated canister funding
- Motoko: cycles-manager: Permissioned multi-canister cycles management
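At their core, auto-top-up tools apply a floor/target rule: when the balance dips below a floor, send enough cycles to restore a target. A minimal sketch of that decision logic follows; the names and amounts are illustrative and not taken from canfund or cycles-manager:

```rust
// Floor/target top-up rule: below the floor, refill up to the target;
// otherwise do nothing. Returns the number of cycles to send.
fn top_up_amount(balance: u128, floor: u128, target: u128) -> u128 {
    if balance < floor {
        target.saturating_sub(balance)
    } else {
        0
    }
}

fn main() {
    const T: u128 = 1_000_000_000_000;
    assert_eq!(top_up_amount(2 * T, T, 5 * T), 0);         // healthy: no action
    assert_eq!(top_up_amount(T / 2, T, 5 * T), 9 * T / 2); // low: refill to 5T
    println!("ok");
}
```

Keeping the floor comfortably above the freezing threshold's reserve gives the top-up a margin to land before update calls start failing.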
Common mistakes
Sending cycles to the wrong canister: Cycles transferred to the wrong principal cannot be recovered. Double-check canister IDs before topping up.
Using the wrong flag (-n vs -e): Use -e ic for canister operations by name; use -n ic for token/cycles operations and canister IDs:
```shell
# Correct
icp canister top-up backend --amount 1T -e ic
icp cycles balance -n ic

# Incorrect (fails: canister name requires -e)
icp canister top-up backend --amount 1T -n ic
```

Forgetting to add a backup controller: Your identity is the only controller by default. If you lose access to it, the canister cannot be managed, upgraded, or deleted.
Confusing local and mainnet cycles: Local deployments use fabricated cycles and never freeze. Test with realistic amounts on a staging environment before going to production.
Using ExperimentalCycles in Motoko: In mo:core, the module is Cycles, not ExperimentalCycles. import ExperimentalCycles "mo:base/ExperimentalCycles" will fail with mo:core. Use import Cycles "mo:core/Cycles".
Next steps
- Canister settings: Freezing threshold, memory allocation, compute allocation
- Canister lifecycle: Create, install, upgrade, and delete canisters
- Cycles costs reference: Exact cost tables per operation
- Cycles: Why canisters pay for execution
- Reproducible builds: Verify your WASM is trustworthy before deploying
- icp-cli docs: Full command reference