May 6, 2026 — Overlock team
Introducing the Akash Provider for Crossplane
The Akash Provider for Crossplane brings Akash workloads into a Kubernetes control-plane model: declared intent, continuous reconciliation, and no runbooks for leases, bids, or providers.
💡 Akash Network is a decentralized compute marketplace. You publish a deployment manifest, independent providers around the world bid on it, you pick a bid and sign a lease, and your workload runs on their hardware with payments settling on-chain in AKT. A deployment isn't a server you own - it's an on-chain contract with a counterparty whose behavior affects your uptime.
Automating Akash has gotten easier every year. There's the web console for clicking, provider-services for scripting, and a community Terraform provider for teams who already think in HCL. Each handles the "publish a deployment" part of the lifecycle well.
What stays harder is everything that happens after publish. An Akash deployment isn't an inert resource you create once and forget - leases drain and need topping up, bid windows close, providers can go offline, and the on-chain state keeps moving whether or not anyone is watching. You can detect drift on a schedule and re-apply by hand, but at some point you want a system that closes those gaps on its own.
That's the problem the Akash Provider for Crossplane is built around: bringing Akash into a control-plane model where workloads keep reconciling against your declared intent without anyone typing a command.
💡 New to Crossplane? It's an open-source project that turns the Kubernetes API into a control plane for arbitrary resources. The rough mapping for Terraform users: provider → Provider, resource → Managed Resource, module → Composition, state file → the Kubernetes API server itself. State stops being a thing you store and starts being a thing the system is - live, queryable, continuously reconciled.
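Like other Crossplane providers, this one is configured through a ProviderConfig object that points the controllers at credentials. A minimal sketch, assuming the conventional Crossplane credentials-in-a-Secret pattern - the exact schema under `spec.credentials` and the Secret layout are assumptions here, not the provider's confirmed API:

```yaml
# Hypothetical ProviderConfig sketch. The credentials block follows the
# common Crossplane convention; check the provider's docs for the real schema.
apiVersion: akash.overlock.network/v1alpha1
kind: ProviderConfig
metadata:
  name: akash-provider          # referenced by providerConfigRef in the examples below
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: akash-account       # Secret holding the Akash wallet key (assumed name)
      key: credentials
```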
The Akash Provider for Crossplane
The provider exposes Akash workloads as Kubernetes objects: you declare what you want in a manifest, apply it once, and the controllers handle the on-chain work of publishing deployments, picking bids, and opening leases.
The three pieces a user authors are the workload itself, the on-chain deployment that publishes it, and the policy that picks a provider:
```yaml
apiVersion: akash.overlock.network/v1alpha1
kind: SDL
metadata:
  name: example-sdl
spec:
  providerConfigRef:
    name: akash-provider
  forProvider:
    version: "2.0"
    services:
      web:
        image: nginx:1.21.6
        expose:
          - port: 80
            as: 80
            to:
              - global: true
    profiles:
      compute:
        web:
          resources:
            cpu: "0.5"
            memory: "512Mi"
            storage:
              - size: "1Gi"
      placement:
        westcoast:
          attributes:
            region: us-west
          pricing:
            web:
              amount: 100
    deployment:
      web:
        profile: westcoast
        count: 1
---
apiVersion: akash.overlock.network/v1alpha1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: web-server
    tier: frontend
spec:
  providerConfigRef:
    name: akash-provider
  forProvider:
    sdlRef:
      name: example-sdl
    deposit: 4500000
  writeConnectionSecretToRef:
    name: example-deployment-connection
    namespace: default
---
apiVersion: akash.overlock.network/v1alpha1
kind: BidPolicy
metadata:
  name: example-bidpolicy
spec:
  providerConfigRef:
    name: akash-provider
  forProvider:
    selector:
      matchLabels:
        app: web-server
        tier: frontend
    maxPrice: 500
    excludedProviders:
      - "akash1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    requiredAttributes:
      - key: "region"
        value: "us-west"
    selectionStrategy: "lowest-price"
    autoAccept: true
    maxBids: 15
```
After that, the cluster takes over. If the workload definition changes, the deployment republishes. If a deployment closes while your spec still asks for it, the controllers reopen it. If a lease ends or a provider goes offline, the controllers reconcile toward your declared intent - no scripts, no runbooks, no apply rerun.
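Because state lives in the API server, the on-chain identifiers the controllers discover surface in each resource's status, where you can query them like any other Kubernetes object. A hedged sketch of what a reconciled Deployment's status might look like - the field names under `status.atProvider` follow general Crossplane conventions and are assumptions, not the provider's confirmed schema:

```yaml
# Illustrative only: atProvider field names are hypothetical; the Ready
# condition is the standard Crossplane readiness signal.
status:
  atProvider:
    dseq: "1234567"       # on-chain deployment sequence (assumed field name)
    state: active
  conditions:
    - type: Ready
      status: "True"
      reason: Available
```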
What this unlocks
A few of the patterns we're seeing early:
GitOps for decentralized compute. The same Argo CD or Flux pipeline that ships to your EKS cluster now ships to Akash. No separate tooling, no separate state.
Hybrid manifests. One YAML can declare a Postgres on RDS, an object store on R2, and a stateless workload on Akash, with shared secrets tying them together. One source of truth, one reconciliation loop.
Self-service Akash inside an organization. Expose Akash as a custom resource type with guardrails - provider exclusions, max prices, required attributes like region - and developers consume it without ever learning SDL or what a bid window is.
Continuous lease management. Renewals, top-ups, provider migration on failure become controller logic, not on-call runbooks.
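The self-service pattern above can be sketched with a standard Crossplane Composition that bakes the guardrails into a platform-owned BidPolicy, so developers only ever touch a simpler composite type. Everything under `platform.example.org`, and the composite kind `XAkashWorkload`, are hypothetical names for illustration; only the BidPolicy fields come from the example earlier in this post:

```yaml
# Sketch of a classic resources-style Composition enforcing bid guardrails.
# The composite type and API group are made up for this example.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: guarded-akash-workload
spec:
  compositeTypeRef:
    apiVersion: platform.example.org/v1alpha1
    kind: XAkashWorkload
  resources:
    - name: bidpolicy
      base:
        apiVersion: akash.overlock.network/v1alpha1
        kind: BidPolicy
        spec:
          providerConfigRef:
            name: akash-provider
          forProvider:
            maxPrice: 500              # platform-enforced price ceiling
            requiredAttributes:
              - key: "region"
                value: "us-west"       # platform-enforced placement
            selectionStrategy: "lowest-price"
            autoAccept: true
```

With RBAC scoped to the composite type, a developer can request Akash compute without permission to edit the guardrails themselves.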
Summary
The Akash Provider for Crossplane isn't a replacement for the CLI, the console, or the Terraform provider. Each still has its place. What it adds is a shape that didn't exist before - Akash resources that behave like first-class Kubernetes resources, with all the reconciliation and RBAC that implies.
What's next: Akash Provider support
So far the work has covered the consumer side of the marketplace - publishing deployments, picking bids, holding leases. The other side of that marketplace is the Akash Provider itself: the compute operator that publishes capacity, accepts deployments, and hosts the workloads. Setting up and operating one today happens outside Kubernetes.
That's the next focus. We're working on resources to bring Akash Provider setup and operation into the same control-plane model - declared intent, continuous reconciliation, no runbooks - so the same cluster that consumes Akash compute can also operate it.
Where to find it
The provider is open source, hosted at github.com/overlock-network/provider-akash, with packaged builds published to the Upbound Marketplace. The API is currently on v1alpha1, with development continuing from there.
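Installing a packaged Crossplane provider is a one-manifest operation via the standard `pkg.crossplane.io` Provider type. The package reference below is a guess at the Upbound Marketplace path, and the version tag is a placeholder - check the marketplace listing for the real coordinates:

```yaml
# Standard Crossplane provider install; the package path and tag are assumptions.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-akash
spec:
  package: xpkg.upbound.io/overlock-network/provider-akash:v0.0.0  # placeholder tag
```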