The Workloads section in Ankra gives you complete visibility and control over all running applications in your Kubernetes clusters.

Overview

Workloads are the applications running in your cluster. Ankra provides a unified interface to view, manage, and troubleshoot all workload types:
  • Deployments - Stateless applications with declarative updates
  • Pods - The smallest deployable units
  • StatefulSets - Stateful applications with stable identities
  • DaemonSets - Pods that run on every node
  • ReplicaSets - Maintain replica pod counts
  • Jobs - Run-to-completion tasks
  • CronJobs - Scheduled jobs
  • Horizontal Pod Autoscalers (HPAs) - Automatic scaling

Accessing Workloads

Navigate to your cluster and click Kubernetes in the sidebar. The Workloads section includes:
Resource      Path
Deployments   Kubernetes → Deployments
Pods          Kubernetes → Pods
StatefulSets  Kubernetes → StatefulSets
DaemonSets    Kubernetes → DaemonSets
ReplicaSets   Kubernetes → ReplicaSets
Jobs          Kubernetes → Jobs
CronJobs      Kubernetes → CronJobs
HPAs          Kubernetes → Horizontal Pod Autoscalers
Or use the Command Palette (⌘+K) to jump directly to any resource type.
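
If you also work from a terminal, the same resource types map to standard kubectl list commands (this assumes kubectl is configured for the same cluster; it is a reference sketch, not an Ankra feature):

```shell
# List each workload type Ankra surfaces; -A spans all namespaces
kubectl get deployments -A
kubectl get pods -A
kubectl get statefulsets,daemonsets,replicasets -A
kubectl get jobs,cronjobs -A
kubectl get hpa -A
```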

Deployments

Deployments manage ReplicaSets and provide declarative updates for Pods.

Viewing Deployments

The Deployments list shows:
Column       Description
Name         Deployment name
Namespace    Kubernetes namespace
Ready        Ready replicas / desired replicas
Up-to-date   Pods running the latest spec
Available    Pods available for traffic
Age          Time since creation

Deployment Details

Click a deployment to view:
  • Status - Current rollout state and conditions
  • Replicas - Desired, current, ready, and available counts
  • Strategy - Rolling update or recreate
  • Pod Template - Container specs, resources, environment
  • Events - Recent Kubernetes events
  • Managed Pods - List of pods owned by this deployment

Actions

  • Scale - Adjust replica count
  • Restart - Trigger a rolling restart
  • View YAML - See the full resource definition
  • Troubleshoot - AI-assisted diagnosis
  • Delete - Remove the deployment
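
For reference, these actions correspond to standard kubectl commands (the deployment name `web` and namespace `prod` are placeholders):

```shell
kubectl -n prod scale deployment web --replicas=5   # Scale
kubectl -n prod rollout restart deployment web      # Restart (rolling)
kubectl -n prod get deployment web -o yaml          # View YAML
kubectl -n prod delete deployment web               # Delete
kubectl -n prod rollout status deployment web       # Watch the rollout finish
```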

Pods

Pods are groups of containers that share storage and network.

Viewing Pods

The Pods list shows:
Column       Description
Name         Pod name
Namespace    Kubernetes namespace
Ready        Ready containers / total containers
Status       Running, Pending, Failed, etc.
Restarts     Container restart count
Age          Time since creation
Node         Node the pod is scheduled on

Pod Details

Click a pod to view:
  • Containers - List of containers with status, image, and resources
  • Conditions - PodScheduled, ContainersReady, Ready
  • Events - Recent events (scheduling, pulling, started, etc.)
  • Labels & Annotations - Metadata
  • Volumes - Mounted volumes and claims

Pod Logs

View real-time and historical logs:
  1. Click on a pod
  2. Select the Logs tab
  3. Choose the container (for multi-container pods)
  4. Toggle Follow for real-time streaming
  5. Search within logs using the filter
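
The same log views are available from the CLI (pod, container, and namespace names below are placeholders):

```shell
# Stream logs from one container of a multi-container pod
kubectl -n prod logs -f web-5d4b9c6f7-abcde -c app

# Logs from the previous container instance, useful after a crash loop
kubectl -n prod logs web-5d4b9c6f7-abcde -c app --previous

# Filter client-side, analogous to the in-UI log search
kubectl -n prod logs web-5d4b9c6f7-abcde -c app | grep -i error
```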

Actions

  • View Logs - Stream container logs
  • View YAML - Full pod specification
  • Troubleshoot - AI analysis of pod issues
  • Delete - Remove the pod (will be recreated by controllers)

StatefulSets

StatefulSets manage pods with persistent identities and stable storage.

Key Features

  • Stable Pod Names - Pods have predictable names (app-0, app-1, etc.)
  • Persistent Storage - Volume claims are retained across restarts
  • Ordered Operations - Pods are created/deleted in order

Viewing StatefulSets

The list shows replica status, update status, and age. Details include:
  • Pod management policy
  • Update strategy
  • Volume claim templates
  • Managed pods
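
A minimal StatefulSet sketch showing the volume claim template that gives each replica its own retained persistent volume (all names, images, and sizes are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # headless service giving pods stable DNS names
  replicas: 3            # pods are created in order as db-0, db-1, db-2
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
      - name: db
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:  # one PVC per pod, retained across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests: {storage: 10Gi}
EOF
```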

DaemonSets

DaemonSets ensure a pod runs on every (or selected) node.

Use Cases

  • Log collectors (Fluentd, Filebeat)
  • Node monitoring (Prometheus node exporter)
  • Network plugins (CNI, kube-proxy)
  • Storage plugins (CSI drivers)
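
A log-collector DaemonSet typically mounts host paths and tolerates control-plane taints so it lands on every node; a minimal sketch (image and names are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels: {app: log-collector}
  template:
    metadata:
      labels: {app: log-collector}
    spec:
      tolerations:           # also schedule onto control-plane nodes
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: collector
        image: fluent/fluent-bit:3.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath: {path: /var/log}   # read node-local log files
EOF
```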

Viewing DaemonSets

The list shows:
Column       Description
Desired      Pods that should be scheduled
Current      Currently running pods
Ready        Pods ready to serve
Up-to-date   Pods with the latest spec
Available    Pods available

Jobs & CronJobs

Jobs

Jobs create pods that run to completion:
  • Completions - How many successful completions are needed
  • Parallelism - How many pods can run concurrently
  • Status - Active, succeeded, failed counts
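
Completions and parallelism together control the shape of the run; a sketch with placeholder values:

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: backfill
spec:
  completions: 6    # six successful pod runs in total
  parallelism: 2    # at most two pods running at once
  backoffLimit: 4   # retry failed pods up to four times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo processing chunk && sleep 5"]
EOF
```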

CronJobs

CronJobs schedule Jobs on a time-based schedule:
  • Schedule - Cron expression (e.g., 0 */6 * * *)
  • Last Schedule - When it last ran
  • Active - Currently running jobs
  • Suspend - Whether scheduling is paused
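
A minimal CronJob sketch tying these fields together (names and image are placeholders; `0 */6 * * *` fires at 00:00, 06:00, 12:00, and 18:00):

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: report
spec:
  schedule: "0 */6 * * *"  # minute 0 of every 6th hour
  suspend: false           # set true to pause scheduling without deleting the CronJob
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox:1.36
            command: ["sh", "-c", "echo generating report"]
EOF
```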

Horizontal Pod Autoscalers

HPAs automatically scale workloads based on metrics.

Viewing HPAs

Column      Description
Name        HPA name
Reference   Target Deployment or StatefulSet
Min/Max     Replica bounds
Replicas    Current replica count
Metrics     CPU/memory utilization

HPA Details

  • Target metrics and current values
  • Scaling history
  • Conditions and events
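
From the CLI, an equivalent HPA can be created imperatively (this assumes a metrics source such as metrics-server is installed; `web` and `prod` are placeholders):

```shell
# Scale `web` between 2 and 10 replicas, targeting 80% average CPU utilization
kubectl -n prod autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Inspect current metrics vs. targets, conditions, and recent scaling events
kubectl -n prod describe hpa web
```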

Bulk Delete Resources

You can select and delete multiple Kubernetes resources at once across all resource types.

How to Bulk Delete

  1. Navigate to a Resource List - Go to any Kubernetes resource list (Deployments, Pods, StatefulSets, Services, ConfigMaps, etc.).
  2. Select Resources - Click the checkbox next to each resource you want to delete. Use the top checkbox to select all visible resources.
  3. Click Delete - A toolbar appears showing the number of selected items. Click Delete Selected to begin.
  4. Confirm Deletion - Review the list of resources to be deleted in the confirmation dialog. Confirm to proceed.
Resources are deleted sequentially with progress tracking. If any individual deletion fails, the error is reported per item and the remaining deletions continue.
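
The CLI analogue is passing several names, or a label selector, to a single delete (names, namespace, and label are placeholders):

```shell
# Delete several named resources in one call
kubectl -n staging delete pods pod-a pod-b pod-c

# Or delete everything matching a label selector, across resource types
kubectl -n staging delete deployments,services -l app=demo
```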

Supported Resource Types

Bulk delete is available across all Kubernetes resource types in Ankra, including:
  • Workloads: Deployments, Pods, StatefulSets, DaemonSets, ReplicaSets, Jobs, CronJobs
  • Networking: Services, Ingresses, Network Policies, Endpoints
  • Configuration: ConfigMaps, Secrets
  • Storage: Persistent Volume Claims, Persistent Volumes, Storage Classes
  • RBAC: Roles, RoleBindings, ClusterRoles, ClusterRoleBindings, ServiceAccounts

Common Tasks

Troubleshooting a Failing Pod

  1. Navigate to Pods and find the failing pod
  2. Check the Status column for error indicators
  3. Click the pod and review:
    • Events for scheduling or pull errors
    • Logs for application errors
    • Conditions for readiness issues
  4. Click Troubleshoot for AI-assisted diagnosis
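
The same diagnostic information is reachable from the CLI (pod and namespace names are placeholders):

```shell
# Events, conditions, and container states in one view
kubectl -n prod describe pod web-5d4b9c6f7-abcde

# Application output, including the previous instance after a crash
kubectl -n prod logs web-5d4b9c6f7-abcde --previous

# Namespace events sorted by time, to spot scheduling or image-pull errors
kubectl -n prod get events --sort-by=.lastTimestamp
```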

Scaling a Deployment

  1. Navigate to Deployments
  2. Click on the deployment to scale
  3. Click Scale and enter the new replica count
  4. Confirm the change

Restarting Pods

  1. Navigate to Deployments
  2. Click on the deployment
  3. Click Restart to trigger a rolling restart
  4. Monitor the rollout in Events

Nodes

The Nodes view under Kubernetes → Nodes shows all nodes in your cluster with their status, roles, resource usage, and scheduling state.

Node List

The node list displays:
Column         Description
Name           Node hostname
Status         Ready / NotReady and scheduling state
Roles          Control plane, worker, etcd
Version        Kubelet version
Age            Time since the node joined the cluster
CPU / Memory   Allocatable resources

Nodes that are cordoned show a SchedulingDisabled label next to their status.

Node Actions

Cordon & Uncordon

Cordoning a node marks it as unschedulable — no new pods will be placed on it, but existing pods continue running. This is the first step when preparing to move workloads off a node.
  • Single node: Click a node to open its detail view, then click Cordon or Uncordon in the header.
  • Bulk action: Select multiple nodes using the checkboxes, then click Cordon or Uncordon in the floating action bar. The action bar dynamically shows the relevant buttons based on the scheduling state of selected nodes.
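
These actions map directly to kubectl (the node name is a placeholder):

```shell
kubectl cordon node-1     # mark unschedulable; running pods are unaffected
kubectl get node node-1   # STATUS shows Ready,SchedulingDisabled while cordoned
kubectl uncordon node-1   # allow scheduling again
```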

Drain

Draining a node cordons it and then evicts all running pods (ignoring DaemonSets and deleting emptyDir data). This is equivalent to:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
  • Single node: Click a node, then click Drain in the header.
  • Bulk action: Select one or more nodes, then click Drain in the floating action bar.
The drain operation shows real-time progress: cordoning, fetching pods, and evicting each pod sequentially. Failed evictions are reported per pod while the remaining evictions continue.

Moving Workloads to New Node Groups

To migrate workloads from old nodes to a new node group:
  1. Create a new node group - Add a new node group in Cluster Settings → Nodes with your desired instance type, labels, and taints.
  2. Cordon old nodes - In Kubernetes → Nodes, select the old nodes and click Cordon to prevent new scheduling.
  3. Drain old nodes - With the same nodes selected, click Drain to evict all running pods. Pods managed by Deployments or StatefulSets will be automatically rescheduled onto the new node group.
  4. Remove old node group - Once all pods have migrated, delete the old node group from Cluster Settings → Nodes.
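
The steps above can be sketched as a CLI sequence (node names are placeholders; cordon everything first so evicted pods cannot land back on another old node):

```shell
# Cordon all old nodes before draining any of them
for node in old-node-1 old-node-2 old-node-3; do
  kubectl cordon "$node"
done

# Drain one node at a time so capacity is freed gradually
for node in old-node-1 old-node-2 old-node-3; do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Verify no non-DaemonSet pods remain on the old nodes before removing the group
kubectl get pods -A -o wide | grep -E 'old-node-[123]' || echo "no pods left on old nodes"
```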

Tips

Use Namespace Filters: Filter by namespace to focus on specific applications or environments.
Watch Events: Kubernetes events often explain why pods are failing to start.
Check Resource Limits: Many pod failures are due to insufficient CPU or memory limits.

Still have questions? Join our Slack community and we’ll help out.