Azure Container Apps: The Simplicity of Serverless, The Power of Kubernetes

When Kubernetes burst onto the scene, it was hailed as a groundbreaking shift in how we manage and orchestrate containerized applications. It offered everything developers and platform teams could dream of—self-healing infrastructure, declarative deployments, autoscaling, rolling upgrades, and a modular ecosystem.

But that power came at a price.

For organizations just starting their cloud-native journey, Kubernetes can feel like piloting a spacecraft to drive across town. Its capabilities are unmatched, but the learning curve and operational complexity can be overwhelming.

Here are some of the real-world challenges many teams face when trying to adopt Kubernetes:

    • Operational Overhead: You need to manage cluster provisioning, node pools, updates, scaling configurations, ingress controllers, certificate management, monitoring agents, and more.

    • Steep Learning Curve: Kubernetes has a rich API surface and a sea of concepts—Pods, Deployments, ReplicaSets, StatefulSets, DaemonSets, ConfigMaps, Secrets, CRDs, Admission Controllers—the list goes on. It takes months of learning to become truly comfortable.

    • Talent Dependency: Running Kubernetes in production requires a skilled team that understands its internals. It’s not just about deploying apps; it’s about understanding etcd, kubelet behavior, CNI plugins, and workload scheduling nuances.

    • Risk of Misuse or Overkill: For small to mid-size apps or internal tools, Kubernetes can be massive over-engineering. Simple deployments become YAML-heavy operations, requiring continuous maintenance, security hardening, and monitoring.

    • Slow Dev Velocity: Developers are often blocked by infrastructure complexity. Waiting for platform teams to configure namespaces, define policies, or debug Helm charts leads to inefficiencies.

Kubernetes is not the problem—it’s the operational burden that comes with it.

And that’s where Azure Container Apps comes into the picture.


What is Azure Container Apps: Kubernetes Without the Baggage

Azure Container Apps is Microsoft’s answer to those who want the benefits of Kubernetes—scalability, event-driven architecture, microservices communication, observability—but without managing Kubernetes itself.

At its core, Azure Container Apps is a fully managed serverless container platform that runs on Kubernetes under the hood, abstracted away from the user. Built using a combination of Kubernetes, KEDA, Dapr, and Envoy, it offers a powerful, production-grade platform where developers can deploy apps without worrying about the complexities of infrastructure.

Let’s break down what makes Container Apps special:

      • No Infrastructure to Manage: No need to create or maintain clusters. You don’t worry about nodes, VM SKUs, patching, or scaling mechanics.

      • App-Centric Deployment: You deploy apps, not Pods or ReplicaSets. You focus on the container image and its configuration.

      • Built-in Autoscaling: Autoscaling is powered by KEDA and works out-of-the-box. You define triggers, not HPA policies.

      • Zero to Hero: Container Apps can scale to zero when idle and instantly scale up when traffic arrives.

      • Integrated with the Azure Ecosystem: Identity, VNet, Key Vault, and Monitoring are all natively integrated—no extra sidecars or configuration gymnastics required.

In essence, Azure Container Apps gives you Kubernetes-like power in a PaaS experience, reducing complexity and increasing developer agility.


Core Features Explained in Depth

1) VNet Integration: Secure Connectivity the Right Way

Azure Container Apps supports both internal-only and public-facing applications, with deep integration into Azure Virtual Networks (VNets). This capability is crucial when your app needs to talk to other Azure services (like databases, queues, and APIs) that are not publicly accessible.

There are two VNet models supported:

    1. Microsoft-Managed VNet
      When you deploy a Container App Environment without explicitly configuring a VNet, Azure internally provisions a managed VNet. This simplifies setup, but you have limited control over IP ranges, NSGs, DNS configuration, or network peering.

    2. Customer-Provided VNet
      For advanced use cases, you can deploy Container Apps into a customer-managed VNet.

     A customer-provided VNet allows:

      • Custom IP addressing
      • DNS integration with on-premises networks
      • NSG customization for inbound/outbound filtering
      • Peering with hub-and-spoke networks
      • Private access to other Azure resources (SQL, Storage, Service Bus, etc.)

Within a VNet, you can configure Ingress as Internal, meaning your app will not have a public endpoint—it is accessible only from within the VNet. This is ideal for backend services or APIs that shouldn’t be exposed to the internet.

You can also configure External Ingress, where the app receives a public IP and can serve requests from the internet. TLS certificates, routing, and scaling still work seamlessly in this mode.
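
As a concrete sketch, here is how the customer-provided VNet model might look with the Azure CLI. Resource names, region, and address ranges here are illustrative assumptions, not prescribed values:

# Create a VNet with a dedicated subnet for the environment
# (Consumption environments need at least a /23 subnet)
az network vnet create \
  --resource-group my-rg --name my-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name aca-subnet --subnet-prefix 10.0.0.0/23

SUBNET_ID=$(az network vnet subnet show \
  --resource-group my-rg --vnet-name my-vnet \
  --name aca-subnet --query id -o tsv)

# Create the environment inside the VNet; --internal-only true
# makes the environment's ingress reachable only from the VNet
az containerapp env create \
  --resource-group my-rg --name my-env --location westeurope \
  --infrastructure-subnet-resource-id "$SUBNET_ID" \
  --internal-only true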

2) Secrets and Azure Key Vault Integration

Handling secrets securely is vital in cloud-native applications. With Container Apps, you can inject secrets as environment variables from:

      • Azure Key Vault using Managed Identity

      • Or directly from App Secrets, stored and managed by the Container Apps environment itself

Here’s how it works with Key Vault:

      • You create a system-assigned or user-assigned managed identity for the Container App.

      • You grant that identity the Key Vault Secrets User RBAC role (or an access policy with get permission on secrets).

      • In your Container App definition, you refer to the secret URI and Azure will inject it into your environment variables.

This way, no secrets are stored in code, CI/CD pipelines, or configuration files. You get secure, centralized, and auditable secret management.
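
A minimal CLI sketch of that flow, assuming illustrative names (my-app, my-rg, my-kv) and the Key Vault RBAC permission model:

# 1. Give the Container App a system-assigned managed identity
az containerapp identity assign \
  --name my-app --resource-group my-rg --system-assigned

# 2. Let that identity read secret values from the vault
PRINCIPAL_ID=$(az containerapp identity show \
  --name my-app --resource-group my-rg --query principalId -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" \
  --role "Key Vault Secrets User" \
  --scope $(az keyvault show --name my-kv --query id -o tsv)

# 3. Reference the secret by its Key Vault URI; the platform resolves it
az containerapp secret set --name my-app --resource-group my-rg \
  --secrets "db-password=keyvaultref:https://my-kv.vault.azure.net/secrets/DbPassword,identityref:system"

The resolved secret can then be surfaced to the container as an environment variable using the secretref: syntax, for example --set-env-vars "DB_PASSWORD=secretref:db-password".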

3) Built-in Autoscaler with KEDA: Truly Serverless

With Kubernetes—specifically AKS—you can configure autoscaling using the Horizontal Pod Autoscaler (HPA), which supports CPU and memory out-of-the-box.

If you want to scale on custom or event-based metrics, you need to install and configure KEDA separately or enable it as an AKS add-on.

But here’s the difference:

In Azure Container Apps, KEDA is built in—you don’t install anything, configure adapters, or write controller logic. You simply define scale rules as part of your app configuration, and it works out of the box.

Even more, Container Apps supports scale-to-zero natively, something that requires advanced configuration and tradeoffs in AKS environments.

So while AKS can support KEDA, in Container Apps:

      • KEDA is the default, not an add-on
      • Operational burden is zero
      • Cold starts are optimized by the platform

This makes Azure Container Apps the fastest and easiest way to build event-driven microservices on Azure.

KEDA supports more than 40 scale triggers, including:

      • CPU / Memory
      • HTTP Request Concurrency
      • Azure Service Bus
      • Azure Storage Queue
      • Kafka
      • Prometheus metrics
      • Custom external metrics via REST API

You can define a rule like:

{
  "name": "sb-queue-rule",
  "custom": {
    "type": "azure-servicebus",
    "metadata": { "queueName": "orders", "messageCount": "100" },
    "auth": [ { "secretRef": "sb-connection", "triggerParameter": "connection" } ]
  }
}

And the app will automatically scale based on queue depth, with KEDA targeting roughly 100 messages per replica.

This makes it ideal for cost-efficient event-driven workloads and APIs that see fluctuating traffic.
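
The same rule can also be attached from the Azure CLI. A hedged sketch, with assumed app, queue, and secret names:

az containerapp update \
  --name orders-api --resource-group my-rg \
  --min-replicas 0 --max-replicas 10 \
  --scale-rule-name sb-queue-rule \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata "queueName=orders" "messageCount=100" \
  --scale-rule-auth "connection=sb-connection"

Setting --min-replicas 0 is what enables scale-to-zero for this app.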

4) Volume Mounts

Storage is often one of the first concerns when building production-grade applications. Whether it’s uploading files, caching temporary data, or reading configuration files, containers need storage—and in Azure Container Apps, this is handled through volume mounts.

As of now, Azure Container Apps supports Azure File Share (via SMB protocol) as the persistent storage backend. Here’s how it works in practice:

    1. Provision an Azure Storage Account and File Share
      You create an Azure File Share in a storage account, with an appropriate access key or identity-based access.

    2. Create a Storage Volume Resource
      In the Container App configuration, you define a volume with the required storage account credentials.

    3. Mount the Volume to a Path in Your Container
      Finally, you specify where in the container’s file system the volume should be mounted.

"volumeMounts": [
  {
    "volumeName": "sharedfiles",
    "mountPath": "/mnt/data"
  }
]

Behind the scenes, Azure ensures this path is mounted using the appropriate SMB share. Volumes can be either read-write or read-only, depending on use case.

⚠️ Note: These volumes are not block storage like Azure Disks. They are network-mounted file shares, which means you need to account for SMB performance characteristics, such as IOPS and latency, especially for high-throughput workloads.

While it’s not as granular as Kubernetes PersistentVolumeClaims (PVCs), for most apps needing shared storage across replicas or revisions, this setup is more than sufficient and much easier to manage.
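
End to end, the storage wiring can be sketched with the CLI (storage account, share, and environment names below are assumptions). The volume and volumeMounts shown above then reference the registered storage by name:

# Register the file share with the Container App Environment
az containerapp env storage set \
  --name my-env --resource-group my-rg \
  --storage-name sharedfiles \
  --azure-file-account-name mystorageacct \
  --azure-file-account-key "$STORAGE_KEY" \
  --azure-file-share-name myshare \
  --access-mode ReadWrite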

5) Revisions and Canary Deployments

One of the most powerful aspects of Azure Container Apps is its revision-based deployment model.

Whenever you change a configuration parameter—be it container image, environment variable, or CPU/memory spec—Azure creates a new immutable revision of the app. This allows you to:

      • Maintain a history of deployments
      • Test new versions with limited traffic
      • Rollback instantly to any stable revision

By default, only one revision is active, but Container Apps allows traffic splitting between revisions, enabling canary deployments.

Let’s say:

      • Revision A is the current stable version
      • You deploy Revision B with updated logic

You can now configure:

      • 90% of traffic to go to Revision A
      • 10% to Revision B

If Revision B performs well, you gradually increase traffic. If issues arise, revert to 100% traffic on Revision A.

This can be done visually in the Azure Portal, via ARM/Bicep/Terraform, or through Azure CLI.
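
With the CLI, a 90/10 canary looks roughly like this (app and revision names are illustrative):

# List revisions to find their names
az containerapp revision list --name my-app --resource-group my-rg -o table

# Send 90% of traffic to the stable revision, 10% to the canary
az containerapp ingress traffic set \
  --name my-app --resource-group my-rg \
  --revision-weight my-app--rev-a=90 my-app--rev-b=10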

Key benefits:

      • Zero-downtime rollout
      • Easy A/B testing
      • Built-in rollback without scripting

This is Kubernetes-style rollout with none of the YAML or complexity.

6) Ingress and Egress in Azure Container Apps

Ingress refers to incoming traffic to your container apps.

With Azure Container Apps, ingress is elegantly simplified, yet highly functional.

Azure Container Apps provides built-in ingress capabilities, allowing you to expose your applications to the internet or restrict them to internal networks.

Here’s how ingress works in ACA:

      • Each Container App can expose a specific port (e.g., 80, 8080, 5000).
      • You decide whether the app is public (internet-facing) or internal (only accessible within VNet).
      • TLS is automatically provisioned and renewed for your app’s domain.
      • You can configure custom domain bindings, along with TLS certificates.
      • An internal Envoy-based ingress controller handles routing, load balancing, and TLS termination.

You don’t need to set up:

      • Load Balancers
      • Ingress Controllers (like NGINX or Traefik)
      • Certificate Managers (like cert-manager)
      • Istio, Ambassador, or any other service mesh

You simply configure a few properties, and Azure handles everything behind the scenes.

It supports:

      • Auto-assigned default domains on azurecontainerapps.io, alongside custom domains like myapp.mydomain.com
      • WebSocket and HTTP/2
      • Authentication policies (AAD, Twitter, GitHub, etc.) applied at the ingress layer

This is ingress-as-a-service done right.
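
For instance, enabling public ingress on an existing app is a one-liner (names assumed); swapping --type external for --type internal restricts the app to the VNet:

az containerapp ingress enable \
  --name my-app --resource-group my-rg \
  --type external --target-port 8080 --transport auto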

Premium Ingress

With the introduction of Premium Ingress, you can now:

      • Customize ingress scaling to handle high-demand workloads.

      • Configure environment-level settings like termination grace periods and idle request timeouts.

      • Implement rule-based routing to direct traffic based on hostnames or paths, facilitating scenarios like A/B testing and blue-green deployments.

Egress

Egress pertains to outgoing traffic from your container apps. By default, the outbound IP addresses in ACA are not static. However, with the enhanced networking features:

      • You can integrate a NAT Gateway with your ACA environment. This provides a static public IP address for all outbound traffic, simplifying scenarios where external services require IP allowlisting.

      • The NAT Gateway ensures consistent and secure outbound connectivity, especially crucial for services that rely on fixed IP addresses for security reasons.
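
A rough sketch of the NAT Gateway wiring, assuming a customer-provided VNet and the illustrative resource names used earlier:

# Static public IP for all outbound traffic
az network public-ip create \
  --resource-group my-rg --name nat-ip --sku Standard

# NAT gateway bound to that IP, attached to the ACA subnet
az network nat gateway create \
  --resource-group my-rg --name my-nat \
  --public-ip-addresses nat-ip
az network vnet subnet update \
  --resource-group my-rg --vnet-name my-vnet \
  --name aca-subnet --nat-gateway my-nat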

7) Authentication and RBAC

Security is not optional. Container Apps simplifies app-level authentication and access control using Azure-native services.

Built-in Authentication Providers

You can enable authentication using:

      • Microsoft Entra ID (Azure Active Directory)
      • GitHub
      • Google
      • Twitter
      • Facebook

This is done via the App Authentication blade; no code changes are required. Azure takes care of token validation, redirect handling, and cookie/session management.

You can configure:

    • Whether unauthenticated users are redirected

    • Roles/claims required for access

    • Identity provider settings via Azure CLI or portal
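
As an example, wiring up Entra ID login via the CLI might look like this (the client ID, secret name, and tenant ID are placeholders for your own app registration):

# Configure the Microsoft identity provider
az containerapp auth microsoft update \
  --name my-app --resource-group my-rg \
  --client-id <app-registration-client-id> \
  --client-secret-name microsoft-client-secret \
  --issuer "https://login.microsoftonline.com/<tenant-id>/v2.0"

# Redirect unauthenticated requests to the login page
az containerapp auth update \
  --name my-app --resource-group my-rg \
  --unauthenticated-client-action RedirectToLoginPage \
  --redirect-provider azureActiveDirectory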

RBAC for Management

Azure RBAC is fully integrated:

    • Developers can be granted permission to deploy or update apps

    • Platform engineers can restrict network, policy, or secret access

    • View-only roles for support teams

No need to manage Kubernetes RoleBindings, ClusterRoles, or ServiceAccounts. All access control is done via Microsoft Entra ID (formerly Azure AD), Azure RBAC, and Managed Identities.

8) Monitoring and Observability

Container Apps integrates natively with Azure Monitor, which means you get:

    • Application logs and system logs streamed to Log Analytics

    • Metrics such as CPU, memory, instance count, etc.

    • Live console access to each app revision for real-time debugging

    • Alerts and dashboards built on top of Log Analytics

You can also:

    • Export logs to Storage/Event Hub/SIEMs

    • Integrate with App Insights via SDK

    • Use OpenTelemetry for custom traces

All of this is provided without installing any agents or modifying the container image. It’s a zero-instrumentation observability setup.
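
Two quick examples of what that looks like in practice (app and workspace identifiers are assumptions):

# Stream live console logs from the running app
az containerapp logs show --name my-app --resource-group my-rg --follow

# Query recent console logs in the environment's Log Analytics workspace
az monitor log-analytics query --workspace <workspace-id> \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-app' | take 50"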

9) Dapr Integration

Dapr (Distributed Application Runtime) is an open-source runtime, attached to your app as a sidecar, that simplifies building microservices with:

    • Service Invocation (e.g., call order-service without knowing its IP)

    • Pub/Sub (event-driven messaging)

    • State Management

    • Secrets Management

    • Bindings (trigger from external systems like Kafka, Mongo, etc.)

In Azure Container Apps, you can opt-in to Dapr per container by enabling it in the configuration. You don’t install or manage Dapr yourself—it’s baked into the platform.

Sample config:

"dapr": {
  "enabled": true,
  "appId": "orders-api",
  "appPort": 8080
}

Use cases include:

    • Event-driven apps that use Pub/Sub

    • Building fault-tolerant service meshes

    • Stateful microservices (with pluggable backend stores like Redis or Cosmos DB)

Dapr simplifies the developer experience for microservices while providing production-grade capabilities under the hood.
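
In CLI terms, enabling the sidecar and registering an environment-level component might look like this (the names and the pubsub.yaml component file are assumptions):

# Turn on the Dapr sidecar for an existing app
az containerapp dapr enable \
  --name orders-api --resource-group my-rg \
  --dapr-app-id orders-api --dapr-app-port 8080

# Register a pub/sub component once at the environment level
az containerapp env dapr-component set \
  --name my-env --resource-group my-rg \
  --dapr-component-name orders-pubsub --yaml pubsub.yaml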

10) ACR Integration: Secure and Seamless Image Pulling

Azure Container Apps works natively with Azure Container Registry (ACR), enabling secure deployment of container images without the hassle of exposing your registry or managing secrets manually.

Here’s how image integration works:

    • You build and push your container image to an ACR repository.

    • When defining your Container App, you reference the image using its fully qualified name:
      myregistry.azurecr.io/myapp:latest

    • Azure uses a Managed Identity (system-assigned or user-assigned) to authenticate and pull the image securely.

There’s no need to create service principals or store access credentials. The identity is granted the AcrPull role on the registry, and Azure does the rest.
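
A short sketch of that ACR wiring (registry and app names are illustrative):

# Allow the app's system identity to pull from the registry
PRINCIPAL_ID=$(az containerapp identity show \
  --name my-app --resource-group my-rg --query principalId -o tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role AcrPull \
  --scope $(az acr show --name myregistry --query id -o tsv)

# Tell the app to authenticate to ACR with that identity
az containerapp registry set \
  --name my-app --resource-group my-rg \
  --server myregistry.azurecr.io --identity system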

In essence, this gives you:

    • Tight security (no hardcoded credentials)

    • Ease of CI/CD integration with Azure DevOps, GitHub Actions, or other pipelines

    • Support for private registries using Docker Hub or other sources via secrets, if needed

Container Apps ensures that your build-to-deploy pipeline remains secure, streamlined, and enterprise-ready.


Azure Container App Environments: The Execution Boundary

Before deploying a Container App, you must choose or create an Azure Container App Environment — a logical boundary that groups related apps together. This environment determines networking, security, scaling boundaries, and whether you’re using the serverless Consumption plan or the Dedicated plan with Workload Profiles.

In simpler terms, the environment is like a container cluster boundary, but managed entirely by Azure.

🧱 What Does a Container App Environment Control?

    • Network Configuration
      You can associate the environment with a VNet (either Microsoft-managed or customer-managed), which allows:

          • Internal or external ingress
          • Access to private endpoints
          • Custom DNS and NSG controls
    • Scaling Scope
      All container apps inside the same environment scale independently, but share scaling infrastructure, such as the underlying KEDA controller or Dapr runtime.

    • Environment Variables and Secrets
      You can define shared secrets or Dapr components at the environment level, available across multiple apps.

    • Isolation
      Apps deployed in different environments are isolated from each other; they don’t share network, DNS, ingress rules, or resource pools.

🔀 Two Modes of Execution

Azure Container App Environments come in two execution models:

  1. Consumption (Serverless) Plan

      • You only pay for what you use (per second billing).
      • Supports scale-to-zero.
      • You cannot define Workload Profiles.
      • Infrastructure is entirely managed by Azure (compute is multi-tenant).
      • Good for event-driven or low-traffic workloads.
  2. Dedicated Plan with Workload Profiles

      • You pre-define resource pools (Workload Profiles).
      • Ideal for predictable workloads or higher traffic apps.
      • More cost control and isolation.
      • Still fully managed — you don’t deal with Kubernetes, but get more structure.

Workload Profiles in Azure Container Apps

As your applications scale and diversify, you may find that a one-size-fits-all compute model doesn’t meet every workload’s needs. Some apps are CPU-intensive, some are memory-hungry, while others are latency-sensitive. Azure Container Apps addresses this by introducing a feature called Workload Profiles — available when you use a Dedicated Container App Environment.

Workload Profiles allow you to define resource-optimized infrastructure pools and associate your Container Apps with them — without exposing node pools or forcing you to manage VMs like in AKS.

🔹 What Is a Workload Profile?

A Workload Profile is essentially a named compute profile that maps to a specific VM SKU under the hood. Each profile is provisioned and managed by Azure, and it comes with guaranteed CPU/memory configurations.

For example, you might define:

    • cpu-opt → 4 vCPU / 8 GB RAM optimized for compute workloads
    • memory-heavy → 2 vCPU / 16 GB RAM optimized for in-memory processing
    • general → 1 vCPU / 2 GB for average workloads

You can then deploy individual container apps into specific profiles based on their resource characteristics.

🔧 How It Works

    • You define Workload Profiles when creating or updating a Dedicated Environment.

    • Azure provisions the underlying infrastructure automatically.

    • At deployment time, you associate your Container App with the desired profile using its name.

    • Multiple apps can share the same profile, enabling efficient multi-tenancy within a dedicated environment.
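
Sketched with the CLI (the profile names and the D4 profile type are illustrative; available types vary by region):

# Create a Dedicated-capable environment
az containerapp env create \
  --name my-env --resource-group my-rg \
  --location westeurope --enable-workload-profiles

# Add a compute-optimized profile backed by a specific VM size
az containerapp env workload-profile add \
  --name my-env --resource-group my-rg \
  --workload-profile-name cpu-opt \
  --workload-profile-type D4 \
  --min-nodes 0 --max-nodes 3

# Deploy an app into that profile
az containerapp create \
  --name my-api --resource-group my-rg \
  --environment my-env \
  --image myregistry.azurecr.io/myapp:latest \
  --workload-profile-name cpu-opt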

✅ Benefits of Workload Profiles

    • Better Cost Optimization: Allocate only what’s needed — avoid overprovisioning.

    • Workload Segregation: Separate latency-sensitive or bursty apps from steady, background processes.

    • Infrastructure Abstraction: No need to manage node pools or VM SKUs directly.

    • Fine-Tuned Scaling: Each profile can scale independently based on demand.

⚠️ Important Considerations

    • Workload Profiles are only supported in Dedicated Container App Environments, not in the serverless Consumption plan.

    • You still don’t control things like node affinity, taints, or Pod-level scheduling — it’s all abstracted.

    • All apps within the same profile share the underlying compute pool — so noisy neighbor issues should be considered when planning workloads.


What You Can’t Do in Container Apps

Note: Azure Container Apps does not expose the Kubernetes control plane or API. You cannot use kubectl or manage cluster resources directly.

Container Apps is a high-level abstraction—meaning it hides many of the granular controls that Kubernetes exposes. While this greatly simplifies development and operations, it also introduces some limitations:

No Low-Level Pod Customization

  • You can’t define:

      • priorityClassName

      • tolerations, taints, affinity, or nodeSelector

      • init containers or sidecars beyond what the platform manages (Dapr being the notable exception)

      • Pod lifecycle hooks (liveness, readiness, and startup probes are supported, though)

No Access to K8s Workloads

      • No StatefulSets, DaemonSets, or Jobs

      • No shared multi-container Pods with separate containers for logging, monitoring, etc.

      • No access to persistent volumes via PVCs (except SMB-based file shares)

⚙️ Resource Requests and Limits: Partially Supported

    • You can configure CPU and Memory for each container, like:

      • 0.5 vCPU, 1 GB RAM

      • 1 vCPU, 2 GB RAM

    • But GPU support is limited to dedicated GPU workload profiles, and you can’t schedule containers based on priority or custom policies.

🔍 Monitoring and Logging is Integrated but Not Extensible

  • You get native Log Analytics integration, but can’t deploy custom logging agents or tweak log aggregation behavior.

These limitations are intentional—Container Apps trades away control for simplicity and opinionated defaults. For teams that require granular workload management, AKS is a better fit.


What Happened to Pods?

Have you noticed that when we talk about Azure Container Apps, the term “container” is used everywhere — but there’s no mention of Pods?

But wait — Kubernetes doesn’t understand containers directly. It schedules Pods, which may include one or more containers. So how is Azure Container Apps functioning without exposing this fundamental Kubernetes concept?

The answer lies in the design philosophy of Container Apps.

While Azure Container Apps is built on top of Kubernetes, Microsoft has created a fully managed platform abstraction that hides all the underlying Kubernetes components from the user.

You don’t see Pods, Deployments, ReplicaSets, or any kube-native objects — because you’re never interacting with Kubernetes directly.

Instead:

    • When you define a container app, Azure’s internal control plane translates it into Kubernetes-native resources behind the scenes.

    • Your container is indeed wrapped inside a Pod, but this Pod is managed entirely by the platform, and you don’t get access to it via kubectl or API.

    • Scaling is still powered by KEDA, ingress is managed by Envoy, and sidecars like Dapr are injected — all within a Kubernetes cluster — but those mechanics are abstracted from the developer.

This approach gives you the power of Kubernetes (like autoscaling, sidecar patterns, revision management), without exposing its operational complexity. You deploy containers — not Pods, not YAML files, and not Helm charts.

It’s a layer of thoughtful abstraction — making Kubernetes serve developers, rather than making developers learn Kubernetes.


Azure Container Apps vs AKS: Choosing the Right Platform

Let’s now break down when you should pick Container Apps and when AKS (Azure Kubernetes Service) makes more sense.

| Criteria | Azure Container Apps | AKS (Kubernetes) |
|---|---|---|
| Skill Requirement | Low: focus on container image and configuration | High: K8s concepts, YAML, Helm, controllers |
| Infrastructure Management | None: fully managed | Full responsibility for control plane and nodes |
| Autoscaling | Built-in with KEDA, scales to zero | HPA/KEDA needs setup; no scale-to-zero by default |
| Ingress | Simple, Envoy-based with TLS | Requires setup (Ingress Controller, cert-manager) |
| Secret Management | Easy integration with Key Vault | Custom setup needed (CSI driver, etc.) |
| Use Case Fit | Web APIs, microservices, event-driven workloads, Dev/Test | Complex, stateful apps, multi-container pods, custom schedulers |
| Custom Networking | Supported via VNet injection | Full CNI, NetworkPolicy, and routing control |
| Observability | Integrated Azure Monitor & Logs | Needs custom instrumentation |
| Pod-Level Control | Not available | Full control over affinity, taints, tolerations |
| Stateful Workloads | Limited (Azure Files only) | Full support via PVCs and StatefulSets |
| Cost Control | Great for bursty, idle workloads (consumption plan) | Reserved infrastructure, higher fixed cost |

✅ Use Container Apps When:

      • You want rapid deployment of APIs or backend services
      • You prefer serverless-style scaling
      • Your team lacks deep Kubernetes expertise
      • You want to integrate tightly with Azure without managing infra

✅ Use AKS When:

      • You need full control over Kubernetes APIs and workloads
      • You’re running stateful, multi-container, or GPU-based workloads
      • Your app requires fine-grained scheduling, affinity, and custom CNI
      • Your team is already comfortable with managing Kubernetes clusters

There’s no right or wrong platform—only the right abstraction for your needs.


Final Thoughts

Azure Container Apps represents a major step forward in how we think about deploying modern applications. It acknowledges that while Kubernetes is a powerful platform, not every team wants—or needs—to manage its intricacies.

With Container Apps, you get:

    • A developer-friendly experience

    • Enterprise-grade capabilities (KEDA, Dapr, VNet, Auth, Key Vault)

    • A clear, operationally simple path to production

    • Tight integration with the Azure ecosystem

You lose some control—but you gain velocity, simplicity, and focus.

It’s not Kubernetes-lite.

It’s Kubernetes refined for outcomes.

 
