Cloud providers
We deploy production workloads across all three major cloud providers (AWS, Google Cloud Platform, and Microsoft Azure) and choose among them based on the constraints of each engagement: existing investment, regulatory requirements, regional coverage, managed-service availability, and pricing for the specific workload shape. Cross-cloud and hybrid deployments are common when latency, cost, or data-residency requirements pull the architecture in different directions.
Deployments are typically multi-AZ from day one. The web tier sits behind a load balancer (ALB, Cloud Load Balancing, Azure Application Gateway) with auto-scaling groups spread across availability zones. The database tier runs on managed Postgres or MySQL (RDS, Cloud SQL, Azure Database for PostgreSQL/MySQL) with automated backups, point-in-time recovery, a standby in a second AZ for failover, and read replicas for read scaling. An in-memory cache on managed Redis (ElastiCache, Memorystore, Azure Cache for Redis) handles session state, hot-read caching, and rate limiting. Search runs on managed OpenSearch or Elasticsearch (Amazon OpenSearch Service, Elastic Cloud on GCP, Azure AI Search) with index replication across AZs. Object storage uses the provider's native service (S3, GCS, Azure Blob Storage) for static assets, file uploads, and backup destinations. Queues and pub/sub ride on managed services (SQS, Pub/Sub, Service Bus) for async work and integration plumbing.
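One way to keep this tier-by-provider correspondence explicit in tooling is a small lookup table. The sketch below is illustrative, not an authoritative catalog: the tier keys, provider keys, and the `service_for` helper are our own naming for this example, and the service names are the same ones mentioned above.

```python
# Hypothetical lookup table mapping each architectural tier to its
# managed-service equivalent on each provider. Illustrative names only;
# not an exhaustive or authoritative catalog.
MANAGED_SERVICES = {
    "load_balancer":  {"aws": "ALB",
                       "gcp": "Cloud Load Balancing",
                       "azure": "Application Gateway"},
    "database":       {"aws": "RDS",
                       "gcp": "Cloud SQL",
                       "azure": "Azure Database for PostgreSQL/MySQL"},
    "cache":          {"aws": "ElastiCache",
                       "gcp": "Memorystore",
                       "azure": "Azure Cache for Redis"},
    "search":         {"aws": "Amazon OpenSearch Service",
                       "gcp": "Elastic Cloud",
                       "azure": "Azure AI Search"},
    "object_storage": {"aws": "S3",
                       "gcp": "GCS",
                       "azure": "Azure Blob Storage"},
    "queue":          {"aws": "SQS",
                       "gcp": "Pub/Sub",
                       "azure": "Service Bus"},
}

def service_for(tier: str, provider: str) -> str:
    """Resolve the managed service used for a tier on a given provider."""
    try:
        return MANAGED_SERVICES[tier][provider]
    except KeyError:
        # Fail loudly rather than silently defaulting to a wrong service.
        raise ValueError(f"no managed-service mapping for tier={tier!r} "
                         f"on provider={provider!r}")
```

A table like this tends to live next to infrastructure code so that provisioning templates and runbooks agree on which service backs which tier on which cloud.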
The pattern that holds across providers: stateless application tier, stateful data tier on managed services where the provider's failure-handling is better than what we'd build, and an operational layer (logs, metrics, alerts, distributed tracing) that gives the on-call engineer the same visibility regardless of which cloud is underneath.
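The "same visibility regardless of which cloud is underneath" idea usually means normalizing provider-specific events into one flat schema before they reach dashboards and alerting. A minimal sketch, with the caveat that the per-provider field spellings and the output schema here are assumptions for illustration, not the actual formats any of these clouds emit:

```python
from datetime import datetime, timezone

def normalize_event(provider: str, raw: dict) -> dict:
    """Map a provider-specific event dict onto one flat schema so the
    on-call tooling is cloud-agnostic. Field names are hypothetical."""
    # One tiny adapter per provider; each picks out that cloud's field
    # spellings (illustrative only, not real payload contracts).
    adapters = {
        "aws":   lambda e: {"service": e["source"],
                            "severity": e["detail-type"],
                            "message": e["detail"]},
        "gcp":   lambda e: {"service": e["resource"],
                            "severity": e["severity"],
                            "message": e["textPayload"]},
        "azure": lambda e: {"service": e["resourceId"],
                            "severity": e["level"],
                            "message": e["properties"]},
    }
    event = adapters[provider](raw)
    # Common fields every normalized record carries, whatever the source.
    event["provider"] = provider
    event["observed_at"] = datetime.now(timezone.utc).isoformat()
    return event
```

With a normalizer at the ingestion edge, alert rules and dashboards are written once against the flat schema instead of three times against three payload shapes.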