LocalStack: Complete AWS Local Development and Testing Guide
LocalStack is a cloud service emulator that runs in a single container on your laptop or CI environment, providing a fully functional local AWS cloud stack. It enables developers to develop and test cloud applications offline, reducing costs and dramatically speeding up the development feedback loop. This guide covers LocalStack setup, service configuration, and production workflow patterns.
What is LocalStack?
LocalStack emulates AWS cloud services locally for development and testing:
Key Features
- AWS Service Emulation: 80+ AWS services including S3, Lambda, DynamoDB, SQS, SNS
- Docker-Based: Single container deployment with docker-compose
- AWS CLI Compatible: Use standard AWS CLI and SDKs
- Terraform/CDK/CloudFormation: Full IaC support
- CI/CD Integration: GitHub Actions, GitLab CI, Jenkins
- Cloud Pods: Save and share state across environments
- Hot Reloading: Lambda code changes without redeployment
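Before pointing tools at the emulator, it helps to confirm which services are actually up. A minimal sketch in Python, assuming the default edge port 4566 and the `/_localstack/health` endpoint (which returns a JSON map of service states such as `"available"`, `"running"`, or `"disabled"`); the helper names here are illustrative, not part of any API:

```python
import json
import urllib.request

def ready_services(health: dict) -> set:
    """Names of services the health payload reports as usable."""
    ok = {"available", "running"}
    return {name for name, state in health.get("services", {}).items() if state in ok}

def check_localstack(endpoint: str = "http://localhost:4566") -> set:
    """Fetch /_localstack/health and return the set of ready services."""
    with urllib.request.urlopen(f"{endpoint}/_localstack/health") as resp:
        return ready_services(json.load(resp))

# With the container running:
#   print(sorted(check_localstack()))
```

The same endpoint is handy as a readiness probe in CI before any `awslocal` commands run.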
LocalStack Community vs Pro
| Feature | Community (Free) | Pro |
|---|---|---|
| Core Services (S3, SQS, SNS, DynamoDB, Lambda) | Yes | Yes |
| API Gateway | Basic | Full (REST, HTTP, WebSocket) |
| IAM Enforcement | No | Yes |
| RDS/Aurora | No | Yes |
| ECS/EKS | No | Yes |
| Cognito | No | Yes |
| CloudWatch | Basic | Full |
| Step Functions | No | Yes |
| Cloud Pods | No | Yes |
| CI Analytics | No | Yes |
| Persistence | Basic | Advanced |
LocalStack vs Alternatives
| Feature | LocalStack | Moto | ElasticMQ | DynamoDB Local | AWS SAM Local |
|---|---|---|---|---|---|
| Multi-Service | 80+ services | 100+ mocks | SQS only | DynamoDB only | Lambda + API GW |
| Infrastructure | Docker container | Python library | Docker | Docker/JAR | Docker |
| IaC Support | Full | No | No | No | SAM templates |
| Real HTTP Endpoints | Yes | Test-only | Yes | Yes | Yes |
| AWS CLI Compatible | Yes | No | Partial | Yes | Yes |
| Language Agnostic | Yes | Python only | Yes | Yes | Limited |
Installation
Docker Compose
```yaml
version: '3.8'
services:
  localstack:
    image: localstack/localstack:3.4
    container_name: localstack
    ports:
      - "4566:4566"            # Gateway
      - "4510-4559:4510-4559"  # External services
    environment:
      - SERVICES=s3,sqs,sns,lambda,dynamodb,apigateway,iam,sts,cloudformation,ssm,secretsmanager,logs,events,stepfunctions
      - DEBUG=0
      - LAMBDA_EXECUTOR=docker-reuse
      - DOCKER_HOST=unix:///var/run/docker.sock
      - PERSISTENCE=1
    volumes:
      - "./volume:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
```

```bash
docker compose up -d
```

AWS CLI Configuration
```bash
# Install the awslocal wrapper
pip install awscli-local

# Or configure a profile
aws configure --profile localstack
# AWS Access Key ID: test
# AWS Secret Access Key: test
# Default region: us-east-1
# Default output format: json

# Use with --endpoint-url
aws --endpoint-url=http://localhost:4566 s3 ls

# Or use awslocal (auto-configures the endpoint)
awslocal s3 ls
```

Core Services
S3
```bash
# Create a bucket
awslocal s3 mb s3://my-bucket

# Upload files
awslocal s3 cp ./data.json s3://my-bucket/data.json

# List buckets
awslocal s3 ls

# Enable versioning
awslocal s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled

# Configure bucket notifications to SQS
awslocal s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "QueueConfigurations": [{
      "QueueArn": "arn:aws:sqs:us-east-1:000000000000:s3-events",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```

Lambda
```python
import json

def handler(event, context):
    body = json.loads(event.get('body', '{}'))
    name = body.get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!'})
    }
```

```bash
# Package and deploy the Lambda
zip function.zip lambda_function.py

awslocal lambda create-function \
  --function-name hello-function \
  --runtime python3.12 \
  --handler lambda_function.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::000000000000:role/lambda-role

# Invoke the Lambda (--cli-binary-format is required by AWS CLI v2
# to pass a raw JSON payload)
awslocal lambda invoke \
  --function-name hello-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body": "{\"name\": \"LocalStack\"}"}' \
  output.json

cat output.json
```

DynamoDB
```bash
# Create a table
awslocal dynamodb create-table \
  --table-name Orders \
  --attribute-definitions \
    AttributeName=orderId,AttributeType=S \
    AttributeName=customerId,AttributeType=S \
  --key-schema \
    AttributeName=orderId,KeyType=HASH \
  --global-secondary-indexes \
    'IndexName=CustomerIndex,KeySchema=[{AttributeName=customerId,KeyType=HASH}],Projection={ProjectionType=ALL}' \
  --billing-mode PAY_PER_REQUEST

# Put an item
awslocal dynamodb put-item \
  --table-name Orders \
  --item '{
    "orderId": {"S": "order-001"},
    "customerId": {"S": "cust-123"},
    "amount": {"N": "99.99"},
    "status": {"S": "pending"}
  }'

# Query items
awslocal dynamodb query \
  --table-name Orders \
  --key-condition-expression "orderId = :id" \
  --expression-attribute-values '{":id": {"S": "order-001"}}'
```

SQS and SNS
```bash
# Create an SQS queue
awslocal sqs create-queue --queue-name order-events

# Create an SNS topic
awslocal sns create-topic --name order-notifications

# Subscribe the SQS queue to the SNS topic
awslocal sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:000000000000:order-notifications \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:000000000000:order-events

# Publish a message
awslocal sns publish \
  --topic-arn arn:aws:sns:us-east-1:000000000000:order-notifications \
  --message '{"orderId": "order-001", "event": "created"}'

# Receive messages
awslocal sqs receive-message \
  --queue-url http://localhost:4566/000000000000/order-events
```

Terraform with LocalStack
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  access_key = "test"
  secret_key = "test"
  region     = "us-east-1"

  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3             = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    apigateway     = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    ssm            = "http://localhost:4566"
  }
}

resource "aws_s3_bucket" "data" {
  bucket = "data-bucket"
}

resource "aws_sqs_queue" "events" {
  name                       = "order-events"
  visibility_timeout_seconds = 30
  message_retention_seconds  = 86400
}

resource "aws_dynamodb_table" "orders" {
  name         = "Orders"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "orderId"

  attribute {
    name = "orderId"
    type = "S"
  }
}

resource "aws_lambda_function" "processor" {
  function_name = "order-processor"
  runtime       = "python3.12"
  handler       = "handler.main"
  filename      = "lambda.zip"
  role          = aws_iam_role.lambda_role.arn

  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.orders.name
      QUEUE_URL  = aws_sqs_queue.events.url
    }
  }
}
```

```bash
# Using the tflocal wrapper (auto-injects the LocalStack endpoints)
pip install terraform-local
tflocal init
tflocal plan
tflocal apply -auto-approve
```

CI/CD Integration
GitHub Actions
```yaml
name: Integration Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      localstack:
        image: localstack/localstack:3.4
        ports:
          - 4566:4566
        env:
          SERVICES: s3,sqs,dynamodb,lambda
          DEBUG: 0

    steps:
      - uses: actions/checkout@v4

      - name: Wait for LocalStack
        run: |
          pip install awscli-local
          timeout 60 bash -c 'until awslocal s3 ls 2>/dev/null; do sleep 2; done'

      - name: Setup infrastructure
        run: |
          awslocal s3 mb s3://test-bucket
          awslocal dynamodb create-table \
            --table-name TestTable \
            --attribute-definitions AttributeName=id,AttributeType=S \
            --key-schema AttributeName=id,KeyType=HASH \
            --billing-mode PAY_PER_REQUEST

      - name: Run tests
        run: pytest tests/integration/ -v
        env:
          AWS_ENDPOINT_URL: http://localhost:4566
          AWS_ACCESS_KEY_ID: test
          AWS_SECRET_ACCESS_KEY: test
          AWS_DEFAULT_REGION: us-east-1
```

Testing Patterns
Python Integration Tests
```python
import os

import boto3
import pytest

@pytest.fixture
def aws_clients():
    endpoint = os.environ.get('AWS_ENDPOINT_URL', 'http://localhost:4566')
    kwargs = {
        'endpoint_url': endpoint,
        'region_name': 'us-east-1',
        'aws_access_key_id': 'test',
        'aws_secret_access_key': 'test',
    }
    return {
        's3': boto3.client('s3', **kwargs),
        'sqs': boto3.client('sqs', **kwargs),
        'dynamodb': boto3.resource('dynamodb', **kwargs),
    }

def test_order_processing(aws_clients):
    # Setup
    table = aws_clients['dynamodb'].create_table(
        TableName='Orders',
        KeySchema=[{'AttributeName': 'orderId', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'orderId', 'AttributeType': 'S'}],
        BillingMode='PAY_PER_REQUEST'
    )
    table.wait_until_exists()

    # Test
    table.put_item(Item={'orderId': 'test-001', 'amount': 99})
    response = table.get_item(Key={'orderId': 'test-001'})

    assert response['Item']['orderId'] == 'test-001'
    assert response['Item']['amount'] == 99
```

Production Best Practices
Checklist
- Use `docker-compose.yml` to standardize LocalStack across the team
- Pin the LocalStack version for reproducible environments
- Use the `awslocal`/`tflocal` wrappers for cleaner commands
- Configure only needed services via the `SERVICES` env var
- Enable persistence for faster restart cycles
- Use init hooks (`/etc/localstack/init/ready.d/`) for automated setup
- Mirror production Terraform with separate provider configs
- Run integration tests against LocalStack in CI/CD
- Use environment variables to switch between LocalStack and real AWS
- Document service limitations vs real AWS behavior
- Test IAM policies locally before deploying to production
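The environment-variable switch from the checklist can be as small as adding the LocalStack endpoint and dummy credentials only when `AWS_ENDPOINT_URL` is set. A sketch, assuming that variable as the toggle (`client_kwargs` is a hypothetical helper name, not part of boto3):

```python
import os

def client_kwargs() -> dict:
    """boto3 client kwargs: target LocalStack when AWS_ENDPOINT_URL is set,
    otherwise fall through to the normal AWS credential chain."""
    kwargs = {"region_name": os.environ.get("AWS_DEFAULT_REGION", "us-east-1")}
    endpoint = os.environ.get("AWS_ENDPOINT_URL")
    if endpoint:
        kwargs.update(
            endpoint_url=endpoint,
            aws_access_key_id="test",       # LocalStack accepts any credentials
            aws_secret_access_key="test",
        )
    return kwargs

# Usage -- the same code path runs locally and in production:
#   s3 = boto3.client("s3", **client_kwargs())
```

Keeping the branch in one helper means application code never mentions LocalStack, so nothing emulator-specific can leak into production paths.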
Accelerate Your Cloud Development
Building efficient local development workflows with LocalStack requires a solid understanding of both AWS services and testing patterns. At chavkov.com, I deliver hands-on AWS and DevOps training that includes local development best practices.
Contact me to discuss training options for your team.