Aug 29, 2025

How to fine-tune the memory for a Lambda function

Danilo Desole

Background

As a developer, it can be challenging to determine the right amount of memory for a Lambda function. Developers use local development environments to develop and test code for AWS Lambda. While there are tools to simulate that environment, such as the SAM local invoke command and Docker, it remains challenging to predict how memory-intensive the Lambda function will be.

Understanding Memory-CPU Allocation

A crucial aspect of Lambda memory optimization is understanding that memory allocation directly determines CPU power. AWS Lambda uses a linear relationship between memory and CPU:

Key Technical Details:

  1. Memory-CPU Scaling:
  • At 1,769 MB of memory, your function receives the equivalent of 1 full vCPU
  • Below 1,769 MB, you get a proportional share of a vCPU
  • Above 1,769 MB, you can get up to 6 vCPUs at the maximum memory setting (10,240 MB)
  2. Memory Increments:
  • Memory can be configured in 1 MB increments from 128 MB to 10,240 MB
  • Each increment affects both memory availability and CPU allocation
  3. Performance Impact:
  • CPU-intensive functions may benefit from higher memory allocation even if they don’t need the RAM
  • I/O-bound functions may not see performance improvements with increased memory
  • Multi-threaded applications can leverage multiple vCPUs at higher memory allocations
  4. Cost Implications:
  • You pay for memory allocation, not actual usage
  • Higher memory = higher CPU = potentially faster execution = potentially lower total cost (a worked example follows the scenarios below)
  • The optimal allocation point balances memory cost against execution duration

Example Scenarios:

  • CPU-Intensive Function: A data processing function using 200 MB of RAM but intensive CPU work may benefit from 1,769 MB allocation to get full vCPU access
  • Memory-Heavy Function: An image processing function needing 2 GB of RAM automatically gets more than 1 vCPU, which can improve performance
  • Simple Function: A basic API endpoint may only need 256 MB and will not benefit from higher allocations
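
To make the cost trade-off concrete, here is a minimal sketch that estimates the approximate vCPU share and per-invocation compute cost for a given memory setting and duration. The pricing constant and the example durations are assumptions (typical x86 on-demand pricing and a hypothetical CPU-bound function that speeds up with more CPU); check the AWS Lambda pricing page for your region and architecture.

# Rough estimate of vCPU share and per-invocation compute cost for a given
# Lambda memory setting. PRICE_PER_GB_SECOND is an assumption (x86 on-demand
# pricing); verify against the AWS Lambda pricing page for your region.

FULL_VCPU_MB = 1769                   # ~1 full vCPU is allocated at 1,769 MB
PRICE_PER_GB_SECOND = 0.0000166667    # assumed USD price per GB-second

def approx_vcpus(memory_mb: int) -> float:
    """Lambda scales CPU roughly linearly with memory."""
    return memory_mb / FULL_VCPU_MB

def compute_cost_usd(memory_mb: int, duration_ms: float) -> float:
    """Compute cost of one invocation (excludes the per-request charge)."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# Hypothetical CPU-bound function: ~4,000 ms at 512 MB, ~1,200 ms at 1,769 MB.
for memory_mb, duration_ms in [(512, 4000), (1769, 1200)]:
    print(f"{memory_mb} MB -> ~{approx_vcpus(memory_mb):.2f} vCPU, "
          f"~${compute_cost_usd(memory_mb, duration_ms):.7f} per invocation")

In this hypothetical example the 1,769 MB setting costs roughly the same per invocation as 512 MB but finishes in less than a third of the time, which is exactly the trade-off described under Cost Implications above.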

Using CloudWatch Lambda Insights

One way to collect memory metrics from a Lambda function is to use CloudWatch Lambda Insights. The service collects, aggregates, and summarizes system-level metrics including CPU time, memory, disk, and network usage. It also collects, aggregates, and summarizes diagnostic information such as cold starts and Lambda worker shutdowns to help you isolate issues with your Lambda functions and resolve them quickly. The service provides a dashboard that presents all collected metrics and logs. This solution requires some developer work and might slightly impact Lambda performance because it relies on a Lambda extension. It also incurs additional cost according to standard CloudWatch pricing.
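
Enabling Lambda Insights amounts to attaching the Lambda Insights extension layer to the function and giving its execution role the CloudWatchLambdaInsightsExecutionRolePolicy managed policy. The snippet below is a minimal sketch of the layer attachment using boto3; the region, layer version, and function name are placeholders that must be adapted, and the current layer ARN for your region should be taken from the AWS documentation.

import boto3

# Sketch: attach the Lambda Insights extension layer to an existing function.
# REGION, FUNCTION_NAME, and the layer version are placeholders; look up the
# current layer ARN for your region in the AWS documentation. The function's
# execution role also needs CloudWatchLambdaInsightsExecutionRolePolicy.
REGION = "eu-west-1"
FUNCTION_NAME = "my-function"
INSIGHTS_LAYER = (
    f"arn:aws:lambda:{REGION}:580247275435:layer:LambdaInsightsExtension:38"
)

lambda_client = boto3.client("lambda", region_name=REGION)

# Preserve any layers that are already attached to the function.
current = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
existing_layers = [layer["Arn"] for layer in current.get("Layers", [])]

if INSIGHTS_LAYER not in existing_layers:
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME,
        Layers=existing_layers + [INSIGHTS_LAYER],
    )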

Using AWS Lambda Power Tuning

For automated and comprehensive memory optimization, consider using the AWS Lambda Power Tuning tool, an open-source utility that:

  • Automated Testing: Runs your function with different memory configurations (128 MB to 10,240 MB)
  • Cost-Performance Analysis: Provides detailed cost vs performance trade-off analysis
  • Visual Reports: Generates charts showing optimal memory allocation based on your specific workload
  • Easy Deployment: Available as a SAR (Serverless Application Repository) application for one-click deployment

To use the Power Tuning tool:

  1. Deploy it from the AWS Serverless Application Repository
  2. Execute the state machine with your Lambda function ARN
  3. Analyze the generated report to find the optimal memory configuration
  4. Apply the recommended memory setting to your function

This tool is particularly valuable for production workloads where precise optimization can result in significant cost savings.
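
As an illustration of step 2, the following sketch starts the deployed Power Tuning state machine with boto3. The state machine and function ARNs are placeholders, and the input fields shown (lambdaARN, powerValues, num, payload) follow the tool's documented input format; verify them against the version you actually deploy.

import json
import boto3

# Sketch: start an execution of the Power Tuning state machine (step 2 above).
# Both ARNs are placeholders for your own account and region.
sfn = boto3.client("stepfunctions", region_name="eu-west-1")

execution_input = {
    "lambdaARN": "arn:aws:lambda:eu-west-1:123456789012:function:my-function",
    "powerValues": [128, 256, 512, 1024, 1769, 3008],  # memory settings to test
    "num": 50,                    # number of invocations per memory setting
    "payload": {"key": "value"},  # a realistic test event for your function
}

response = sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:eu-west-1:123456789012:"
        "stateMachine:powerTuningStateMachine"
    ),
    input=json.dumps(execution_input),
)
print("Started execution:", response["executionArn"])

The execution output typically includes the measured cost and duration per memory setting along with a link to the visualization, which is what step 3 refers to as the generated report.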

Using CloudWatch Logs Insights

Another way to collect memory metrics from a Lambda function is to use CloudWatch Logs Insights. AWS Lambda sends its execution logs to CloudWatch, and these contain information about the execution time, maximum memory used, and cold start times. This approach comes with no additional cost (you already pay for the logs stored in CloudWatch) and requires no changes to the Lambda code. The log lines that report the maximum memory used have the following form:

REPORT RequestId: ABCD Duration: 58856.60 ms Billed Duration: 58857 ms Memory Size: 128 MB Max Memory Used: 127 MB Init Duration: 559.99 ms

These logs can be found in CloudWatch Logs Insights with the query shown below.

fields @timestamp, @message, @logStream, @log
| filter @message like /(?i)(max memory used)/
| sort @timestamp desc
| limit 10000

Users must select the Log Group to query, select a time period, and click the Run query button. Multiple Log Groups can be selected at the same time, although this approach is not recommended; instead, analyze one Lambda function at a time.

CloudWatch Logs Insights console
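
The same query can also be run programmatically. The sketch below is one way to do it with the CloudWatch Logs StartQuery and GetQueryResults APIs via boto3, extracting the Max Memory Used value from each REPORT line; the log group name and region are placeholders, and the regular expression assumes the REPORT format shown above.

import re
import time
import boto3

# Sketch: run the Logs Insights query via the API and collect the
# "Max Memory Used" values. LOG_GROUP and the region are placeholders.
logs = boto3.client("logs", region_name="eu-west-1")

LOG_GROUP = "/aws/lambda/my-function"
QUERY = """
fields @timestamp, @message
| filter @message like /(?i)(max memory used)/
| sort @timestamp desc
| limit 10000
"""

query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 7 * 24 * 3600,  # last 7 days
    endTime=int(time.time()),
    queryString=QUERY,
)["queryId"]

# Poll until the query finishes.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled", "Timeout"):
        break
    time.sleep(1)

max_memory_values = []
for row in result.get("results", []):
    message = next((f["value"] for f in row if f["field"] == "@message"), "")
    match = re.search(r"Max Memory Used: (\d+) MB", message)
    if match:
        max_memory_values.append(int(match.group(1)))

if max_memory_values:
    print(f"Peak memory used: {max(max_memory_values)} MB "
          f"across {len(max_memory_values)} invocations")

Collecting the values this way also makes it easy to feed them into the peak-usage calculation described in the next section.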

To simplify the investigation, the Pattern tab can be used to select the pattern that matches the requirements. Typically there are 2 patterns: one containing the cold start time, and the other containing information about memory consumption for consecutive executions.

CloudWatch Logs Insights console

Interpreting CloudWatch Logs Insights Results

When analyzing the CloudWatch Logs Insights results, the following steps should be followed:

  1. Sort by Token Value: In the Patterns view, order the results by the token values corresponding to “Max Memory Used” to identify the function’s memory consumption patterns
  2. Identify Peak Usage: Find the highest memory consumption values - these represent the function’s peak memory requirements
  3. Analyze Frequency: Use the “Event count” column to determine how often different memory levels are reached
  4. Calculate Safety Margin: Add 10-20% buffer to peak memory usage to account for variability
  5. Consider Cold Start Impact: Cold starts typically consume more memory, so this should be factored into allocation decisions
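
Steps 2 and 4 can be reduced to a small calculation. The sketch below takes a list of observed Max Memory Used values (for example, collected with the query above), adds a safety buffer, and clamps the result to Lambda's supported range; the 15% default is simply an assumed midpoint of the 10-20% guideline.

# Sketch: derive a recommended memory setting from observed peak usage
# (steps 2 and 4 above). The 15% buffer is an assumed midpoint of the
# 10-20% guideline; adjust it to your own risk tolerance.
LAMBDA_MIN_MB = 128
LAMBDA_MAX_MB = 10240

def recommend_memory(max_memory_used_mb: list[int], buffer: float = 0.15) -> int:
    """Peak observed usage plus a safety buffer, clamped to Lambda's limits."""
    peak = max(max_memory_used_mb)
    recommended = int(peak * (1 + buffer))
    return min(max(recommended, LAMBDA_MIN_MB), LAMBDA_MAX_MB)

# Example: values taken from the Max Memory Used field of REPORT lines.
observed = [96, 101, 118, 127, 122]
print(recommend_memory(observed))   # 127 MB peak * 1.15 ≈ 146 MB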

CloudWatch Logs Insights console

Memory Optimization Best Practices

  1. Testing Strategy:
  • Test with realistic payloads that match production data sizes
  • Include both typical and peak load scenarios in your testing
  • Run tests over extended periods to capture memory usage patterns
  2. Monitoring Approach:
  • Set up CloudWatch alarms for memory utilization above 90% (see the alarm sketch after this list)
  • Monitor both average and peak memory usage over time
  • Track memory trends after code changes or dependency updates
  3. Iterative Optimization:
  • Start with a generous memory allocation (for example 1 GB) and optimize downward
  • Make incremental changes (reduce by 64-256 MB at a time)
  • Test thoroughly after each adjustment to ensure performance isn’t degraded
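
As a sketch of the alarm suggested under Monitoring Approach, the snippet below creates a CloudWatch alarm on the memory_utilization metric that Lambda Insights publishes in the LambdaInsights namespace. Note that this metric is only available when Lambda Insights is enabled on the function, and the function name, region, and SNS topic ARN are placeholders.

import boto3

# Sketch: alarm when memory utilization exceeds 90% (Monitoring Approach above).
# memory_utilization is published by Lambda Insights, so the extension must be
# enabled on the function; the function name and SNS topic ARN are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="my-function-memory-above-90-percent",
    Namespace="LambdaInsights",
    MetricName="memory_utilization",
    Dimensions=[{"Name": "function_name", "Value": "my-function"}],
    Statistic="Maximum",
    Period=300,                  # evaluate over 5-minute windows
    EvaluationPeriods=3,         # three consecutive breaches before alarming
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:lambda-memory-alerts"],
)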

Conclusion

Fine-tuning AWS Lambda memory is critical for cost optimization and performance, potentially delivering 20-50% cost savings while avoiding execution failures. With Lambda’s current memory range of 128 MB to 10,240 MB (10 GB), choosing the right allocation significantly impacts both your AWS bill and application performance.

The three main approaches each serve different needs: CloudWatch Logs Insights offers cost-effective basic analysis, CloudWatch Lambda Insights provides comprehensive metrics with additional costs, and AWS Lambda Power Tuning delivers automated optimization for complex workloads.

Remember that memory allocation directly affects CPU performance and pricing, making optimization a balance between cost, performance, and reliability. Start with data-driven decisions, monitor continuously, and adjust iteratively based on real-world usage patterns.

Next Steps: Begin by analyzing your current Lambda functions using CloudWatch Logs Insights, identify your highest-cost or most frequently executed functions, and prioritize optimization efforts where you’ll see the greatest impact.
