S3 cost optimization: Automating savings with AWS CDK and lifecycle policies

 |  Eric Pinet

Amazon S3 is often one of the most overlooked AWS services when it comes to cost optimization. Yet in most AWS environments we review at Unicorne, S3 storage holds immediate, low-risk savings opportunities. Noncurrent object versions that have accumulated for years, abandoned multipart upload fragments, and data that was never transitioned to cheaper storage classes all add up quietly over time.

According to observations from the FinOps community, organizations that apply disciplined cloud optimization practices typically achieve annual savings of 25 to 40 percent on their cloud bills. For S3 specifically, lifecycle policies are among the easiest levers available: they do not affect production data, they can be implemented in a few lines of code, and they prevent unnecessary costs from accumulating in the future.

In this article, we look at the main sources of waste in S3, how to identify them using Storage Lens, and how to eliminate them systematically by codifying lifecycle policies with AWS CDK. Using Infrastructure as Code ensures consistency, governance, and repeatability across environments.

Identifying sources of waste with S3 Storage Lens

Before optimizing anything, you need visibility. Amazon S3 Storage Lens provides a consolidated view of S3 usage across an entire account or organization. The basic metrics are available at no cost. Two metrics deserve immediate attention.

Noncurrent Object Versions show how much storage is consumed by older versions of your objects. When versioning is enabled on a bucket, which is recommended for production data, S3 retains every version of every file ever uploaded. A bucket with 100 GB of active data can easily accumulate 500 GB or more of noncurrent versions if no expiration policy is defined. Every version is billed at the same rate, current or not: $0.023 per GB-month in S3 Standard in the us-east-1 region.
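To put numbers on that example, here is a quick back-of-the-envelope calculation using the S3 Standard price quoted above. The helper function is purely illustrative, not an AWS API:

```typescript
// Illustrative monthly cost estimate for the example above.
// Price is S3 Standard in us-east-1 as quoted in the text; adjust per region.
const STANDARD_USD_PER_GB_MONTH = 0.023;

function monthlyStorageCostUsd(gb: number): number {
  return gb * STANDARD_USD_PER_GB_MONTH;
}

const activeGb = 100;     // live data
const noncurrentGb = 500; // accumulated old versions

console.log(monthlyStorageCostUsd(activeGb).toFixed(2));     // cost of active data
console.log(monthlyStorageCostUsd(noncurrentGb).toFixed(2)); // cost of old versions
```

In this example the noncurrent versions cost five times more per month than the data actually in use, which is exactly the kind of ratio Storage Lens makes visible.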

Incomplete Multipart Uploads represent upload fragments that were never completed. Network interruptions, application crashes, or forgotten test scripts can leave these fragments behind. They serve no purpose, yet S3 continues to charge for them. This is pure waste.

To enable Storage Lens, open the S3 console and create a dashboard at the account or organization level. Keep the free metrics enabled. After 24 to 48 hours, review buckets that show a high ratio of noncurrent versions compared to total storage. Anything above 30 percent should be investigated. Also look for any nonzero volume of incomplete multipart uploads.
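The triage rule above can be expressed as a small helper. This is an illustrative sketch: the metric field names are invented for the example and are not the actual Storage Lens export schema.

```typescript
// Hypothetical per-bucket metrics, as you might assemble them from a
// Storage Lens export; field names are illustrative, not the export schema.
interface BucketMetrics {
  bucketName: string;
  totalBytes: number;             // all storage, current + noncurrent
  noncurrentVersionBytes: number; // storage held by old object versions
  incompleteMpuBytes: number;     // incomplete multipart upload fragments
}

// Flag a bucket when noncurrent versions exceed 30% of total storage,
// or when any incomplete multipart upload bytes exist at all.
function needsInvestigation(m: BucketMetrics): boolean {
  const ratio = m.totalBytes > 0 ? m.noncurrentVersionBytes / m.totalBytes : 0;
  return ratio > 0.3 || m.incompleteMpuBytes > 0;
}
```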

Codifying cleanup of noncurrent versions with CDK

Using Infrastructure as Code with AWS CDK allows lifecycle policies to be deployed in a consistent and version controlled way. Policies are reviewed through pull requests and applied automatically across environments.

For a new bucket with versioning enabled:

import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cdk from 'aws-cdk-lib';

const bucket = new s3.Bucket(this, 'DataBucket', {
  versioned: true,
  lifecycleRules: [{
    noncurrentVersionExpiration: cdk.Duration.days(90),
    noncurrentVersionsToRetain: 3,
  }],
});

This configuration keeps the three most recent noncurrent versions as a safety buffer; anything beyond those three is removed once it has been noncurrent for 90 days. This is a reasonable starting point for production buckets where rollback capability still matters.

For development or CI/CD buckets where historical versions have little value, the policy can be more aggressive:

lifecycleRules: [{
  noncurrentVersionExpiration: cdk.Duration.days(7),
  noncurrentVersionsToRetain: 1,
}],

The key benefit is that the policy is defined in code, stored in the repository, reviewed by the team, and applied consistently. Manual configuration in the console is no longer required.
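The same mechanism covers the third waste source mentioned in the introduction: data that is never moved to cheaper storage classes. A hedged sketch, using the same aws-cdk-lib constructs as above (the day thresholds are illustrative starting points, not recommendations for every workload):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

// Transition objects to cheaper storage classes as they age.
// S3 requires at least 30 days in Standard before moving to Standard-IA.
const bucket = new s3.Bucket(this, 'ArchiveBucket', {
  lifecycleRules: [{
    transitions: [
      {
        storageClass: s3.StorageClass.INFREQUENT_ACCESS,
        transitionAfter: cdk.Duration.days(30),
      },
      {
        storageClass: s3.StorageClass.GLACIER,
        transitionAfter: cdk.Duration.days(180),
      },
    ],
  }],
});
```

Transition rules only pay off when the data is genuinely cold; retrieval fees on Glacier classes can erase the savings for frequently accessed objects.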

Eliminating abandoned multipart uploads

Incomplete multipart uploads are one of the easiest problems to fix. In practice there is almost never a reason to keep them longer than a few days. A legitimate upload should complete within hours.

const bucket = new s3.Bucket(this, 'UploadBucket', {
  lifecycleRules: [{
    abortIncompleteMultipartUploadAfter: cdk.Duration.days(7),
  }],
});

Seven days is a conservative value that works well for most environments. For predictable workloads, one to three days may be sufficient. In accounts that never configured this rule, it can remove gigabytes of unused storage almost immediately.

Deployment best practices and pitfalls to avoid

Before applying these policies broadly, start with a small pilot. Choose two or three noncritical buckets and monitor the results in Storage Lens for one or two weeks. This confirms that behavior matches expectations.

Avoid over-optimizing production data. S3 Standard storage at $0.023 per GB-month is relatively inexpensive, so optimization effort should stay proportional to the savings it generates.

Document standard retention policies and share them with engineering teams. Some buckets may require exceptions due to regulatory or compliance requirements. These exceptions can be managed with tagging strategies. Compliance requirements should always take precedence over cost optimization.

Finally, measure the impact. Track changes using both Storage Lens and AWS Cost Explorer. The impact of multipart upload cleanup is usually immediate. Reductions in noncurrent versions appear gradually as the lifecycle rules take effect.

Conclusion

S3 lifecycle policies are what the FinOps community often calls quick wins. The effort required is small and the results are measurable. Most importantly, they introduce no risk to production data.

By defining these policies with AWS CDK, a one-time optimization becomes part of ongoing governance. Every new bucket automatically inherits the same rules and best practices.
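One way to make that inheritance concrete is a CDK Aspect that injects a default lifecycle configuration into any bucket in a stack that does not declare one. This is a sketch of the idea, not a hardened implementation; the rule values mirror the defaults used earlier in the article:

```typescript
import * as cdk from 'aws-cdk-lib';
import { IConstruct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';

// Aspect that adds a default lifecycle configuration to every
// CfnBucket in the construct tree that does not already have one.
class DefaultLifecycleAspect implements cdk.IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof s3.CfnBucket && node.lifecycleConfiguration === undefined) {
      node.lifecycleConfiguration = {
        rules: [{
          id: 'default-cleanup',
          status: 'Enabled',
          // Expire old versions after 90 days, keeping the 3 newest.
          noncurrentVersionExpiration: { noncurrentDays: 90, newerNoncurrentVersions: 3 },
          // Abort abandoned multipart uploads after 7 days.
          abortIncompleteMultipartUpload: { daysAfterInitiation: 7 },
        }],
      };
    }
  }
}

// Applied once per stack, e.g. in the app entry point:
// cdk.Aspects.of(stack).add(new DefaultLifecycleAspect());
```

Buckets with regulatory exceptions can opt out by defining their own lifecycle configuration, which the Aspect leaves untouched.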

However, lifecycle policies are only the starting point. Advanced S3 optimization can involve analyzing access patterns for S3 Intelligent-Tiering, optimizing request and transfer costs, or designing cross-region replication strategies. These decisions require a deep understanding of workloads and AWS pricing.

This is where experience makes the difference between minor savings and meaningful long term optimization.

At Unicorne, we help organizations approach AWS cost optimization in a structured way. That includes initial analysis with Storage Lens and automated implementation using Infrastructure as Code. Every dollar saved on storage can be reinvested into building and shipping better products.

Learn more:

AWS Documentation — S3 Lifecycle Configuration: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html

AWS CDK API Reference — S3 LifecycleRule: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3.LifecycleRule.html

AWS Documentation — S3 Storage Lens: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens.html

AWS S3 Pricing — Pricing by storage class: https://aws.amazon.com/s3/pricing/

FinOps Foundation — FinOps framework and principles: https://www.finops.org/framework/