You’ve just connected your AWS account to Stable.
In less than 24 hours, the dashboard starts to fill in.
Resources fall into place. Costs get attributed. And sometimes… an unpleasant surprise rises to the surface.
For many of our clients, the first few days on Stable have been a real eye-opener. Not because their infrastructure was a mess, but because no one had ever shown them so clearly what was really happening inside their AWS account.

What your first few days on Stable really reveal: 3 stories from clients who uncovered major opportunities
AWS cost optimization is often described as a long, complex process reserved for large teams with a dedicated FinOps department. Our clients’ experience tells a different story: the biggest gains often show up quickly, and in places no one was expecting.
Here are three real cases experienced by clients during their onboarding on Stable.
Case #1 — A $60,000 USD CloudWatch surprise
One B2B SaaS client integrated the Datadog monitoring tool into its AWS environment. The intention was sound: improve observability, centralize alerts, and gain better operational visibility.
What the team had not anticipated was that Datadog’s default configuration was querying custom Amazon CloudWatch metrics at a very high frequency. Every CloudWatch metric queried comes at a cost. Multiply that frequency by the number of enabled metrics, and the bill rises quickly, without anyone on the team connecting the addition of the tool to the increase in cloud costs.
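The multiplication described above is easy to sketch. The $0.01-per-1,000-metrics rate below is CloudWatch's published GetMetricData price; the metric count and polling frequency are hypothetical illustrations, not the client's actual configuration.

```python
# Rough, illustrative estimate of CloudWatch query costs driven by a
# monitoring integration. Only the per-request rate is a real published
# price; the workload numbers are hypothetical.

PRICE_PER_1000_METRICS_USD = 0.01  # GetMetricData: $0.01 per 1,000 metrics requested

def monthly_query_cost(num_metrics: int, polls_per_hour: int) -> float:
    """Cost of polling `num_metrics` custom metrics `polls_per_hour` times an hour."""
    metrics_requested_per_month = num_metrics * polls_per_hour * 24 * 30
    return metrics_requested_per_month / 1000 * PRICE_PER_1000_METRICS_USD

# Hypothetical example: 20,000 custom metrics polled every minute.
print(f"${monthly_query_cost(20_000, 60):,.2f}/month")
```

At that (hypothetical) scale the polling alone runs into thousands of dollars per month, which is how a well-intentioned observability rollout can quietly become a five-figure line item.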
Observed result: a $60,000 USD annualized increase attributed to CloudWatch.
It was only after onboarding onto Stable that the client saw CloudWatch rise to the top of its list of most expensive resources, with an abnormal growth trend that aligned exactly with the Datadog rollout. By adjusting the Datadog metric configuration, reducing granularity, and disabling non-essential metrics, the bill was brought back under control within days.
The lesson: adding an observability tool can become a cost issue in itself if the configuration is not adapted to the target AWS environment.
Case #2 — $14,000 USD recovered from 4 Lambda functions
Another client had been running a serverless application in production for several years. The Lambda architecture was working. The teams were happy with the performance. No one was touching the original configurations.
Except “it works” is not the same as “it’s optimized.”
During the initial analysis on Stable, four Lambda functions were quickly identified as anomalies: their memory allocation was significantly oversized compared to their actual usage, and they were running on x86 architecture.
Two fixes were applied:
Memory right-sizing: reducing the allocation to match what the functions were actually consuming, based on historical execution metrics.
ARM migration: Lambda functions running on ARM (Graviton) architecture are priced up to 20% lower per GB-second at equivalent performance, according to AWS's official documentation.
Savings achieved: more than $14,000 USD per year on just four functions.
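To make the arithmetic behind those two fixes concrete, here is a back-of-the-envelope sketch. The per-GB-second rates are AWS's published us-east-1 Lambda prices; the memory sizes, duration, and invocation volume are hypothetical examples, not the client's real functions.

```python
# Illustrative before/after comparison for memory right-sizing plus an
# x86 -> ARM migration on a single hypothetical Lambda function.

X86_PRICE_PER_GB_S = 0.0000166667  # published us-east-1 x86 rate
ARM_PRICE_PER_GB_S = 0.0000133334  # published us-east-1 arm64 rate (~20% lower)

def monthly_compute_cost(memory_mb: int, avg_duration_ms: float,
                         invocations_per_month: int, price_per_gb_s: float) -> float:
    """Lambda compute cost = GB-seconds consumed x per-GB-second price."""
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations_per_month
    return gb_seconds * price_per_gb_s

# Hypothetical function: 3,008 MB allocated but ~512 MB actually needed,
# 400 ms average duration, 50M invocations per month.
before = monthly_compute_cost(3008, 400, 50_000_000, X86_PRICE_PER_GB_S)
after = monthly_compute_cost(512, 400, 50_000_000, ARM_PRICE_PER_GB_S)
print(f"before: ${before:,.0f}/mo  after: ${after:,.0f}/mo  saved: ${before - after:,.0f}/mo")
```

Right-sizing does most of the work in this sketch because cost scales linearly with allocated memory; the ARM migration then shaves a further ~20% off the remaining bill.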
This case illustrates a common reality in serverless environments: the initial configurations defined during development or launch are rarely revisited. The infrastructure grows, but the original settings stay frozen in place.
Case #3 — An oversized OpenSearch cluster for a feature very few people use
A SaaS software company in the HR space hosts a multi-module application on AWS. One of those modules includes an advanced search feature powered by an Amazon OpenSearch Service cluster, a powerful technology designed to index and query massive volumes of unstructured data.
After connecting to Stable, the client discovered that OpenSearch ranked first in its cost breakdown by resource.
When that information was compared against actual usage data, the conclusion was clear: the feature powered by OpenSearch was barely being used by end customers. The cost-to-value ratio was deeply unbalanced.
The recommendation was to evaluate lower-cost alternatives for that specific feature, whether native full-text search in Amazon RDS using full-text indexes, or a lighter managed service better suited to the actual data volume.
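As a self-contained illustration of what a "lighter" full-text alternative looks like, the snippet below uses SQLite's built-in FTS5 module as a stand-in (on RDS for PostgreSQL, the equivalent would be a tsvector column with a GIN index). The table and data are, of course, hypothetical.

```python
# Minimal full-text search with a relational database's built-in indexing,
# instead of a dedicated search cluster. SQLite FTS5 is used here only to
# keep the example self-contained and runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("Onboarding guide", "How to onboard a new employee"),
    ("Payroll FAQ", "Common payroll questions and answers"),
])

# Full-text match, no external search engine required.
rows = conn.execute("SELECT title FROM docs WHERE docs MATCH 'payroll'").fetchall()
print(rows)
```

For a low-traffic feature over a modest data volume, this class of solution carries essentially zero incremental infrastructure cost, versus an always-on OpenSearch cluster billed by the hour.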
The lesson here is not technical. It is strategic: some AWS services are precision tools designed for heavy workloads. Using them for secondary, low-traffic features creates structurally high costs compared to the value delivered. The visibility Stable provides into the usage-to-cost ratio makes it possible to make these decisions based on objective data.
What these three cases have in common
These situations are technically different. But they all share the same root cause: a lack of real-time visibility into what the AWS environment is actually consuming.
None of these clients had made a major mistake. Each had made reasonable decisions in context. What was missing was simply a tool that connects every AWS resource to its real cost, its real usage, and how it evolves over time.
That is exactly what Stable brings from the very first hours after connection.
Want to see what Stable reveals in your account?
These discoveries are not exceptions. They are representative of what our teams regularly observe during early onboardings.
If you manage a growing AWS environment, an initial analysis on Stable can quickly identify the most impactful optimization opportunities, without commitment and without a long process.
Contact the Unicorne team to start your Stable onboarding and get an initial read on your AWS infrastructure.