As SOC professionals, we deal with massive amounts of data every day. Every endpoint, server, firewall, and application generates logs. Logs are crucial for monitoring, but they must be chosen carefully to ensure they provide value and enable cost-efficient, adequate monitoring of your environment. Many organizations simply onboard every log source into their SIEM, assuming that more data equals better visibility.

Not all log sources provide equally useful information. Some applications record every event within their scope, producing large volumes of data. A firewall, for example, typically logs all traffic, which adds up quickly when many users access the internet simultaneously. At such volumes, it becomes difficult to pick out the important information when monitoring your environment. Higher data volumes also drive up operational costs, as most cloud SIEM solutions charge for both data ingestion and retention.

On the other hand, some organizations are unaware of what they should monitor for, leaving them without access to critical information. In such cases, cyber incidents may go undetected until it is too late. Furthermore, during post-incident forensic analysis, analysts often discover that logs are missing or that visibility into the environment is insufficient because of inadequate logging practices.

So what can you do to ensure your organization ends up in neither scenario and gets the right amount of information from its log sources?

1. Identify critical assets
Start by identifying your crown jewels, such as financial systems, sensitive customer data, critical servers, or identity management systems.

Work closely with business stakeholders to ensure these systems, data, and processes truly reflect what is most critical to your organization. Focus monitoring and alerts on these first to address the highest-impact threats.

Tip: Map your environment into domains (endpoints, servers, network, identity, and data) and identify which assets have the highest business impact. Engage business owners to validate your list of crown jewels and to keep it updated.
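
To make this concrete, here is a minimal sketch of what such a crown jewel inventory could look like in code, assuming a simple five-point business impact scale. The asset names, domains, scores, and owners are hypothetical placeholders; replace them with the outcome of your stakeholder workshops.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    domain: str           # endpoints, servers, network, identity, or data
    business_impact: int  # 1 (low) to 5 (critical), validated by business owners
    owner: str            # accountable business owner who revalidates the entry

# Hypothetical example inventory; populate this with your stakeholders.
inventory = [
    Asset("erp-finance-01", "servers", 5, "cfo-office"),
    Asset("azure-ad-tenant", "identity", 5, "it-operations"),
    Asset("customer-db", "data", 4, "sales-director"),
    Asset("office-laptops", "endpoints", 2, "it-operations"),
]

# Crown jewels: the assets to monitor and alert on first.
crown_jewels = [a for a in inventory if a.business_impact >= 4]
for asset in sorted(crown_jewels, key=lambda a: -a.business_impact):
    print(f"{asset.name} ({asset.domain}): impact {asset.business_impact}, owner {asset.owner}")
```

Keeping the owner next to each asset makes the quarterly revalidation with business stakeholders straightforward: every entry has someone accountable for confirming it still belongs on the list.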

2. Review your log sources
Evaluate the logging capabilities of your current solutions. What events are being generated? Which systems are under-monitored? Are there gaps in coverage for your high-value assets?

Collaborate with network and infrastructure teams, application owners, or external suppliers to determine available data sources and ingestion methods, as well as possible gaps. This ensures key events are captured across all systems. Focus on high-fidelity logs from tools like EDR, NDR, firewalls, or IDS to maintain clear visibility into threats.

Tip: Regularly evaluate your current log sources to ensure your assets are correctly monitored for threats. Identify any gaps in monitoring and consider additional solutions or internal logging options.
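
A gap review like this can be made repeatable with a simple coverage check, sketched below. The source names and the name-based matching are illustrative assumptions; in practice you would export the asset list from your CMDB and the onboarded sources from your SIEM.

```python
# Assets that must be monitored (e.g., the crown jewels from step 1).
monitored_assets = {"erp-finance-01", "azure-ad-tenant", "customer-db"}

# Log sources currently onboarded, mapped to the assets they cover.
# Hypothetical names; in practice, export this mapping from your SIEM.
onboarded_sources = {
    "edr-agent": {"erp-finance-01"},
    "azure-ad-signin-logs": {"azure-ad-tenant"},
}

# Any monitored asset not covered by at least one source is a gap.
covered = set().union(*onboarded_sources.values())
gaps = monitored_assets - covered

for asset in sorted(gaps):
    print(f"Coverage gap: no log source onboarded for {asset}")
```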

3. Create a targeted logging strategy
Prioritize logging across three layers, as summarized in the table below. Use this framework to decide which data to collect, where to store it, and how to set priorities.

Log Ingestion Strategy Matrix

| Log Type | Destination | Purpose | Priority | Notes |
| --- | --- | --- | --- | --- |
| High-fidelity alerts (EDR, NDR, IDS, etc.) and crown jewel monitoring logs | SIEM | Real-time threat detection | High | Feed SIEM directly; actionable alerts only |
| Raw network / application / system logs | Data lake | Forensics / threat hunting | Medium | Ingest selectively; minimal-cost storage |
| Low-value / noisy events | Optional | Contextual information | Low | Avoid SIEM ingestion; keep for reference |

The key principle is signal over noise. Send high-fidelity alerts and crown jewel monitoring logs to the SIEM for real-time monitoring, and build targeted detection use cases on top of them, giving higher priority to threats against the crown jewels. Store supplementary logs in a data lake for deeper investigation, and filter out low-value or voluminous logs.

This approach ensures that SOC teams can focus on actionable data while keeping historical data available for forensics, threat hunting, or compliance investigations without overloading the SIEM.

Tip: Use this framework to reduce noise, lower costs, and maintain clarity across monitoring and investigative workflows. As with the previous steps, never design it in isolation; consult business and system owners.
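
As a rough illustration of the matrix in code, the sketch below routes events to the SIEM, the data lake, or nowhere, based on an assumed event category field. The category names are hypothetical and not tied to any specific vendor pipeline.

```python
# Routing rules derived from the log ingestion strategy matrix above.
# Event categories are illustrative; map them to your own source types.
SIEM_CATEGORIES = {"edr_alert", "ndr_alert", "ids_alert", "crown_jewel_log"}
DATA_LAKE_CATEGORIES = {"raw_network", "raw_application", "raw_system"}

def route_event(category: str) -> str:
    """Return the destination for an event category: siem, data_lake, or drop."""
    if category in SIEM_CATEGORIES:
        return "siem"       # high priority: real-time threat detection
    if category in DATA_LAKE_CATEGORIES:
        return "data_lake"  # medium priority: forensics and threat hunting
    return "drop"           # low value or noisy: keep out of the SIEM

for cat in ["edr_alert", "raw_network", "dns_debug"]:
    print(cat, "->", route_event(cat))
```

A filter like this usually lives in the log shipper or collection pipeline, so low-value events never reach the SIEM and never incur ingestion costs in the first place.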

4. Define retention policies
Retention is often overlooked or inconsistently implemented, but it is crucial for operational efficiency and cost management. Retaining everything indefinitely in the SIEM increases storage costs, slows queries, and makes alert triage difficult. Retaining too little means critical historical data may not be available for investigations or audits.

A balanced approach is to keep high-priority events in the SIEM for the minimum period required for detection and investigation. Store supplementary logs in a data lake with longer retention to support forensic or threat hunting activities. Proper retention policies maintain a low total cost of ownership, while ensuring critical data is available when needed.

Tip: Implement retention tiers. Keep SIEM alerts for 90 days and retain raw endpoint and network logs in the data lake for 12 to 24 months, depending on regulatory or forensic requirements. This approach ensures efficiency, cost control, and investigative readiness.
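
These tiers can be captured in a small policy table, as sketched below. The periods mirror the example figures above (90 days in the SIEM, up to 24 months in the data lake) and are assumptions to adapt, not recommendations for every environment.

```python
from datetime import datetime, timedelta, timezone

# Retention tiers mirroring the example in the tip; adjust to your requirements.
RETENTION = {
    "siem_alerts": timedelta(days=90),
    "data_lake_raw": timedelta(days=365 * 2),  # up to 24 months
}

def is_expired(tier: str, ingested_at: datetime) -> bool:
    """True if a record in the given tier has outlived its retention period."""
    return datetime.now(timezone.utc) - ingested_at > RETENTION[tier]

# A record ingested 120 days ago has expired in the SIEM tier but not in the lake.
example = datetime.now(timezone.utc) - timedelta(days=120)
print("siem_alerts expired:", is_expired("siem_alerts", example))      # True
print("data_lake_raw expired:", is_expired("data_lake_raw", example))  # False
```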

5. Continuously review and improve
Environments, threats, and monitoring requirements change constantly. A logging strategy that worked six months ago may no longer be sufficient. SOC teams should perform quarterly reviews of log sources, alert use cases, and coverage.

Use a framework like MaGMa for measuring the effectiveness of detection use cases. For example, a SIEM rule designed to detect brute-force attempts on critical servers might have generated 300 alerts in the last quarter, of which only five were actionable. This insight allows analysts to tune thresholds, refine correlation rules, or remove noisy alerts.

Tip: Use quarterly reviews to optimize log sources and assign accountability. For each detection rule, assign an analyst to tune it, verify its effectiveness, and update it if infrastructure changes. Use MaGMa to identify rules needing adjustments for continuous improvement.
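
One lightweight input for such reviews is an actionable-alert ratio per detection rule, sketched below. The counts are hypothetical, echoing the five-out-of-300 brute-force example, and the 5% threshold is an arbitrary placeholder; export real numbers from your SIEM or case management tool.

```python
# Hypothetical quarterly alert counts per rule; export these from your SIEM.
rules = {
    "brute-force-critical-servers": {"alerts": 300, "actionable": 5},
    "impossible-travel-signin": {"alerts": 40, "actionable": 12},
}

for name, stats in rules.items():
    ratio = stats["actionable"] / stats["alerts"]
    flag = "tune or retire" if ratio < 0.05 else "keep"
    print(f"{name}: {ratio:.1%} actionable -> {flag}")
```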

The bottom line

Effective security monitoring is not about ingesting everything; it is about ingesting the right signals. Focus on high-value alerts, store supplementary logs strategically, and continuously tune your environment. Keep in mind that high value means high value from both a business perspective and a security perspective; you need both.

By doing this, SOC teams can improve threat detection, streamline investigations, support compliance, and make incident response more efficient. Clarity always beats volume.

Effective monitoring is essential to complement proper logging. In the next post of this series, we will discuss the importance of implementing an appropriate monitoring strategy. Stay tuned!