Getting Started with ezLog: Setup, Tips, and Best Practices

Introduction
ezLog is a lightweight, developer-friendly logging solution designed to make application logging easier to set up, read, and act upon. Whether you’re building a small startup app or a complex distributed service, ezLog focuses on clear structure, minimal overhead, and actionable output so you can spend less time chasing logs and more time fixing the root cause.
Why choose ezLog?
- Simple configuration: minimal boilerplate to get logging working across environments.
- Structured output: JSON and human-readable formats supported.
- Contextual logs: built-in support for attaching request, user, and trace metadata.
- Performance-conscious: asynchronous writers and size-limited rotating files.
- Extensible: custom formatters, sinks, and integrations (e.g., alerting, metrics).
Quick overview of core concepts
- Logger: the primary object components call to record events.
- Level: severity of the event (e.g., DEBUG, INFO, WARN, ERROR).
- Sink: destination for logs (console, file, remote).
- Formatter: controls log output format.
- Context: metadata attached to logs (request id, user id, trace id).
Installation
(Common install methods for several ecosystems.)
- Node (npm): npm install ezlog
- Python (pip): pip install ezlog
- Go (module): go get github.com/ezlog/ezlog
Basic setup examples
Node.js
const ezlog = require('ezlog');

const logger = ezlog.createLogger({
  level: 'info',
  sink: 'console',
  format: 'pretty' // or 'json'
});

logger.info('Server started', { port: 3000 });
logger.error('Failed to connect to DB', { retry: true });
Python
from ezlog import create_logger

logger = create_logger(level='INFO', sink='console', format='pretty')
logger.info('Server started', extra={'port': 3000})
logger.error('Failed to connect to DB', extra={'retry': True})
Go
import "github.com/ezlog/ezlog"

logger := ezlog.New(ezlog.Config{Level: ezlog.InfoLevel, Sink: "console", Format: "pretty"})
logger.Info("Server started", ezlog.Fields{"port": 3000})
logger.Error("Failed to connect to DB", ezlog.Fields{"retry": true})
Configuration best practices
- Use environment-based configuration: keep levels and sinks configurable via environment variables (e.g., EZLOG_LEVEL).
- Use structured (JSON) logs in production for easier parsing by log aggregators; use pretty/human format locally.
- Keep log levels conservative in production (INFO or WARN); enable DEBUG dynamically when needed.
- Rotate files and cap retention to avoid unbounded disk usage.
- Send critical errors to an alerting sink (e.g., email, Slack, PagerDuty) with rate limiting to prevent alert storms.
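The rate-limiting point above deserves a concrete shape. Here is a minimal sketch of a rate-limited alert wrapper in Python using only the standard library; the class name and parameters are illustrative, not part of ezLog's API. The idea is to wrap whatever transport actually delivers the alert (a Slack webhook, a PagerDuty client) and suppress anything beyond a budget per time window.

```python
import time

class RateLimitedAlerter:
    """Forwards alerts, but at most `max_alerts` per `window` seconds."""

    def __init__(self, send, max_alerts=5, window=60.0, clock=time.monotonic):
        self.send = send            # callable that actually delivers the alert
        self.max_alerts = max_alerts
        self.window = window
        self.clock = clock          # injectable for testing
        self.sent_at = []           # timestamps of recently sent alerts

    def alert(self, message):
        now = self.clock()
        # Drop timestamps that have fallen outside the window.
        self.sent_at = [t for t in self.sent_at if now - t < self.window]
        if len(self.sent_at) < self.max_alerts:
            self.sent_at.append(now)
            self.send(message)
            return True
        return False  # suppressed: alert storm in progress

# usage: wrap your real alert transport
sent = []
alerter = RateLimitedAlerter(sent.append, max_alerts=2, window=60.0)
for i in range(5):
    alerter.alert(f"DB connection failed (attempt {i})")
print(len(sent))  # only the first 2 alerts get through
```

A production version would also log a summary of suppressed alerts so you know a storm happened.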
Context and correlation
Attach request IDs, trace IDs, and user IDs to logs to correlate events across services.
Example (Node):
// attach context middleware
app.use((req, res, next) => {
  req.logger = logger.child({
    requestId: req.headers['x-request-id'] || generateId()
  });
  next();
});

// later, inside a route handler:
req.logger.info('handling request', { path: req.path });
Performance considerations
- Use asynchronous sinks or batching to avoid blocking request threads.
- Sample verbose logs (e.g., DEBUG) in high-throughput paths.
- Avoid logging large objects; serialize or truncate payloads intentionally.
- Offload heavy serialization to background workers if necessary.
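To make the sampling suggestion concrete, here is a rough sketch of probabilistic sampling for a hot code path, written in plain Python (the `sampled` helper is hypothetical, not an ezLog feature): only a configured fraction of calls actually reach the underlying logger.

```python
import random

def sampled(logger_fn, rate):
    """Wrap a log call so only ~`rate` fraction of invocations go through."""
    def wrapper(msg, **fields):
        if random.random() < rate:
            logger_fn(msg, **fields)
            return True
        return False
    return wrapper

# usage: keep ~1% of DEBUG lines on a hot path
lines = []
debug = sampled(lambda msg, **f: lines.append((msg, f)), rate=0.01)
for i in range(10_000):
    debug("cache miss", key=i)
# roughly 100 of the 10,000 calls are recorded
```

Dynamic variants adjust `rate` at runtime based on throughput, which pairs well with the runtime controls discussed later in this post.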
Security and privacy
- Scrub or redact sensitive fields (passwords, tokens, credit card numbers) before logging.
- Avoid logging full PII unless necessary and ensure access controls on log storage.
- Mask or hash identifiers when logs are used for analytics where anonymity is required.
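A redaction pass can be as simple as a recursive walk over the structured payload before it reaches a sink. The sketch below is a generic illustration in Python, not ezLog's built-in redaction; the key list is an assumption you would adapt to your own data.

```python
SENSITIVE_KEYS = {"password", "token", "credit_card", "ssn"}  # adapt to your data

def redact(record, keys=SENSITIVE_KEYS, placeholder="[REDACTED]"):
    """Return a copy of a log record with sensitive fields masked, recursively."""
    if isinstance(record, dict):
        return {
            k: placeholder if k.lower() in keys else redact(v, keys, placeholder)
            for k, v in record.items()
        }
    if isinstance(record, list):
        return [redact(v, keys, placeholder) for v in record]
    return record

event = {"user": "alice", "password": "hunter2", "meta": {"token": "abc123"}}
print(redact(event))
# {'user': 'alice', 'password': '[REDACTED]', 'meta': {'token': '[REDACTED]'}}
```

Key-based redaction catches the common cases; pattern-based scanners (e.g. for card numbers embedded in free-text messages) are a useful second layer.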
Testing and validation
- Unit-test that expected messages are emitted at correct levels.
- Use snapshot tests for formatter output to detect accidental format changes.
- Validate JSON logs with schema validators in CI to ensure downstream parsers won’t break.
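The CI validation idea can start small: check each emitted JSON line against the required fields and types before a downstream parser ever sees it. This is a minimal hand-rolled checker using only the standard library (a real pipeline would more likely use a JSON Schema validator); the field list reflects the unified schema suggested later in this post.

```python
import json

REQUIRED = {"timestamp": str, "level": str, "message": str}

def validate_log_line(line):
    """Check one JSON log line against a minimal required-field schema."""
    record = json.loads(line)
    problems = []
    for field, ftype in REQUIRED.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return problems  # empty list means the line is valid

good = '{"timestamp": "2024-01-01T00:00:00Z", "level": "INFO", "message": "ok"}'
bad = '{"level": 3, "message": "oops"}'
print(validate_log_line(good))  # []
print(validate_log_line(bad))   # ['missing field: timestamp', 'wrong type for level: int']
```

Run this over a sample of captured log output in CI and fail the build on any non-empty result.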
Integrations and tooling
- Log aggregators: ELK/Elastic, Splunk, Datadog, Loki.
- Tracing: OpenTelemetry for trace IDs and spans.
- Alerting: webhook, Slack, PagerDuty sinks.
- Monitoring: emit metrics for error rates and logging throughput.
Example: Deploy-ready configuration
Production:
- Level: INFO
- Format: JSON
- Sinks: file (rotating) + remote aggregator
- Redaction: enabled
- Alerts: ERROR -> PagerDuty (rate-limited)

Development:
- Level: DEBUG
- Format: pretty
- Sinks: console
- Redaction: minimal (to aid debugging)
Troubleshooting common issues
- Missing logs: check level settings and sink availability.
- High I/O or CPU: switch to batched/asynchronous sinks and reduce verbose logging.
- Broken parsers: ensure consistent JSON schema across environments.
- Sensitive data leaks: audit logs with automated scanners and add redaction rules.
Tips & advanced patterns
- Use child loggers to attach module/service-specific context.
- Implement unified schema (timestamp, level, message, service, env, trace_id, request_id, extra).
- Correlate logs with traces and metrics for end-to-end observability.
- Implement log sampling and dynamic sampling rate adjustment.
- Provide runtime controls (feature flags, admin endpoints) to change logging levels without redeploy.
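As a sketch of the last point, here is what a runtime level switch looks like with Python's standard `logging` module (standing in for ezLog, whose runtime API isn't covered here); an admin endpoint or feature flag would simply call `set_level`.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def set_level(name: str):
    """Change the effective level at runtime, e.g. from an admin endpoint."""
    logging.getLogger("app").setLevel(getattr(logging, name.upper()))

logger.debug("hidden at INFO")   # suppressed
set_level("debug")
logger.debug("now visible")      # emitted after the runtime switch
```

The key property is that no redeploy or restart is needed: verbosity goes up only while you are actively debugging, then back down.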
Conclusion
ezLog is built to reduce friction in application logging: clear configuration, structured logs, performance-aware sinks, and integration-ready features. Start small with console logging and grow into structured, rotated file or aggregator-backed setups as your needs evolve. With proper context, redaction, and alerting, ezLog helps you find and fix issues faster while keeping systems reliable.