Memory Management Guide

Overview

The Safeguards library includes comprehensive memory management and caching features that optimize resource usage and improve performance. This guide covers the key components and how to use them.

Memory Manager

The MemoryManager class provides centralized memory optimization and resource tracking:

from safeguards.core.memory_manager import MemoryManager

# Initialize memory manager
memory_manager = MemoryManager(gc_threshold=(700, 10, 10))

# Create object pool
pool = memory_manager.create_pool(
    name="request_pool",
    factory=lambda: Request(),
    max_size=100
)

# Use pooled objects
obj = pool.acquire()
try:
    # Use object
    process_request(obj)
finally:
    pool.release(obj)

# Track resources
resource = create_resource()
memory_manager.track_resource(resource)

# Cleanup
memory_manager.cleanup_resources()

Features

  1. Object Pooling

     • Reuse objects to reduce allocation overhead
     • Thread-safe pool operations
     • Configurable pool sizes

  2. Resource Tracking

     • Automatic cleanup of unused resources
     • Weak reference tracking
     • Asynchronous cleanup

  3. Cache Management

     • Namespace-based caching
     • Cache statistics tracking
     • Selective cache clearing
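The pooling behavior above can be sketched with a minimal thread-safe pool. This is an illustrative sketch, not the actual `MemoryManager` implementation; the `SimplePool` class and its internals are invented for the example:

```python
import threading
from collections import deque


class SimplePool:
    """Minimal thread-safe object pool (illustrative sketch only)."""

    def __init__(self, factory, max_size=100):
        self._factory = factory
        self._max_size = max_size
        self._items = deque()
        self._lock = threading.Lock()

    def acquire(self):
        """Return a pooled object, or build a fresh one if the pool is empty."""
        with self._lock:
            if self._items:
                return self._items.popleft()
        return self._factory()

    def release(self, obj):
        """Return an object to the pool; drop it if the pool is full."""
        with self._lock:
            if len(self._items) < self._max_size:
                self._items.append(obj)
            # Over capacity: the object is simply dropped and left to the GC.


pool = SimplePool(factory=dict, max_size=2)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
print(b is a)  # True -- the released object is reused, not reallocated
```

The key point is that `acquire`/`release` avoid repeated allocation for hot objects while the lock keeps pool bookkeeping safe under concurrency.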

Cache Manager

The CacheManager provides advanced caching strategies:

from safeguards.core.cache_manager import CacheManager

cache_manager = CacheManager()

# LRU Cache
cache_manager.create_lru_cache("results", capacity=1000)
cache_manager.put_in_cache("lru", "results", key, value)
result = cache_manager.get_from_cache("lru", "results", key)

# Timed Cache
cache_manager.create_timed_cache("api_results", ttl_seconds=300)
cache_manager.put_in_cache("timed", "api_results", key, value)
result = cache_manager.get_from_cache("timed", "api_results", key)

# Function Memoization
@cache_manager.memoize(ttl_seconds=60)
def expensive_operation(x, y):
    return x + y

Caching Strategies

  1. LRU (Least Recently Used)

     • Keeps the most recently used items, evicting the least recently used at capacity
     • Fixed capacity
     • Thread-safe operations

  2. Timed Cache

     • Time-based expiration
     • Automatic cleanup of expired entries
     • Configurable TTL

  3. Function Memoization

     • Automatic caching of function results
     • Supports both LRU and timed caching
     • Handles complex argument types
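Timed memoization can be illustrated with a small self-contained decorator. This is a sketch of the technique, not the `CacheManager.memoize` implementation; `memoize_ttl` and the `calls` counter are invented for the example:

```python
import functools
import time


def memoize_ttl(ttl_seconds):
    """Cache results keyed by positional arguments, expiring after ttl_seconds."""

    def decorator(fn):
        cache = {}  # args tuple -> (timestamp, result)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            entry = cache.get(args)
            if entry is not None and now - entry[0] < ttl_seconds:
                return entry[1]  # fresh entry: cache hit
            result = fn(*args)
            cache[args] = (now, result)  # store with timestamp
            return result

        return wrapper

    return decorator


calls = []


@memoize_ttl(ttl_seconds=60)
def expensive_operation(x, y):
    calls.append((x, y))
    return x + y


expensive_operation(1, 2)
expensive_operation(1, 2)  # served from cache; the body runs only once
print(len(calls))  # 1
```

An LRU cache differs only in its eviction rule: instead of a timestamp check, it discards the entry that was accessed longest ago once capacity is reached.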

Best Practices

  1. Memory Management

     • Use object pools for frequently allocated objects
     • Track resources that need cleanup
     • Configure GC thresholds based on application needs

  2. Caching

     • Choose the appropriate cache type (LRU vs. timed)
     • Monitor cache statistics
     • Set reasonable capacities and TTLs
     • Use memoization for expensive computations

  3. Resource Cleanup

     • Always release pooled objects
     • Regularly call cleanup methods
     • Monitor memory usage

Configuration

Memory Manager

MemoryManager(
    gc_threshold=(700, 10, 10)  # Optional GC thresholds
)
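The `gc_threshold` tuple corresponds to the thresholds of Python's generational garbage collector; the sketch below uses the standard `gc` module directly, on the assumption that the manager applies the same values:

```python
import gc

# Generation 0 is collected after 700 net object allocations; generations 1
# and 2 are collected after 10 collections of the generation below them.
gc.set_threshold(700, 10, 10)
print(gc.get_threshold())  # (700, 10, 10)
```

Raising the first threshold trades memory headroom for fewer collection pauses, which can help allocation-heavy workloads.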

Cache Types

  1. LRU Cache

     create_lru_cache(
         name="cache_name",
         capacity=1000  # Maximum number of items
     )

  2. Timed Cache

     create_timed_cache(
         name="cache_name",
         ttl_seconds=300  # Time to live in seconds
     )

Monitoring

Cache Statistics

# Get cache stats
stats = cache_manager.get_stats("cache_name")
print(f"Hits: {stats['hits']}, Misses: {stats['misses']}")
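The hit ratio is a useful derived metric when watching these counters over time. A small helper, assuming the stats dictionary exposes `hits` and `misses` counts as shown above:

```python
def hit_ratio(stats):
    """Fraction of lookups served from cache; 0.0 when the cache is unused."""
    total = stats["hits"] + stats["misses"]
    return stats["hits"] / total if total else 0.0


print(hit_ratio({"hits": 90, "misses": 10}))  # 0.9
```

A persistently low hit ratio usually means the capacity or TTL is too small for the workload, or the keys have too little reuse to be worth caching.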

Memory Usage

  • Monitor object pool utilization
  • Track cache sizes
  • Watch for memory leaks
  • Use system monitoring tools

Error Handling

  • Handle pool exhaustion
  • Manage cache misses
  • Implement retry mechanisms
  • Log memory issues
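One way to handle pool exhaustion is a bounded retry with exponential backoff. This is a generic sketch: it assumes the acquire callable signals exhaustion by returning `None`, so adapt the failure check to the real pool API:

```python
import time


def acquire_with_retry(acquire, attempts=3, backoff_seconds=0.05):
    """Retry an acquire callable, backing off between attempts (sketch)."""
    for attempt in range(attempts):
        obj = acquire()
        if obj is not None:
            return obj
        # Exponential backoff: 1x, 2x, 4x, ... the base delay.
        time.sleep(backoff_seconds * (2 ** attempt))
    raise RuntimeError("pool exhausted after retries")


# Usage with a stand-in pool that frees an object on the second attempt:
responses = iter([None, "request_obj"])
print(acquire_with_retry(lambda: next(responses)))  # request_obj
```

Logging each failed attempt before re-raising gives the memory-issue audit trail the checklist above calls for.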

Examples

See the examples directory for:

  • Object pool usage patterns
  • Caching strategies
  • Resource cleanup
  • Performance optimization