# Observability
All three runtimes provide observability features for production deployments.
## Rust: Structured Logging

ai-lib-rust uses the `tracing` ecosystem:
```rust
use tracing_subscriber;

// Enable logging (installs the default fmt subscriber)
tracing_subscriber::fmt::init();

// All AI-Lib operations emit structured log events
let client = AiClient::new("openai/gpt-4o").await?;
```

Log levels:
- `INFO`: request/response summaries
- `DEBUG`: protocol loading, pipeline stages
- `TRACE`: individual frames, JSONPath matches
## Rust: Call Statistics

Every request returns usage statistics:

```rust
let (response, stats) = client.chat()
    .user("Hello")
    .execute_with_stats()
    .await?;

println!("Model: {}", stats.model);
println!("Provider: {}", stats.provider);
println!("Prompt tokens: {}", stats.prompt_tokens);
println!("Completion tokens: {}", stats.completion_tokens);
println!("Total tokens: {}", stats.total_tokens);
println!("Latency: {}ms", stats.latency_ms);
```
## Python: Call Statistics

```python
response, stats = await client.chat() \
    .user("Hello") \
    .execute_with_stats()

print(f"Tokens: {stats.total_tokens}")
print(f"Latency: {stats.latency_ms}ms")
```
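Because the stats object breaks token usage into prompt and completion counts, it can feed per-request cost accounting. A minimal sketch, where the `Stats` dataclass and the per-1k-token prices are illustrative stand-ins rather than the library's actual types or any real pricing:

```python
from dataclasses import dataclass

@dataclass
class Stats:
    """Stand-in for the stats object returned with each response."""
    prompt_tokens: int
    completion_tokens: int

def estimate_cost(stats: Stats, in_per_1k: float, out_per_1k: float) -> float:
    """Token-based cost estimate; rates are per 1000 tokens."""
    return (stats.prompt_tokens / 1000) * in_per_1k \
         + (stats.completion_tokens / 1000) * out_per_1k

cost = estimate_cost(Stats(prompt_tokens=1200, completion_tokens=300),
                     in_per_1k=0.005, out_per_1k=0.015)
print(f"${cost:.4f}")  # $0.0105
```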
## TypeScript: Call Statistics

```typescript
const { response, stats } = await client
  .chat()
  .user('Hello')
  .executeWithStats();

console.log(`Model: ${stats.model}`);
console.log(`Provider: ${stats.provider}`);
console.log(`Prompt tokens: ${stats.promptTokens}`);
console.log(`Completion tokens: ${stats.completionTokens}`);
console.log(`Total tokens: ${stats.totalTokens}`);
console.log(`Latency: ${stats.latencyMs}ms`);
```
## Python: Metrics (Prometheus)

```python
from ai_lib_python.telemetry import MetricsCollector

metrics = MetricsCollector()

client = await AiClient.builder() \
    .model("openai/gpt-4o") \
    .metrics(metrics) \
    .build()

# After some requests...
prometheus_text = metrics.export_prometheus()
```
## TypeScript: Metrics (Prometheus)

```typescript
import { MetricsCollector } from '@hiddenpath/ai-lib-ts/telemetry';

const metrics = new MetricsCollector();

const client = await AiClient.builder()
  .model('openai/gpt-4o')
  .metrics(metrics)
  .build();

// After some requests...
const prometheusText = metrics.exportPrometheus();
```

Tracked metrics:
- `ai_lib_requests_total`: request count by model/provider
- `ai_lib_request_duration_seconds`: latency histogram
- `ai_lib_tokens_total`: token usage by type
- `ai_lib_errors_total`: error count by type
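The string produced by the Prometheus export is plain text in the standard exposition format. As a sketch of what consuming it might look like, here is a minimal parser over a hand-written sample; the sample lines and values are illustrative, not real library output:

```python
# Sample in Prometheus exposition format, shaped like the text a
# metrics exporter returns. Values are made up for illustration.
sample = """\
# HELP ai_lib_requests_total Request count by model/provider
# TYPE ai_lib_requests_total counter
ai_lib_requests_total{model="gpt-4o",provider="openai"} 42
ai_lib_errors_total{type="timeout"} 3
"""

def parse_metrics(text: str) -> dict:
    """Map each sample line (metric name plus labels) to its float value."""
    samples = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE metadata
        key, value = line.rsplit(" ", 1)
        samples[key] = float(value)
    return samples

parsed = parse_metrics(sample)
print(parsed['ai_lib_errors_total{type="timeout"}'])  # 3.0
```

In production you would normally let Prometheus scrape an HTTP endpoint that serves this text rather than parse it yourself; the parser just makes the format concrete.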
## Python: Distributed Tracing (OpenTelemetry)

```python
from ai_lib_python.telemetry import Tracer

tracer = Tracer(
    service_name="my-app",
    endpoint="http://jaeger:4317",
)

client = await AiClient.builder() \
    .model("openai/gpt-4o") \
    .tracer(tracer) \
    .build()
```
## TypeScript: Distributed Tracing (OpenTelemetry)

```typescript
import { Tracer } from '@hiddenpath/ai-lib-ts/telemetry';

const tracer = new Tracer({
  serviceName: 'my-app',
  endpoint: 'http://jaeger:4317',
});

const client = await AiClient.builder()
  .model('openai/gpt-4o')
  .tracer(tracer)
  .build();
```

Traces include spans for:
- Protocol loading
- Request compilation
- HTTP transport
- Pipeline processing
- Event mapping
## Python: Health Monitoring

```python
from ai_lib_python.telemetry import HealthChecker

health = HealthChecker()
status = await health.check()

print(f"Healthy: {status.is_healthy}")
print(f"Details: {status.details}")
```
## TypeScript: Health Monitoring

```typescript
import { HealthChecker } from '@hiddenpath/ai-lib-ts/telemetry';

const health = new HealthChecker();
const status = await health.check();

console.log(`Healthy: ${status.isHealthy}`);
console.log(`Details: ${status.details}`);
```
## Python: User Feedback

Collect feedback on AI responses:

```python
from ai_lib_python.telemetry import FeedbackCollector

feedback = FeedbackCollector()

# After getting a response
feedback.record(
    request_id=stats.request_id,
    rating=5,
    comment="Helpful response",
)
```
## TypeScript: User Feedback

```typescript
import { FeedbackCollector } from '@hiddenpath/ai-lib-ts/telemetry';

const feedback = new FeedbackCollector();

// After getting a response
feedback.record({
  requestId: stats.requestId,
  rating: 5,
  comment: 'Helpful response',
});
```
## Resilience Observability

Monitor circuit breaker and rate limiter state:

```rust
// Rust
let state = client.circuit_state(); // Closed, Open, HalfOpen
let inflight = client.current_inflight();
```

```python
# Python
signals = client.signals_snapshot()
print(f"Circuit: {signals.circuit_state}")
print(f"Inflight: {signals.current_inflight}")
```

```typescript
// TypeScript
const signals = client.signalsSnapshot();
console.log(`Circuit: ${signals.circuitState}`);
console.log(`Inflight: ${signals.currentInflight}`);
```
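These snapshots can drive load-shedding decisions at the application layer, for example rejecting new work while the breaker is open. A hedged sketch, where the `Signals` dataclass and the inflight threshold are illustrative stand-ins for the snapshot object above:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Stand-in for the resilience snapshot; states mirror the doc:
    Closed, Open, HalfOpen."""
    circuit_state: str
    current_inflight: int

def should_shed_load(signals: Signals, max_inflight: int = 32) -> bool:
    """Reject new work when the breaker is open or the client is saturated."""
    if signals.circuit_state == "Open":
        return True
    return signals.current_inflight >= max_inflight

print(should_shed_load(Signals("Open", 0)))    # True
print(should_shed_load(Signals("Closed", 5)))  # False
```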