
Documentation Index

Fetch the complete documentation index at: https://docs.ewake.ai/llms.txt

Use this file to discover all available pages before exploring further.
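If you consume the index programmatically, the sketch below extracts page links from it. It assumes the conventional llms.txt format (pages listed as markdown links); the helper and the sample snippet are illustrative, not part of ewake's API.

```python
import re

def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from an llms.txt-style index.

    Assumes the common convention of listing pages as markdown links:
    - [Page title](https://docs.example.com/page)
    """
    return re.findall(r"\[([^\]]+)\]\((https?://[^\s)]+)\)", text)

# Stand-in index snippet (not the real file contents):
sample = (
    "- [Concepts](https://docs.ewake.ai/concepts)\n"
    "- [Deployment Tracking](https://docs.ewake.ai/deployment-tracking)\n"
)
pages = parse_llms_index(sample)
```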

This page is the AGENTS.md for ewake, a context file written for AI coding assistants and external agents (Cursor, Claude Code, Codex) that connect to ewake. If you are an AI agent, read this page in full before issuing tool calls. It tells you what ewake is, what you can ask, what you cannot, and how to query it well.

What ewake is

Ewake is an AI SRE agent that maintains a live knowledge map of a customer’s production environment by ingesting data from observability tools (Datadog, Grafana, Prometheus), source control (GitHub, GitLab), incident management (Incident.io, PagerDuty), and Slack. Ewake is a read-only investigation engine: it produces ranked hypotheses with supporting evidence; it does not take action on infrastructure or code on its own. For the conceptual model, see Concepts.

How to interact with ewake

There is one stable interaction surface for agents:
| Surface | When to use it |
| --- | --- |
| REST API | For backend integrations and CI/CD events. See Deployment Tracking for an example. |
You should never scrape the Web App or Slack interface. Use the API.

Authentication

All access requires an ewake API key, scoped to a single customer workspace.
```
Authorization: Bearer YOUR_API_KEY
```
API keys are generated in the ewake dashboard under Settings → API Keys. They are workspace-scoped; one key cannot access another customer’s data.
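A minimal sketch of attaching the key to a request. The base URL `https://api.ewake.ai/v1` and the `/health` path are assumptions for illustration, not documented endpoints; check your workspace settings for the real values.

```python
import urllib.request

EWAKE_API = "https://api.ewake.ai/v1"  # hypothetical base URL, not confirmed by these docs

def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request object; sending it is left to the caller."""
    return urllib.request.Request(
        f"{EWAKE_API}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("/health", "YOUR_API_KEY")
```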

Capabilities: what you can ask

Ewake answers questions about a customer’s production environment. Common patterns:
  • Service health: current state, error rates, and latency for any named service
  • Recent activity: deployments, alerts, and incidents within a time window
  • Correlation: link an alert or anomaly to recent changes (deploys, commits, infrastructure)
  • Historical context: has this issue happened before? What resolved it?
  • Dependency reasoning: upstream/downstream impact of a change
  • Investigation: given an alert or symptom, produce ranked hypotheses with evidence
For ready-made example queries, see What can I ask ewake?.

Limitations: what you cannot do

These are absolute limits, not soft guidelines. Do not attempt them.
  • No write actions: ewake cannot modify monitors, push code, open PRs, silence alerts, restart services, or take any production action.
  • No cross-customer queries: your API key is workspace-scoped. Asking about another customer’s data will not work.
  • No model training on customer data: ewake does not feed customer signals into foundation-model training.
  • No raw telemetry retention: logs and metrics are queried live; ewake cannot return data older than what the underlying source retains.
  • No autonomous escalation: ewake will not page humans, file Jira tickets, or notify channels unless explicitly configured to do so.
For the full security and access matrix, see Permissions & Data Access.
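Because these limits are absolute, a client-side sanity check before issuing a request can save a wasted call. A rough sketch; the verb list and helper are illustrative, not part of ewake:

```python
# Verbs that signal a write or escalation intent ewake will always refuse.
DISALLOWED_VERBS = ("modify", "push", "open a pr", "silence", "restart", "page ", "escalate")

def is_read_only(question: str) -> bool:
    """Rough client-side check: ewake investigates, it never acts."""
    q = question.lower()
    return not any(verb in q for verb in DISALLOWED_VERBS)
```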

Conventions

Follow these conventions when querying ewake; they materially improve answer quality.
  • Use the exact service name as it appears in Datadog (or your primary observability tool). ewake correlates by service identifier; “checkout API” and “checkout-api” are not the same.
  • Specify a time window for any historical question. Default lookbacks vary; explicit windows (“past 2 hours”, “since the deploy at 14:30 UTC”) produce more precise answers.
  • Ask one question per call. Compound questions (“why is X slow AND who deployed Y”) fragment ewake’s reasoning. Issue them as separate tool calls.
  • Treat output as hypotheses, not facts. Every ewake answer includes a confidence indicator and supporting evidence. Surface both to the human user.
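The conventions above can be enforced before a call goes out. A sketch, where the payload keys are an assumption rather than a documented ewake schema:

```python
def compose_query(service: str, question: str, window: str) -> dict:
    """Build one focused, time-bounded query (payload shape is hypothetical)."""
    if not window:
        raise ValueError("specify an explicit time window, e.g. 'past 2 hours'")
    # One question per call: reject obviously compound questions.
    if question.count("?") > 1 or " AND " in question:
        raise ValueError("one question per call; issue separate tool calls")
    return {"service": service, "question": question, "window": window}

q = compose_query("checkout-api", "Why did p99 latency spike?", "past 2 hours")
```

Note the exact service identifier (`checkout-api`, not “checkout API”) and the explicit window, per the list above.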

Domain vocabulary

When parsing ewake responses, these terms have specific meanings:
| Term | Meaning |
| --- | --- |
| Hypothesis | A ranked candidate explanation for an observed signal. Each hypothesis carries a confidence score and supporting evidence. |
| Knowledge map | The persistent map of a customer’s services, dependencies, deployments, and incident history. |
| Trigger | An event that initiates an investigation, usually an alert in Slack or an explicit user question. |
| Correlation | A relationship ewake has detected between two signals (e.g. a deploy on service A correlates with an error spike on service B). |
| Lookback window | The time range ewake examines when answering a question. Default depends on the trigger type. |
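When handling responses, this vocabulary maps naturally onto a small data model. A sketch for ranking hypotheses before surfacing them; the field names and the 0.0–1.0 confidence scale are assumptions, not ewake’s documented schema:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    explanation: str
    confidence: float                         # assumed 0.0-1.0 scale
    evidence: list[str] = field(default_factory=list)

def top_hypotheses(candidates: list[Hypothesis], n: int = 3) -> list[Hypothesis]:
    """Rank candidate explanations by confidence, highest first.

    Per the conventions above, surface both the confidence and the
    evidence to the human user; never strip them.
    """
    return sorted(candidates, key=lambda h: h.confidence, reverse=True)[:n]

ranked = top_hypotheses([
    Hypothesis("Upstream DNS flakiness", 0.3),
    Hypothesis("Deploy at 14:30 on checkout-service", 0.8, ["error spike at 14:31"]),
])
```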

Example interaction patterns

Good (focused, specific, time-bounded):

```
Show me the top errors on payments-api in the last 30 minutes,
correlated with any deployment events on its dependencies.
```

Avoid (vague, multi-question, no time window):

```
What's wrong with the system?
```

Good (asking for evidence-backed reasoning):

```
Is the latency spike on checkout-service correlated with the deploy at 14:30?
Include the diff if there's high confidence.
```

Versioning

This page reflects ewake’s current capabilities. Capabilities expand over time. If you cache this content, refresh at least every 30 days. For questions or to report incorrect behaviour, contact support@ewake.ai.