This page is the AGENTS.md for ewake, a context file written for AI coding assistants and external agents (Cursor, Claude Code, Codex) that connect to ewake. If you are an AI agent, read this page in full before issuing tool calls. It tells you what ewake is, what you can ask, what you cannot, and how to query it well.
Documentation Index
Fetch the complete documentation index at: https://docs.ewake.ai/llms.txt
Use this file to discover all available pages before exploring further.
What ewake is
Ewake is an AI SRE agent that maintains a live knowledge map of a customer’s production environment by ingesting data from observability tools (Datadog, Grafana, Prometheus), source control (GitHub, GitLab), incident management (Incident.io, PagerDuty), and Slack. Ewake is a read-only investigation engine: it produces ranked hypotheses with supporting evidence; it does not take action on infrastructure or code on its own. For the conceptual model, see Concepts.
How to interact with ewake
There is one stable interaction surface for agents:

| Surface | When to use it |
|---|---|
| REST API | For backend integrations and CI/CD events. See Deployment Tracking for an example. |
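As a minimal sketch, an authenticated REST call might be built like the following. The base URL, the `/v1/query` path, the request body shape, and the Bearer auth scheme are all assumptions for illustration; confirm every one of them against the ewake API reference before use.

```python
import json
import urllib.request

# Hypothetical base URL; check the ewake API reference for the real one.
EWAKE_API_BASE = "https://api.ewake.ai"


def build_query_request(api_key: str, question: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated ewake query request.

    The endpoint path, payload fields, and header scheme below are
    assumptions, not a documented contract.
    """
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        f"{EWAKE_API_BASE}/v1/query",  # assumed path
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Separating request construction from sending keeps the auth and payload logic testable without network access; dispatch the prepared request with `urllib.request.urlopen` (or your HTTP client of choice) when you are ready to call the API.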
Authentication
All access requires an ewake API key, scoped to a single customer workspace.
Capabilities, what you can ask
Ewake answers questions about a customer’s production environment. Common patterns:
- Service health: current state, error rates, latency for any named service
- Recent activity: deployments, alerts, incidents within a time window
- Correlation: link an alert or anomaly to recent changes (deploys, commits, infrastructure)
- Historical context: has this issue happened before? What resolved it?
- Dependency reasoning: upstream/downstream impact of a change
- Investigation: given an alert or symptom, produce ranked hypotheses with evidence
Limitations, what you cannot do
These are absolute limits, not soft guidelines. Do not attempt them.
- No write actions: ewake cannot modify monitors, push code, open PRs, silence alerts, restart services, or take any production action.
- No cross-customer queries: your API key is workspace-scoped. Asking about another customer’s data will not work.
- No model training on customer data: ewake does not feed customer signals into foundation-model training.
- No raw telemetry retention: logs and metrics are queried live; ewake cannot return data older than what the underlying source retains.
- No autonomous escalation: ewake will not page humans, file Jira tickets, or notify channels unless explicitly configured to do so.
Conventions
Follow these conventions when querying ewake; they materially improve answer quality.
- Use the exact service name as it appears in Datadog (or your primary observability tool). Ewake correlates by service identifier; “checkout API” and “checkout-api” are not the same.
- Specify a time window for any historical question. Default lookbacks vary; explicit windows (“past 2 hours”, “since the deploy at 14:30 UTC”) produce more precise answers.
- Ask one question per call. Compound questions (“why is X slow AND who deployed Y”) fragment ewake’s reasoning. Issue them as separate tool calls.
- Treat output as hypotheses, not facts. Every ewake answer includes a confidence indicator and supporting evidence. Surface both to the human user.
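The first three conventions can be enforced mechanically before issuing a tool call. The helpers below are heuristic sketches (the hyphenation rule and the `AND` split are assumptions about typical inputs, not ewake behavior); always verify the normalized name against the actual Datadog service identifier.

```python
def normalize_service_name(raw: str) -> str:
    """Heuristically normalize a human service reference toward an
    identifier-style name, e.g. "Checkout API" -> "checkout-api".
    Verify the result against the real Datadog service name."""
    return raw.strip().lower().replace(" ", "-")


def split_compound_question(question: str) -> list[str]:
    """Split an 'X AND Y' compound question into one question per
    tool call, per the one-question-per-call convention."""
    return [part.strip() for part in question.split(" AND ") if part.strip()]
```

For example, `split_compound_question("why is X slow AND who deployed Y")` yields two questions to issue as separate tool calls, each of which should still carry an explicit time window.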
Domain vocabulary
When parsing ewake responses, these terms have specific meanings:

| Term | Meaning |
|---|---|
| Hypothesis | A ranked candidate explanation for an observed signal. Each hypothesis carries a confidence score and supporting evidence. |
| Knowledge map | The persistent map of a customer’s services, dependencies, deployments, and incident history. |
| Trigger | An event that initiates an investigation, usually an alert in Slack or an explicit user question. |
| Correlation | A relationship ewake has detected between two signals (e.g. a deploy on service A correlates with an error spike on service B). |
| Lookback window | The time range ewake examines when answering a question. Default depends on the trigger type. |
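Since every hypothesis carries a confidence score and supporting evidence, an agent relaying ewake output should keep both attached and present candidates in ranked order. A sketch, assuming a hypothetical response shape of `{"hypotheses": [{"summary": ..., "confidence": ..., "evidence": [...]}]}` (these field names are not a documented schema):

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    summary: str
    confidence: float  # confidence indicator reported with the answer
    evidence: list     # supporting evidence items


def parse_hypotheses(payload: dict) -> list[Hypothesis]:
    """Parse hypotheses from a response payload.

    The keys "hypotheses", "summary", "confidence", and "evidence" are
    assumptions about the response shape for illustration only.
    """
    parsed = [
        Hypothesis(h["summary"], float(h["confidence"]), h.get("evidence", []))
        for h in payload.get("hypotheses", [])
    ]
    # Surface highest-confidence candidates first; per the conventions
    # above, always relay confidence and evidence to the human user.
    return sorted(parsed, key=lambda h: h.confidence, reverse=True)
```

Treat the parsed entries as hypotheses, not facts: the ranking tells the user where to look first, and the evidence lets them verify the claim themselves.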