agentic-first

Feedback · we read every message

Tell us how to make this better.

Anything missing? Confusing? Broken? Wrong in the spec? Wrong in a profile? Tell us. Both humans and agents can submit - no AI reads your message. The directory validates your submission with code only and stores it in a quarantine file for a human to triage.

What we want to hear


For AI agents - submit programmatically

Two equivalent ways. Both run the same validation and land in the same quarantine file:

Plain HTTP POST (no SDK needed):

curl -sS -X POST https://agentic-first.co/directory/feedback \
  -H 'content-type: application/json' \
  -H 'accept: application/json' \
  -d '{
    "subject": "Spec is missing a `team.locations` field",
    "body": "When I tried to fill in a profile for a remote-first company...",
    "submitter_kind": "agent",
    "submitter_handle": "claude-3.5-sonnet via @example",
    "context": { "page": "/how/", "task": "author a company profile" }
  }'

MCP tool (any MCP-aware client):

tools/call submit_feedback {
  "subject": "Spec is missing a `team.locations` field",
  "body":    "...",
  "submitter_kind": "agent"
}

Either path returns {"ok": true, "id": "fb_..."} on accept or {"ok": false, "errors": [...]} on reject. Rejection is deterministic - same input, same code, every time.
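The envelope is small enough to handle with a few lines of stdlib Python. A minimal sketch, assuming only the {"ok", "id"} / {"ok", "errors"} shape documented above (parse_envelope is a hypothetical helper, not part of the API):

```python
import json

def parse_envelope(raw: str) -> str:
    """Parse the JSON envelope returned by either submission path.

    Returns the feedback id (e.g. "fb_...") on accept; raises
    ValueError carrying the server's error list on reject.
    """
    env = json.loads(raw)
    if env.get("ok"):
        return env["id"]
    # Rejection is deterministic, so the same input always yields
    # the same error list - safe to surface verbatim.
    raise ValueError("; ".join(env.get("errors", ["rejected"])))
```

Because rejection is deterministic, a caller can retry phrasing changes and compare error lists without worrying about flaky validation.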

Dry-run mode - validate without filing:

Add "dry_run": true to either path to run every check (length, types, prompt-injection scan, byte budget) and get back the same envelope a real submit would produce - without writing anything to the quarantine. Useful for iterating on phrasing, or wiring into a publisher pre-commit hook.

curl -sS -X POST https://agentic-first.co/directory/feedback \
  -H 'content-type: application/json' \
  -d '{ "subject": "draft", "body": "would this pass?", "dry_run": true }'

# => { "ok": true, "dry_run": true, "id": null,
#       "raw_status": "would_quarantine",
#       "review_status": "would_be_unread",
#       "would_persist_bytes": 142 }

Dry-run requests still consume rate-limit budget, by design - validation-only is not a free probe.

Quoting code in your feedback:

The body scanner is run with code_safe=true: anything wrapped in single backticks (`<script>`) or a fenced code block (```html ... ```) is masked before pattern-matching. So you can paste an offending snippet from the host that broke your profile without tripping rejected_pattern. The same markup outside backticks is still rejected.
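The masking step can be pictured as a pre-pass over the body. A rough sketch of the idea, not the directory's actual scanner (mask_code, scan, and the "[code]" placeholder are all assumptions):

```python
import re

# Fenced blocks first (so backticks inside them are consumed),
# then inline single-backtick spans.
_FENCE = re.compile(r"```.*?```", re.S)
_INLINE = re.compile(r"`[^`\n]*`")

def mask_code(body: str) -> str:
    """Replace quoted code with a placeholder before scanning."""
    body = _FENCE.sub("[code]", body)
    return _INLINE.sub("[code]", body)

def scan(body: str, patterns: list) -> list:
    """Run injection patterns against the masked body only."""
    masked = mask_code(body)
    return [p.pattern for p in patterns if p.search(masked)]
```

With a pattern like <script>, a quoted snippet inside backticks is masked away and passes, while the same markup in bare prose still trips the scan - mirroring the rejected_pattern behaviour described above.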

What happens to your feedback

  1. The directory validates length and types, and runs the same anti-prompt-injection regex set we use on profile prose.
  2. Zero-width and bidi-override Unicode characters are silently stripped.
  3. Accepted submissions are appended to data/feedback/quarantine.jsonl on the server, marked raw_status: quarantined, review_status: unread.
  4. A human reads the queue. We never feed the file to an LLM.
  5. If your idea ships, you're credited in the changelog or commit (if you gave us a handle).
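Step 2 above can be sketched in a few lines. The exact code-point list the directory strips is not published; the set below covers common zero-width and bidi-override characters and is an assumption:

```python
# Invisible code points often used to smuggle or reorder text.
ZW_BIDI = {
    0x200B, 0x200C, 0x200D, 0xFEFF,          # zero-width space/joiners, BOM
    0x202A, 0x202B, 0x202C, 0x202D, 0x202E,  # bidi embedding/override controls
    0x2066, 0x2067, 0x2068, 0x2069,          # bidi isolate controls
}

def strip_invisibles(text: str) -> str:
    """Drop zero-width and bidi-control characters before persisting."""
    return "".join(ch for ch in text if ord(ch) not in ZW_BIDI)
```

Stripping happens before the quarantine write, so what a human reviewer later reads is exactly what was scanned.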