Hello, Cape!
The Cape API is a privacy API offered through several HTTP endpoints.
Complete API documentation is available at api.capeprivacy.com
Get a Cape Token
To get started, you will need to sign up for a Cape account and get an API key.
For this tutorial, we will access the API key through the CAPE_API_KEY
environment variable.
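If you want to confirm the key is available before making any calls, a quick check from Python looks like the following (a minimal sketch; the CAPE_API_KEY variable name comes from this tutorial, not from the API itself):

import os

# Read the API key from the environment; fail early if it has not been set.
api_key = os.getenv("CAPE_API_KEY")
if not api_key:
    raise RuntimeError("Set the CAPE_API_KEY environment variable with your Cape API key.")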
Authorization
All calls to the Cape API are authorized with an HTTP Bearer token. For every request, include the token in the Authorization header:
Authorization: Bearer <token>
- python
- curl
import os
import requests

# `your_request` is a placeholder for the JSON request body (see the full example below).
resp = requests.post(
    "https://api.capeprivacy.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.getenv('CAPE_API_KEY')}"},
    json=your_request,
)
curl -X "POST" "https://api.capeprivacy.com/v1/chat/completions" \
-H "accept: application/json" \
-H "Authorization: Bearer $CAPE_API_KEY"
Call the Completions Endpoint with Auto-redaction
Calling the Cape Completions endpoint is very similar to calling OpenAI's Completions endpoint. By setting format to redacted, the Cape Completions endpoint automatically handles de-identification and re-identification: the input is de-identified before it is sent to the Large Language Model (LLM), and the LLM's response is then re-identified with the redacted entities.
De-identification of text
De-identification removes Personally Identifiable Information (PII) from the input so that the LLM never sees PII while processing a completion.
We will use the completions endpoint at https://api.capeprivacy.com/v1/chat/completions, which automatically removes PII from the query you pass in. Note that redacting PII causes some context to be lost: if you ask about a public figure, the LLM won't see the person's name, so it's generally a good idea to include some context about the entities in your prompt to get better responses.
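For example, if you want the model to reason about a person whose name will be redacted, you can add that context to the prompt yourself. The sketch below reuses the request shape from the full example later in this tutorial; the prompt contents are purely illustrative:

import os
import requests

# Add context about the entity so the answer is still useful after the name is redacted.
messages = [
    {"role": "system", "content": "You are a helpful assistant answering questions about a redacted document."},
    {"role": "user", "content": "Ada Lovelace was an early computing pioneer. What is she best known for?"},
]

resp = requests.post(
    "https://api.capeprivacy.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.getenv('CAPE_API_KEY')}"},
    json={"model": "gpt-4", "messages": messages, "format": "redacted"},
)
print(resp.json())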
Re-identification of response
The Completions endpoint automatically re-identifies the response from the LLM before returning it to the user, providing a seamless experience when switching over to this API from OpenAI.
- python
- curl
import os
import requests

messages = [
    {"role": "system", "content": "You are a helpful assistant. You are to be given a redacted document and you will answer questions about it. Use the redacted placeholders in your answer, do not say that you do not know"},
    {"role": "user", "content": "Jack had a little frog. Who had a little frog?"},
]

# "format": "redacted" enables automatic de-identification and re-identification.
req = {
    "model": "gpt-4",
    "messages": messages,
    "format": "redacted",
}

resp = requests.post(
    "https://api.capeprivacy.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.getenv('CAPE_API_KEY')}"},
    json=req,
)
print(resp.json())
print(resp.json())
curl -X "POST" "https://api.capeprivacy.com/v1/chat/completions" \
-H "accept: application/json" \
-H "Authorization: Bearer $CAPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"messages": [{"role": "system", "content": "You are a helpful assistant. You are to be given a redacted document and you will answer questions about it. Use the redacted placeholders in your answer, do not say that you do not know"}, {"role": "user", "content": "Jack had a little frog. Who had a little frog?"}],
"format": "redacted"
}'
You should see an output similar to
{
  "id": "capeapi-e3bab5b2-0371-4bc2-a641-19d237f4abcd",
  "object": "chat.completion",
  "created": 1690825888.0551286,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Jack had a little frog",
        "finish_reason": "stop"
      }
    }
  ]
}
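Assuming the response has the shape shown above, the re-identified completion text can be pulled out of the first choice:

# Extract the (re-identified) completion text from the first choice.
answer = resp.json()["choices"][0]["message"]["content"]
print(answer)  # e.g. "Jack had a little frog"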