We have implemented a set of endpoints that are compatible with the OpenAI API, so you can use amberSearch as a drop-in replacement in applications built for OpenAI’s API without any modifications. Once integrated, you can also use our custom tools, amberSearch and webSearch, to enhance your application’s capabilities. The base URL for the API is:
https://customerDomain.ambersearch.com/api/beta/amberai
Let’s start with the endpoints we support and the authentication method, and then look at how to use the chat completions endpoint with our tools and agents.

Supported Endpoints

  • GET /models
  • GET /models/{model}
  • POST /chat/completions
Endpoints that are not listed here are not supported at the moment. The remaining model and chat completion endpoints also exist, but they are not functional and will return an error if you try to use them.

Authentication

Authorization works the same way as for our other endpoints: provide an API key in the Authorization header of your request, prefixed with Bearer.
curl "https://customerDomain.ambersearch.com/api/beta/amberai/models" \
  -H "Authorization: Bearer ambrs-exampletoken"

Models

The model endpoints return the model identifiers that are needed to create chat completions. Each model has a unique identifier, and you can retrieve the list of available models or details about a specific model using the following endpoints:
  • GET /models to list all available models.
  • GET /models/{model} to get details about a specific model.
Each model also has an amberai variant, which you select by appending -amberai to the model identifier. For example, to use the gpt-4o model with amberSearch, you would use gpt-4o-amberai. The amberai version of the model is optimized with our custom instructions, providing a better experience for your applications if you just want to use it without any modifications.
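For example, to fetch the details of the amberai variant of gpt-4o, reusing the example token from the Authentication section:
curl "https://customerDomain.ambersearch.com/api/beta/amberai/models/gpt-4o-amberai" \
  -H "Authorization: Bearer ambrs-exampletoken"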

Chat Completions

The chat completions endpoint generates responses based on a given set of messages. When requesting a chat completion, you specify the model you want to use, the messages that form the conversation, and various parameters to control the generation process. Let’s take a look at the endpoint:

Create Chat Completion

The endpoint for creating chat completions is:
POST /chat/completions
We only support the following parameters in the request body:
  • messages: list
  • model: str
  • stream: bool
  • temperature: float
  • metadata: dict
  • tools: list
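Putting these parameters together, a minimal non-streaming request could look like the following sketch; the message content and temperature are purely illustrative:
curl "https://customerDomain.ambersearch.com/api/beta/amberai/chat/completions" \
  -H "Authorization: Bearer ambrs-exampletoken" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-amberai",
    "messages": [
        {"role": "user", "content": "Summarize our latest onboarding guide."}
    ],
    "temperature": 0.2,
    "stream": false
  }'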

Messages

We follow the same structure as OpenAI for the messages parameter, which is a list of message objects. Each message object should have a role and content. The roles can be one of the following:
  • system
  • user
  • assistant
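For illustration, a short conversation that uses all three roles could look like this:
{
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does the -amberai suffix do?"},
        {"role": "assistant", "content": "It selects the amberai-optimized variant of the model."},
        {"role": "user", "content": "And which tools can I use?"}
    ]
}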

Tools

We currently support two tools: amberSearch and webSearch. The amberSearch tool is enabled by default unless you pass an empty array to the tools parameter. To specify which tool to use, simply pass it in the request body as follows:
{
    "tools": [
        {
            "function": {
                "name": "webSearch"
            }
        }
    ]
}
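Conversely, to disable the default amberSearch tool, pass an empty array:
{
    "tools": []
}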

Agents

To create chat completions with an agent, you need to specify the agent ID in the metadata of the request body as follows:
{
    "metadata": {
        "agent_id": "9676b2b0-9fdd-40b1-8a0a-0debe64b29ac"
    }
}
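Combined with the parameters above, a chat completion routed through an agent could look like the following sketch; the agent ID is the example from above and the message content is illustrative:
curl "https://customerDomain.ambersearch.com/api/beta/amberai/chat/completions" \
  -H "Authorization: Bearer ambrs-exampletoken" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-amberai",
    "messages": [
        {"role": "user", "content": "Which documents changed last week?"}
    ],
    "metadata": {
        "agent_id": "9676b2b0-9fdd-40b1-8a0a-0debe64b29ac"
    }
  }'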

Language

To specify which language your messages are in, you can use the language parameter in the metadata of the request body as follows:
{
    "metadata": {
        "language": "en"
    }
}
The default value for the language parameter is de (German); if you don’t specify it, the system assumes the messages are in German. To use English, explicitly set the language parameter to en.
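Since metadata is a single object, the language field should combine with the other metadata fields, for example together with an agent ID (assuming your deployment supports both at once):
{
    "metadata": {
        "agent_id": "9676b2b0-9fdd-40b1-8a0a-0debe64b29ac",
        "language": "en"
    }
}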