Create connector

Currently, the most straightforward way to create a connector is to duplicate an existing connector module and modify its code. This is not yet possible through the Web UI or the CLI.

All connectors inherit from the superclass Connector. We initialise this superclass with variables that are common across connectors (e.g. token, max_concurrency). These variables come from another class called ConnectorEndpoint, which is populated from a connector endpoint JSON configuration file.
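
For reference, a connector endpoint configuration is a small JSON file. The sketch below mirrors the endpoints shipped in moonshot-data at the time of writing; the exact set of fields may differ across versions, and all values here are placeholders:

{
    "name": "New Custom Endpoint",
    "connector_type": "new-custom-connector",
    "uri": "",
    "token": "ADD_YOUR_API_TOKEN",
    "max_calls_per_second": 1,
    "max_concurrency": 1,
    "model": "claude-2",
    "params": {}
}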

1. Create Your Connector Python File

Create a Python file and put it in your moonshot-data/connectors directory (e.g. new-custom-connector.py).

2. Initialise Connector Class

We will use a modified version of the code from one of our connectors, anthropic-connector, as an example.

Copy and paste this into your file (the required imports are shown in the complete snippet in step 5):

class NewCustomConnector(Connector):
    def __init__(self, ep_arguments: ConnectorEndpointArguments):
        # Initialise the super class
        super().__init__(ep_arguments)

        # TODO 1 (Optional): Instantiation when you initialise your connector. If there is nothing
        # to instantiate, remove the code below and keep the rest.

        # The code below is customised ONLY for the Anthropic connector: it retrieves the API key,
        # instantiates the Anthropic client and assigns it to self._client. You can replace it with
        # your own code and assign the result to anything (e.g. self.my_client = "hello world")
        # which you can call later.
        api_key = self.token or os.getenv("ANTHROPIC_API_KEY") or ""
        self._client = anthropic.AsyncAnthropic(api_key=api_key)
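
If your endpoint is a plain REST service with no SDK, TODO 1 might simply stash whatever get_response will need later. A minimal sketch, with a hypothetical placeholder URL (in practice you would take it from your endpoint configuration):

        # Hypothetical TODO 1 for a plain REST endpoint: keep the pieces that
        # get_response will need when it builds requests later.
        self._base_url = "https://example.com/v1/complete"
        self._headers = {"Authorization": f"Bearer {self.token}"}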

3. Customise The Way Prompts Are Sent to The Endpoint/LLM

The get_response method is an abstract method that must be implemented. Copy and paste this method into your file below the __init__ method.

    @Connector.rate_limited
    @perform_retry
    async def get_response(self, prompt: str) -> ConnectorResponse:
        """
        Asynchronously sends a prompt to the endpoint/LLM and returns the generated response.

        This method constructs a request with the given prompt, optionally prepended and appended
        with predefined strings, and sends it to the endpoint/LLM. If a system prompt is set, it is
        included in the request. The method then awaits the response from the API, processes it, and
        returns the resulting message content wrapped in a ConnectorResponse object.

        Args:
            prompt (str): The input prompt to send to the endpoint/LLM.

        Returns:
            ConnectorResponse: An object containing the text response generated by the endpoint/LLM.
        """
        # TODO 2 (Optional): Modify the prompt (in this case, we prepend and append the pre and post prompts)
        connector_prompt = f"{self.pre_prompt}{prompt}{self.post_prompt}"

        # TODO 3 (Optional): Craft the parameters in the way your endpoint/LLM expects them. In this
        # case, we also add in all the optional parameters from the connector endpoint.
        new_params = {
            **self.optional_params,
            "model": self.model,
            "prompt": f"{HUMAN_PROMPT}{connector_prompt}{AI_PROMPT}",
        }

        # TODO 4: Send your prompt (or params, if you followed TODO 3) to the endpoint/LLM
        response = await self._client.completions.create(**new_params)

        # TODO 5: Return the ConnectorResponse. Note that self._process_response is entirely optional
        # (see step 4 below). If you don't need it, you can wrap the raw response instead.
        return ConnectorResponse(response=await self._process_response(response))
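
This example targets Anthropic's legacy Completions API. If your endpoint speaks a chat-style API instead, TODOs 3 to 5 might look more like the sketch below, written here against Anthropic's Messages API (the max_tokens value is an arbitrary placeholder):

        # Hypothetical chat-style variant of TODOs 3 to 5 (Anthropic Messages API)
        new_params = {
            **self.optional_params,
            "model": self.model,
            "max_tokens": 1024,  # placeholder; take this from optional_params if configured
            "messages": [{"role": "user", "content": connector_prompt}],
        }
        response = await self._client.messages.create(**new_params)
        # Messages API responses carry a list of content blocks
        return ConnectorResponse(response=response.content[0].text)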

4. (Optional) Additional Method to Process Response

Sometimes, the response from the endpoint/LLM may not be in the format you want. To keep things modular, we have created an additional method in our connectors to help process the responses. You can skip this if there is no need to process the response, or you can process it directly in the get_response method.

    async def _process_response(self, response: Completion) -> str:
        """
        Process a Completion response and extract the relevant information as a string.

        This function takes the response object returned by the endpoint/LLM and processes it to
        extract the relevant information as a string, such as the generated text.

        Args:
            response (Completion): The response object containing the response data.

        Returns:
            str: A string representing the relevant information extracted from the response.
        """
        # TODO 6 (Optional): Replace with your own code. Here we drop the first character, since
        # Anthropic completions typically begin with a leading space.
        return response.completion[1:]
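
As another illustration, if your endpoint returned a plain JSON payload rather than an SDK object, the processing step might look like this sketch (the payload shape and field names are hypothetical):

    async def _process_response(self, response: dict) -> str:
        # Hypothetical payload shape: {"choices": [{"text": "..."}]}
        return response["choices"][0]["text"].strip()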

5. Putting Everything Together

Your connector should be ready once you have added your code! You can copy and paste the snippet below, which includes the imports the earlier snippets rely on, into your Python file and modify it.

import os

import anthropic
from anthropic import AI_PROMPT, HUMAN_PROMPT
from anthropic.types import Completion

# Import paths may vary across Moonshot versions; check an existing connector
# in moonshot-data/connectors if these have moved.
from moonshot.src.connectors.connector import Connector, perform_retry
from moonshot.src.connectors.connector_response import ConnectorResponse
from moonshot.src.connectors_endpoints.connector_endpoint_arguments import (
    ConnectorEndpointArguments,
)


class NewCustomConnector(Connector):
    def __init__(self, ep_arguments: ConnectorEndpointArguments):
        # Initialise the super class
        super().__init__(ep_arguments)

        # TODO 1 (Optional): Instantiation when you initialise your connector. If there is nothing
        # to instantiate, remove the code below and keep the rest.

        # The code below is customised ONLY for the Anthropic connector: it retrieves the API key,
        # instantiates the Anthropic client and assigns it to self._client. You can replace it with
        # your own code and assign the result to anything (e.g. self.my_client = "hello world")
        # which you can call later.
        api_key = self.token or os.getenv("ANTHROPIC_API_KEY") or ""
        self._client = anthropic.AsyncAnthropic(api_key=api_key)

    @Connector.rate_limited
    @perform_retry
    async def get_response(self, prompt: str) -> ConnectorResponse:
        """
        Asynchronously sends a prompt to the endpoint/LLM and returns the generated response.

        This method constructs a request with the given prompt, optionally prepended and appended
        with predefined strings, and sends it to the endpoint/LLM. If a system prompt is set, it is
        included in the request. The method then awaits the response from the API, processes it, and
        returns the resulting message content wrapped in a ConnectorResponse object.

        Args:
            prompt (str): The input prompt to send to the endpoint/LLM.

        Returns:
            ConnectorResponse: An object containing the text response generated by the endpoint/LLM.
        """
        # TODO 2 (Optional): Modify the prompt (in this case, we prepend and append the pre and post prompts)
        connector_prompt = f"{self.pre_prompt}{prompt}{self.post_prompt}"

        # TODO 3 (Optional): Craft the parameters in the way your endpoint/LLM expects them. In this
        # case, we also add in all the optional parameters from the connector endpoint.
        new_params = {
            **self.optional_params,
            "model": self.model,
            "prompt": f"{HUMAN_PROMPT}{connector_prompt}{AI_PROMPT}",
        }

        # TODO 4: Send your prompt (or params, if you followed TODO 3) to the endpoint/LLM
        response = await self._client.completions.create(**new_params)

        # TODO 5: Return the ConnectorResponse. Note that self._process_response is entirely optional
        # (see step 4 above). If you don't need it, you can wrap the raw response instead.
        return ConnectorResponse(response=await self._process_response(response))

    async def _process_response(self, response: Completion) -> str:
        """
        Process a Completion response and extract the relevant information as a string.

        This function takes the response object returned by the endpoint/LLM and processes it to
        extract the relevant information as a string, such as the generated text.

        Args:
            response (Completion): The response object containing the response data.

        Returns:
            str: A string representing the relevant information extracted from the response.
        """
        # TODO 6 (Optional): Replace with your own code. Here we drop the first character, since
        # Anthropic completions typically begin with a leading space.
        return response.completion[1:]

List Your Newly Created Connector

The name of your connector will be your file name (e.g. your connector will be named new-custom-connector if your connector file is new-custom-connector.py).

If you are using the CLI, you should be able to see your connector when you list the connector types using the following command:

moonshot > list_connector_types

What's Next

Once you are able to see your newly created connector, you can proceed to create a connector endpoint (i.e. its configuration file).