interface Commands {
    generateImage(model: string, prompt: string, resolution: readonly [number, number], subdir?: string): Chainable<string, any>;
    promptLlama(model: string, system: string, prompt: string, training: readonly (readonly [string, string])[], suffix?: string, folder?: string, max_tokens?: number): Chainable<string, any>;
}

Type Parameters

  • A extends any = any

Methods

  • generateImage — Interfaces with a ChatGPT-compatible API to generate images. Results are cached, so the service does not need to be called every time.

    Parameters

    • model: string

      One of the available models

    • prompt: string

      The prompt to be sent

    • resolution: readonly [number, number]
    • Optional subdir: string

      A subdirectory to store the image in. Useful for organizing multiple images and for replacing them manually.

    Returns Chainable<string, any>

  • promptLlama — Builds a prompt ready for Llama instruction calls. Take the argument descriptions with a grain of salt: they are based on personal experience and may not be optimal.

    Parameters

    • model: string

      The model to use

    • system: string

      Sets up the system message. Describe here, at a global level, what the model can do and how it should behave.

    • prompt: string

      Your prompt/request.

    • training: readonly (readonly [string, string])[]

      Example request/response pairs used to guide the model, few-shot style. For instance, they can steer it toward a specific response template format.

    • Optional suffix: string

      Useful for specifying how the response should be formatted and for correcting small recurring mistakes the model makes. If provided, it is simply concatenated to the end of the prompt and the training prompts.

    • Optional folder: string
    • Optional max_tokens: number

    Returns Chainable<string, any>
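One plausible way the arguments above could be assembled into a final instruction prompt. The `[INST]`/`<<SYS>>` template is an assumption based on common Llama instruction formats, not taken from this codebase; note how the suffix is concatenated to both the user prompt and each training prompt, as described above.

```typescript
// Assumed assembly of system message, few-shot training pairs,
// user prompt, and optional suffix into a single Llama-style prompt.
function buildLlamaPrompt(
  system: string,
  prompt: string,
  training: readonly (readonly [string, string])[],
  suffix?: string,
): string {
  const parts: string[] = [];
  // System message framing the model's global behavior.
  parts.push(`<<SYS>>\n${system}\n<</SYS>>`);
  // Few-shot request/response pairs; the suffix is appended to each
  // request, matching the "concatenated to the training prompts" note.
  for (const [request, response] of training) {
    parts.push(`[INST] ${request}${suffix ?? ""} [/INST] ${response}`);
  }
  // The actual user request, with the suffix concatenated at the end.
  parts.push(`[INST] ${prompt}${suffix ?? ""} [/INST]`);
  return parts.join("\n");
}
```

For example, `buildLlamaPrompt("Be terse.", "List three fruits.", [["Say hi", "hi"]], " Reply in JSON.")` yields a prompt whose training example and final request both end with the suffix.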

Generated using TypeDoc