BaseClientConfig
================

**class** ``BaseClientConfig`` **(** ``BasePolyConfig`` **)**

LLM client configuration (OpenAI-compatible API).

**Polymorphic Type:** All ``BaseClientConfig`` types:

- ``openai_chat_completions``: :doc:`OpenAIChatCompletionsClientConfig`
- ``openai_completions``: :doc:`OpenAICompletionsClientConfig`
- ``openai_router``: :doc:`OpenAIRouterClientConfig`

**Fields:**

``api_base`` : *Optional* [ *str* ] = ``None``
    API base URL. Defaults to the ``OPENAI_API_BASE`` environment variable.

``api_key`` : *Optional* [ *str* ] = ``None``
    API key. Defaults to the ``OPENAI_API_KEY`` environment variable.

``model`` : *str* = ``"meta-llama/Meta-Llama-3-8B-Instruct"``
    The model to use for this load test.

``address_append_value`` : *str* = ``"chat/completions"``
    The path appended to the API address when building request URLs.

``request_timeout`` : *int* = ``300``
    The timeout for each request to the LLM API, in seconds.

``additional_sampling_params`` : *str* = ``"{}"``
    Additional sampling parameters to send with each request to the LLM API, as a JSON string.
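As a sketch of how these fields might combine at request time, the snippet below joins ``api_base`` with ``address_append_value`` to form the endpoint URL and merges the ``additional_sampling_params`` JSON string into the request body. The ``build_request`` helper and the example values are illustrative assumptions, not part of the documented API:

```python
import json


def build_request(api_base, address_append_value, model,
                  additional_sampling_params, messages):
    """Hypothetical helper: compose the request URL and JSON body
    from BaseClientConfig-style fields."""
    # address_append_value is appended to the base URL
    url = api_base.rstrip("/") + "/" + address_append_value
    body = {"model": model, "messages": messages}
    # additional_sampling_params is a JSON string merged into each request
    body.update(json.loads(additional_sampling_params))
    return url, body


url, body = build_request(
    api_base="http://localhost:8000/v1",
    address_append_value="chat/completions",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    additional_sampling_params='{"temperature": 0.0}',
    messages=[{"role": "user", "content": "hi"}],
)
print(url)   # http://localhost:8000/v1/chat/completions
```

With the default ``additional_sampling_params`` of ``"{}"``, the merge is a no-op and only ``model`` and ``messages`` are sent.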