BaseServerConfig
================

**class** ``BaseServerConfig`` **(** ``BasePolyConfig`` **)**

Base configuration for server launch and management.

**Polymorphic Type:** All ``BaseServerConfig`` types:

- ``vllm``: :doc:`VllmServerConfig`
- ``vajra``: :doc:`VajraServerConfig`
- ``sglang``: :doc:`SglangServerConfig`

**Fields:**

``env_path`` : *Optional* [ *str* ] = ``None``
    Path to a Python environment directory (virtualenv/conda).

``model`` : *str* = ``"meta-llama/Meta-Llama-3-8B-Instruct"``
    Model name or path.

``host`` : *str* = ``"localhost"``
    Host address for the server.

``port`` : *int* = ``8000``
    Port number for the server.

``api_key`` : *str* = ``"token-abc123"``
    API key for server authentication.

``gpu_ids`` : *Optional* [ *list* [ *int* ] ] = ``None``
    List of GPU IDs to use (``None`` means auto-assign).

``startup_timeout`` : *int* = ``300``
    Timeout in seconds for server startup.

``health_check_interval`` : *float* = ``2.0``
    Interval in seconds between health checks.

``require_contiguous_gpus`` : *bool* = ``True``
    Require contiguous GPU allocation (e.g., GPUs 0,1,2 rather than 0,2,5).

``tensor_parallel_size`` : *int* = ``1``
    Number of GPUs for tensor parallelism.

``dtype`` : *str* = ``"auto"``
    Data type for model weights (``auto``, ``float16``, ``bfloat16``, etc.).

``max_model_len`` : *Optional* [ *int* ] = ``None``
    Maximum model context length.

``additional_args`` : *Optional* [ *str* ] = ``"{}"``
    Additional engine-specific arguments as a JSON string, dict, or ``None``.
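As an illustration of how these fields fit together, the sketch below mirrors the documented fields in a plain dataclass and shows one way to normalize ``additional_args`` (which may arrive as a JSON string, a dict, or ``None``) into a dict. The class name ``ServerConfigSketch`` and the helper ``parsed_additional_args`` are hypothetical; the real ``BaseServerConfig`` inherits from ``BasePolyConfig`` in the library and may handle this differently.

```python
import json
from dataclasses import dataclass
from typing import Optional, Union

# Hypothetical stand-in mirroring the documented fields; not the
# library's actual BaseServerConfig class.
@dataclass
class ServerConfigSketch:
    env_path: Optional[str] = None
    model: str = "meta-llama/Meta-Llama-3-8B-Instruct"
    host: str = "localhost"
    port: int = 8000
    api_key: str = "token-abc123"
    gpu_ids: Optional[list] = None
    startup_timeout: int = 300
    health_check_interval: float = 2.0
    require_contiguous_gpus: bool = True
    tensor_parallel_size: int = 1
    dtype: str = "auto"
    max_model_len: Optional[int] = None
    additional_args: Optional[Union[str, dict]] = "{}"

    def parsed_additional_args(self) -> dict:
        """Normalize additional_args (JSON string, dict, or None) to a dict."""
        if self.additional_args is None:
            return {}
        if isinstance(self.additional_args, dict):
            return self.additional_args
        return json.loads(self.additional_args)

# Example: override the port and pass an engine-specific flag as JSON.
cfg = ServerConfigSketch(port=8001,
                         additional_args='{"gpu_memory_utilization": 0.9}')
print(cfg.parsed_additional_args())  # {'gpu_memory_utilization': 0.9}
```

With the default ``additional_args`` of ``"{}"`` the helper returns an empty dict, so downstream code can always iterate over the result without a ``None`` check.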