🚀 Xinference v1.15.0

Release Notes

Official Website:

xinference.io

✅ Key Highlights

🧠 Continuous Expansion of Multi-Model Capabilities

  • DeepSeek-V3.2 model support with enhanced inference and tool calling capabilities
  • Z-Image-Turbo image model launched for a faster image-generation experience
  • Official support for PaddleOCR-VL, covering OCR + visual understanding scenarios

⚙️ Enhanced Multi-Replica Capabilities

Support for running multiple model replicas on a single GPU, significantly improving resource utilization

🌐 Community Edition Updates

📦 Installation Methods

pip install: pip install 'xinference==1.15.0'

Docker: pull the latest image, or upgrade via pip inside the container

🆕 New Model Support

  • Z-Image-Turbo
  • DeepSeek-V3.2
  • PaddleOCR-VL

✨ New Features

  • llama.cpp now supports JSON Schema structured output
  • Multiple replicas of a model can now run on a single GPU
  • Models can now be started with --device cpu
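Structured output is exposed through Xinference's OpenAI-compatible chat API. A minimal sketch of building such a request body follows; the endpoint, model UID, and schema below are illustrative assumptions, not values from this release.

```python
import json

# Hypothetical JSON Schema constraining the model's reply.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "temperature_c": {"type": "number"},
    },
    "required": ["city", "temperature_c"],
}

# Request body for the OpenAI-compatible /v1/chat/completions endpoint.
# "my-llama-cpp-model" is a placeholder model UID for illustration.
request_body = {
    "model": "my-llama-cpp-model",
    "messages": [
        {"role": "user", "content": "Report the weather in Paris as JSON."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "weather", "schema": schema},
    },
}

# The body serializes cleanly for an HTTP POST.
payload = json.dumps(request_body)
```

With a schema-constrained `response_format`, the llama.cpp engine restricts generation so the reply parses against the schema, removing the need for retry-and-reparse loops in client code.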

🔧 Feature Enhancements

  • More detailed error messages when an engine is unavailable
  • Improved GLM-4.5 tool-calling support
  • Enhanced vLLM structured-output parameter capabilities
  • Ongoing updates to model metadata (JSON)

🐛 Bug Fixes

  • Fixed missing cached-model management page
  • Fixed incomplete removal of soft links
  • Fixed same-name package conflicts in virtual environments
  • Fixed multimodal video parameters not taking effect
  • Fixed failed registration of custom embedding models
  • Fixed UI copy function and dropdown width issues
  • Fixed spelling errors in Dockerfile.cu128

📚 Documentation Updates

  • Added model documentation
  • Added supplemental v1.14.0 release notes

🏢 Enterprise Edition Updates

Model Hub Integration

Support for updating enterprise model lists from the Xinference Model Hub, enabling synchronization of the latest models without waiting for a new version release

Platform Stability

Fixed Ascend platform-related usage issues, improving stability and availability in enterprise environments