LLM Explorer v2.0: Release Announcement

It's been some time since we announced LLM Explorer. We've taken user comments and feedback on the project into account, fixed the reported issues, and added new features. Now we're excited to present version 2.0 of the service: https://llm.extractum.io.
Here are some of the notable improvements:

  • Multi-level filtering for LLMs: Combine the search box with quick filters from the dropdown menu and column header dropdown filters for enhanced search capabilities.
  • A "Better Alternatives" side panel that displays models with similar general parameters but with a higher HF rank, larger context size, or lower VRAM requirements. This allows you to quickly find a better model than the one you're currently using.
  • Shortcuts to the most popular model groups (e.g., quantized, fine-tuned, or models that fit within 8 GB of VRAM).
  • Database updates multiple times throughout the day.
  • Improved accuracy in the table data.
  • Expanded listings sorted by maintainer, architecture, and more.

We hope you find these enhancements useful. As always, we welcome your feedback, feature requests, and bug reports with open arms. Cheers!