AI model benchmarks C-Eval A comprehensive evaluation suite designed to assess the knowledge and capabilities of large language models (LLMs) specifically in the Chinese language.
AI model benchmarks SuperCLUE A professional evaluation framework providing standardized benchmarks to measure the intelligence and utility of Chinese-language AI models.
AI model benchmarks Open LLM Leaderboard A comprehensive, community-driven benchmark platform by Hugging Face that tracks and compares the performance of open-source large language models.
AI model benchmarks CMMLU A comprehensive evaluation benchmark designed to measure the general knowledge and linguistic capabilities of large language models in Chinese.
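The Chinese-language suites above (C-Eval, CMMLU) are MMLU-style multiple-choice exams scored by accuracy. A minimal sketch of how such a benchmark is typically scored; the record fields and the stub model are hypothetical illustrations, not any suite's official harness:

```python
# Hedged sketch of MMLU-style multiple-choice scoring (as used by
# benchmarks like C-Eval and CMMLU). Field names ("question", "choices",
# "answer") and the stub model are assumptions for illustration.

def format_prompt(record):
    """Render one multiple-choice question as a prompt string."""
    lines = [record["question"]]
    for label, choice in zip("ABCD", record["choices"]):
        lines.append(f"{label}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

def accuracy(records, predict):
    """Fraction of records where predict(prompt) matches the gold letter."""
    correct = sum(1 for r in records if predict(format_prompt(r)) == r["answer"])
    return correct / len(records)

# Toy run with a stub "model" that always answers A.
sample = [
    {"question": "1 + 1 = ?", "choices": ["2", "3", "4", "5"], "answer": "A"},
    {"question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": "B"},
]
print(accuracy(sample, lambda prompt: "A"))  # 0.5
```

Real harnesses differ in how they extract the model's chosen letter (log-probability comparison vs. parsing generated text), but the accuracy aggregation is the same.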