Qwen3-32B is a multilingual LLM released by Alibaba Cloud in May 2025, built on a dense architecture with a total of 32.8 billion parameters [00, 00]. The model can switch between a "thinking mode" and a "non-thinking mode" within a single architecture and demonstrates strong performance in areas such as text comprehension, coding, instruction following, reasoning, and mathematics [00, 00]. It was pretrained on a dataset of approximately 36 trillion tokens, including code, STEM, books, multilingual texts, synthetic data, and text extracted from PDF-like documents using the Qwen2.5-VL model [00]. Qwen3-32B uses two core techniques, Dual Chunk Attention (DCA) and YaRN (RoPE scaling), to handle long contexts, extending the model's native context window of approximately 32K tokens to 128K tokens [00, 00]. The model is available under the Apache 2.0 license, as are all Qwen3 models [00]. In this study, the Qwen3-32B model was used to evaluate the performance of an open-source LLM on the tasks of extracting summaries and tags from Turkish news texts.
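In the Hugging Face transformers integration, the switch between thinking and non-thinking mode is exposed at the chat-template level via the `enable_thinking` flag documented in the Qwen3 model card. A minimal sketch of toggling it for the summarization/tagging task is shown below; the model ID, prompt, and generation parameters are illustrative assumptions rather than the exact setup used in this study:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face checkpoint; adjust to the model actually used.
MODEL_NAME = "Qwen/Qwen3-32B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype="auto", device_map="auto"
)

# Illustrative prompt for the summary/tag extraction task studied here.
messages = [
    {
        "role": "user",
        "content": "Summarize the following Turkish news article and extract tags: ...",
    }
]

# enable_thinking toggles between "thinking" and "non-thinking" mode
# at the chat-template level, per the Qwen3 model card.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # True would emit a step-by-step reasoning trace first
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(
    tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    )
)
```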
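The 128K figure follows from the YaRN scaling factor applied to the native window (32,768 positions scaled by 4.0 gives 131,072). Per the Qwen3 model card, YaRN can be enabled by overriding the RoPE-scaling settings; a minimal sketch under that assumption, with the same illustrative model ID:

```python
from transformers import AutoConfig, AutoModelForCausalLM

MODEL_NAME = "Qwen/Qwen3-32B"  # assumed checkpoint, as above

# Override the RoPE-scaling configuration to enable YaRN, following the
# recipe given in the Qwen3 model card.
config = AutoConfig.from_pretrained(MODEL_NAME)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,  # 32,768 native positions x 4.0 = 131,072 (~128K)
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, config=config, torch_dtype="auto", device_map="auto"
)
```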