
Commit 6e01f34

feat: add detailed help tooltips for chat features (#621)

* feat: add detailed help tooltips for chat features
* docs: improve docs

Signed-off-by: Bob Du <[email protected]>

1 parent 72fcd0c · commit 6e01f34

File tree: 8 files changed, +204 −17 lines changed


README.en.md — 75 additions, 0 deletions

@@ -36,6 +36,8 @@ Some unique features have been added:
 [] VLLM API model support & Optional disable deep thinking mode
+
+[] Context Window Control

 > [!CAUTION]
 > This project is only published on GitHub, based on the MIT license, free and for open-source learning use. There are no account sales, paid services, discussion groups, or similar offerings of any kind. Beware of scams.
@@ -324,6 +326,71 @@ PS: You can also run `pnpm start` directly on the server without packaging.
 pnpm build
 ```
+
+## Context Window Control
+
+> [!TIP]
+> Context Window Control lets users flexibly manage the context carried into AI conversations, optimizing model performance and conversation quality.
+
+### Features
+
+- **Context Management**: Control how much chat history the model can reference
+- **Per-conversation Control**: Each conversation can independently enable or disable the context window
+- **Real-time Switching**: Context mode can be switched at any time during a conversation
+- **Memory Management**: Flexibly control the scope and continuity of the AI's memory
+- **Configurable Quantity**: Administrators can set the maximum number of context messages
+
+### How It Works
+
+The context window determines how much chat history from the current session the model can reference during generation:
+
+- **A reasonable context window size** helps the model generate coherent, relevant text
+- **It avoids confused or irrelevant output** caused by referencing too much context
+- **Turning the context window off** makes the session lose its memory, so each question is handled completely independently
+### Usage
+
+#### 1. Enable/Disable the Context Window
+
+1. **Open a conversation**: This feature is available in any conversation session
+2. **Find the toggle**: Locate the "Context Window" toggle button in the conversation interface
+3. **Switch modes**:
+   - **Enabled**: The model references previous chat history, keeping the conversation coherent
+   - **Disabled**: The model ignores history and treats each question independently
+
+#### 2. Usage Scenarios
+
+**Enable the context window when:**
+- You need continuous dialogue with context carried across turns
+- You are discussing a complex topic in depth
+- You are doing multi-turn Q&A or step-by-step problem solving
+- The AI needs to remember previously mentioned information
+
+**Disable the context window when:**
+- Asking independent, simple questions
+- You want to avoid historical information interfering with new questions
+- Handling multiple unrelated topics
+- You need a "fresh start"
+
+#### 3. Administrator Configuration
+
+Administrators can configure in system settings:
+- **Maximum Context Count**: The number of context messages included in the conversation
+- **Default State**: The default context window state for new conversations
+
+### Technical Implementation
+
+- **Context Truncation**: Automatically keep only the specified number of historical messages
+- **State Persistence**: Each conversation saves its context window toggle state independently
+- **Real-time Effect**: Takes effect immediately for the next message after switching
+- **Memory Optimization**: Keeps the context length under control to avoid exceeding model limits
+### Notes
+
+- **Conversation Coherence**: Disabling the context window affects conversation continuity
+- **Token Consumption**: More context increases token usage
+- **Response Quality**: An appropriate amount of context helps improve answer quality
+- **Model Limitations**: Keep the context length limits of different models in mind
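The truncation and toggle behavior described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code; the names `ChatMessage`, `buildContextMessages`, `usingContext`, and `maxContextCount` are assumptions chosen to mirror the feature description.

```typescript
interface ChatMessage {
  role: 'user' | 'assistant'
  content: string
}

// Hypothetical helper: decide which history messages accompany a new prompt.
// usingContext mirrors the per-conversation toggle; maxContextCount mirrors
// the administrator setting described above.
function buildContextMessages(
  history: ChatMessage[],
  usingContext: boolean,
  maxContextCount: number,
): ChatMessage[] {
  if (!usingContext)
    return [] // context window off: each question is fully independent

  // context window on: keep only the most recent N messages
  return history.slice(-maxContextCount)
}
```

Because the slice happens per request, flipping the toggle takes effect on the very next message, matching the "Real-time Effect" bullet above.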
 ## VLLM API Deep Thinking Mode Control

 > [!TIP]
@@ -336,6 +403,14 @@ pnpm build
 - **Real-time Switching**: Deep thinking mode can be switched at any time during conversation
 - **Performance Optimization**: Disabling deep thinking can improve response speed and reduce computational costs

+### How It Works
+
+With deep thinking enabled, the model uses more computational resources and takes longer, simulating a more elaborate chain of thought for logical reasoning:
+
+- **Suited to complex tasks or demanding scenarios**, such as mathematical derivations and project planning
+- **Everyday simple queries do not need** deep thinking mode
+- **Disabling deep thinking** yields faster responses
+
 ### Prerequisites

 **The following conditions must be met to use this feature:**
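A per-chat deep-thinking toggle like this typically maps to a flag in the chat-completions request body. The sketch below is an assumption, not the project's actual implementation: it presumes a vLLM OpenAI-compatible endpoint whose chat template honors `chat_template_kwargs.enable_thinking` (as Qwen3-style templates do); the helper name `buildChatRequest` is illustrative.

```typescript
// Hypothetical request builder for a vLLM OpenAI-compatible server.
// Whether enable_thinking is honored depends on the model's chat template;
// that support is an assumption here, not something this commit guarantees.
function buildChatRequest(model: string, prompt: string, deepThinking: boolean) {
  return {
    model,
    messages: [{ role: 'user', content: prompt }],
    // Forwarded to the chat template; false skips the thinking phase,
    // trading reasoning depth for faster, cheaper responses.
    chat_template_kwargs: { enable_thinking: deepThinking },
  }
}
```

Flipping `deepThinking` per request is what makes "real-time switching" possible without restarting the session.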

README.md — 75 additions, 0 deletions

@@ -36,6 +36,8 @@
 [] VLLM API 模型支持 & 可选关闭深度思考模式
+
+[] 上下文窗口控制

 > [!CAUTION]
 > 声明:此项目只发布于 Github,基于 MIT 协议,免费且作为开源学习使用。并且不会有任何形式的卖号、付费服务、讨论群、讨论组等行为。谨防受骗。
@@ -460,6 +462,71 @@ Current time: {current_time}
 - 每个会话可以独立控制是否使用搜索功能

+## 上下文窗口控制
+
+> [!TIP]
+> 上下文窗口控制功能可以让用户灵活管理 AI 对话中的上下文信息,优化模型性能和对话效果。
+
+### 功能特性
+
+- **上下文管理**: 控制模型可以参考的聊天记录数量
+- **按对话控制**: 每个对话可以独立开启或关闭上下文窗口
+- **实时切换**: 在对话过程中可以随时切换上下文模式
+- **记忆管理**: 灵活控制 AI 的记忆范围和连续性
+- **可配置数量**: 管理员可设置上下文消息的最大数量
+
+### 工作原理
+
+上下文窗口决定了在生成过程中,模型可以参考的当前会话下聊天记录的量:
+
+- **合理的上下文窗口大小**有助于模型生成连贯且相关的文本
+- **避免因为参考过多的上下文**而导致混乱或不相关的输出
+- **关闭上下文窗口**会导致会话失去记忆,每次提问之间将完全独立
+
+### 使用方式
+
+#### 1. 启用/关闭上下文窗口
+
+1. **进入对话界面**: 在任何对话会话中都可以使用此功能
+2. **找到控制开关**: 在对话界面中找到"上下文窗口"开关按钮
+3. **切换模式**:
+   - **开启**: 模型会参考之前的聊天记录,保持对话连贯性
+   - **关闭**: 模型不会参考历史记录,每个问题独立处理
+
+#### 2. 使用场景
+
+**建议开启上下文窗口的情况:**
+- 需要连续对话和上下文关联
+- 复杂主题的深入讨论
+- 多轮问答和逐步解决问题
+- 需要 AI 记住之前提到的信息
+
+**建议关闭上下文窗口的情况:**
+- 独立的简单问题
+- 避免历史信息干扰新问题
+- 处理不相关的多个主题
+- 需要"重新开始"的场景
+
+#### 3. 管理员配置
+
+管理员可以在系统设置中配置:
+- **最大上下文数量**: 设置会话中包含的上下文消息数量
+- **默认状态**: 设置新对话的默认上下文窗口状态
+
+### 技术实现
+
+- **上下文截取**: 自动截取指定数量的历史消息
+- **状态持久化**: 每个对话独立保存上下文窗口开关状态
+- **实时生效**: 切换后立即对下一条消息生效
+- **内存优化**: 合理控制上下文长度,避免超出模型限制
+
+### 注意事项
+
+- **对话连贯性**: 关闭上下文窗口会影响对话的连续性
+- **Token 消耗**: 更多的上下文会增加 Token 使用量
+- **响应质量**: 适当的上下文有助于提高回答质量
+- **模型限制**: 需要考虑不同模型的上下文长度限制
 ## VLLM API 深度思考模式控制

 > [!TIP]
@@ -472,6 +539,14 @@ Current time: {current_time}
 - **实时切换**: 在对话过程中可以随时切换深度思考模式
 - **性能优化**: 关闭深度思考可以提高响应速度,降低计算成本

+### 工作原理
+
+开启深度思考后,模型会用更多的计算资源以及消耗更长时间,模拟更复杂的思维链路进行逻辑推理:
+
+- **适合复杂任务或高要求场景**,比如数学题推导、项目规划
+- **日常简单查询无需开启**深度思考模式
+- **关闭深度思考**可以获得更快的响应速度
+
 ### 使用前提

 **必须满足以下条件才能使用此功能:**

src/components/common/HoverButton/index.vue — 7 additions, 1 deletion

@@ -4,6 +4,7 @@ import Button from './Button.vue'
 interface Props {
   tooltip?: string
+  tooltipHelp?: string
   placement?: PopoverPlacement
 }

@@ -13,6 +14,7 @@ interface Emit {
 const props = withDefaults(defineProps<Props>(), {
   tooltip: '',
+  tooltipHelp: '',
   placement: 'bottom',
 })

@@ -33,7 +35,11 @@ function handleClick() {
       <slot />
     </Button>
   </template>
-  {{ tooltip }}
+  <span>{{ tooltip }}</span>
+  <div v-if="tooltipHelp" class="whitespace-pre-line text-xs">
+    <br>
+    {{ tooltipHelp }}
+  </div>
 </NTooltip>
 </div>
 <div v-else>
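A sketch of how the new `tooltipHelp` prop might be consumed. The `$t` keys come from the locale files changed in this commit; the handler name `toggleUsingContext` and the icon are illustrative assumptions, not code from this diff.

```html
<!-- Illustrative usage (hypothetical handler and icon): tooltip carries the
     short hint, tooltip-help the multi-line explanation. The \n separators in
     the locale string are preserved by the whitespace-pre-line class. -->
<HoverButton
  :tooltip="$t('chat.clickTurnOnContext')"
  :tooltip-help="$t('chat.contextHelp')"
  @click="toggleUsingContext"
>
  <SvgIcon icon="ri:chat-history-line" />
</HoverButton>
```

Keeping `tooltipHelp` optional (defaulting to `''`) means every existing `HoverButton` call site keeps working unchanged; the help block only renders when a help string is supplied.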

src/locales/en-US.ts — 4 additions, 1 deletion

@@ -61,10 +61,13 @@ export default {
   turnOffThink: 'Deep thinking has been disabled for this chat.',
   clickTurnOnContext: 'Click to enable sending messages will carry previous chat records.',
   clickTurnOffContext: 'Click to disable sending messages will carry previous chat records.',
+  contextHelp: 'The context window determines the amount of chat history from the current session that the model can reference during generation.\nA reasonable context window size helps the model generate coherent and relevant text.\nAvoid confusion or irrelevant output caused by referencing too much context.\nTurning off the context window will cause the session to lose memory, making each question completely independent.',
   clickTurnOnSearch: 'Click to enable web search for this chat.',
   clickTurnOffSearch: 'Click to disable web search for this chat.',
+  searchHelp: 'The model\'s knowledge is based on previous training data, and the learned knowledge has a cutoff date (e.g., DeepSeek-R1 training data cutoff is October 2023).\nFor breaking news, industry news and other time-sensitive questions, the answers are often "outdated".\nAfter enabling web search, search engine interfaces will be called to get real-time information for the model to reference before generating results, but this will also increase response latency.\nFor non-time-sensitive questions, enabling web search may actually reduce answer quality due to the incorporation of latest data.',
   clickTurnOnThink: 'Click to enable deep thinking for this chat.',
   clickTurnOffThink: 'Click to disable deep thinking for this chat.',
+  thinkHelp: 'After enabling deep thinking, the model will use more computational resources and take longer time to simulate more complex thinking chains for logical reasoning.\nSuitable for complex tasks or high-requirement scenarios, such as mathematical derivations and project planning.\nDaily simple queries do not need to be enabled.',
   showOnContext: 'Include context',
   showOffContext: 'Not include context',
   searchEnabled: 'Search enabled',

@@ -211,7 +214,7 @@ export default {
   info2FAStep3Tip1: 'Note: How to turn off two-step verification?',
   info2FAStep3Tip2: '1. After logging in, use the two-step verification on the Two-Step Verification page to disable it.',
   info2FAStep3Tip3: '2. Contact the administrator to disable two-step verification.',
-  maxContextCount: 'Max Context Count',
+  maxContextCount: 'Number of context messages included in the conversation',
   fastDelMsg: 'Fast Delete Message',
 },
 store: {
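The help strings added above pack several sentences into one value separated by `\n`; the tooltip's `whitespace-pre-line` class renders each separator as a line break. A tiny sketch making that splitting explicit (the helper name is illustrative, not part of this commit):

```typescript
// The locale values embed '\n' between sentences. In the component the CSS
// class whitespace-pre-line does the line-breaking; this hypothetical helper
// just shows how many display lines a given help string produces.
function helpLines(help: string): string[] {
  return help.split('\n').filter(line => line.length > 0)
}
```

This is why each `contextHelp`/`searchHelp`/`thinkHelp` value stays a single translatable string per locale instead of an array of lines.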

src/locales/ko-KR.ts — 4 additions, 1 deletion

@@ -61,10 +61,13 @@ export default {
   turnOffThink: '이 채팅에서 깊은 사고가 비활성화되었습니다.',
   clickTurnOnContext: '클릭하여 컨텍스트 포함 켜기',
   clickTurnOffContext: '클릭하여 컨텍스트 포함 끄기',
+  contextHelp: '컨텍스트 창은 생성 과정에서 모델이 참조할 수 있는 현재 세션의 채팅 기록의 양을 결정합니다.\n적절한 컨텍스트 창 크기는 모델이 일관되고 관련성 있는 텍스트를 생성하는 데 도움이 됩니다.\n너무 많은 컨텍스트를 참조하여 혼란스럽거나 관련 없는 출력이 나오는 것을 방지합니다.\n컨텍스트 창을 끄면 세션이 기억을 잃게 되어 각 질문이 완전히 독립적이 됩니다.',
   clickTurnOnSearch: '클릭하여 이 채팅의 웹 검색 활성화',
   clickTurnOffSearch: '클릭하여 이 채팅의 웹 검색 비활성화',
+  searchHelp: '모델의 지식은 이전 훈련 데이터를 기반으로 하며, 학습된 지식에는 마감일이 있습니다(예: DeepSeek-R1 훈련 데이터 마감일은 2023년 10월).\n돌발 사건, 업계 뉴스 등 시간에 민감한 질문의 경우 답변이 종종 "구식"입니다.\n웹 검색을 활성화한 후 결과를 생성하기 전에 검색 엔진 인터페이스를 호출하여 모델이 참조할 실시간 정보를 얻지만, 이 작업은 응답 지연도 증가시킵니다.\n또한 시간에 민감하지 않은 질문의 경우 웹 검색을 활성화하면 최신 데이터를 결합하여 답변 품질이 저하될 수 있습니다.',
   clickTurnOnThink: '클릭하여 이 채팅의 깊은 사고 활성화',
   clickTurnOffThink: '클릭하여 이 채팅의 깊은 사고 비활성화',
+  thinkHelp: '깊은 사고를 활성화한 후 모델은 더 많은 계산 리소스를 사용하고 더 오랜 시간을 소비하여 더 복잡한 사고 체인을 시뮬레이션하여 논리적 추론을 수행합니다.\n복잡한 작업이나 높은 요구 사항 시나리오에 적합합니다. 예: 수학 문제 유도, 프로젝트 계획.\n일상적인 간단한 조회는 활성화할 필요가 없습니다.',
   showOnContext: '컨텍스트 포함됨',
   showOffContext: '컨텍스트 미포함',
   searchEnabled: '검색 활성화됨',

@@ -197,7 +200,7 @@ export default {
   info2FAStep3Tip1: 'Note: How to turn off two-step verification?',
   info2FAStep3Tip2: '1. After logging in, use the two-step verification on the Two-Step Verification page to disable it.',
   info2FAStep3Tip3: '2. Contact the administrator to disable two-step verification.',
-  maxContextCount: '최대 컨텍스트 수량',
+  maxContextCount: '대화에 포함된 컨텍스트 메시지 수량',
   fastDelMsg: '빠르게 메시지 삭제',
 },
 store: {

src/locales/zh-CN.ts — 8 additions, 5 deletions

@@ -59,14 +59,17 @@ export default {
   turnOffSearch: '已关闭网络搜索功能',
   turnOnThink: '已开启深度思考功能',
   turnOffThink: '已关闭深度思考功能',
-  clickTurnOnContext: '点击开启包含上下文',
-  clickTurnOffContext: '点击停止包含上下文',
+  clickTurnOnContext: '点击开启上下文窗口',
+  clickTurnOffContext: '点击关闭上下文窗口',
+  contextHelp: '上下文窗口决定了在生成过程中,模型可以参考的当前会话下聊天记录的量\n合理的上下文窗口大小有助于模型生成连贯且相关的文本\n避免因为参考过多的上下文而导致混乱或不相关的输出\n关闭上下文窗口会导致会话失去记忆,每次提问之间将完全独立',
   clickTurnOnSearch: '点击开启网络搜索功能',
   clickTurnOffSearch: '点击关闭网络搜索功能',
+  searchHelp: '模型本身的知识是基于之前的训练数据,学习到的知识是有截止日期的(例如DeepSeek-R1训练数据截止时间为2023年十月份)\n对于突发事件、行业新闻等时效性很强的问题,回答很多时候是“过时”的\n开启联网搜索后,将在生成结果前调用搜索引擎接口获取实时信息给到模型参考,但此操作也会增加响应延迟\n另外对于非时效性问题,开启联网搜索反而可能由于结合了最新数据导致回答质量降低',
   clickTurnOnThink: '点击开启深度思考功能',
   clickTurnOffThink: '点击关闭深度思考功能',
-  showOnContext: '包含上下文',
-  showOffContext: '不含上下文',
+  thinkHelp: '开启深度思考后,模型会用更多的计算资源以及消耗更长时间,模拟更复杂的思维链路进行逻辑推理\n适合复杂任务或高要求场景,比如数学题推导、项目规划\n日常简单查询无需开启',
+  showOnContext: '上下文窗口已开启',
+  showOffContext: '上下文窗口已关闭',
   searchEnabled: '联网搜索已开启',
   searchDisabled: '联网搜索已关闭',
   thinkEnabled: '深度思考已开启',

@@ -211,7 +214,7 @@ export default {
   info2FAStep3Tip1: '注意:如何关闭两步验证?',
   info2FAStep3Tip2: '1. 登录后,在 两步验证 页面使用两步验证码关闭。',
   info2FAStep3Tip3: '2. 联系管理员来关闭两步验证。',
-  maxContextCount: '最大上下文数量',
+  maxContextCount: '会话中包含的上下文消息数量',
   fastDelMsg: '快速删除消息',
 },
 store: {

src/locales/zh-TW.ts — 4 additions, 1 deletion

@@ -61,10 +61,13 @@ export default {
   turnOffThink: '已關閉深度思考功能',
   clickTurnOnContext: '點擊開啟包含上下文',
   clickTurnOffContext: '點擊停止包含上下文',
+  contextHelp: '上下文窗口決定了在生成過程中,模型可以參考的當前對話下聊天記錄的量\n合理的上下文窗口大小有助於模型生成連貫且相關的文字\n避免因為參考過多的上下文而導致混亂或不相關的輸出\n關閉上下文窗口會導致對話失去記憶,每次提問之間將完全獨立',
   clickTurnOnSearch: '點擊開啟網路搜尋功能',
   clickTurnOffSearch: '點擊關閉網路搜尋功能',
+  searchHelp: '模型本身的知識是基於之前的訓練資料,學習到的知識是有截止日期的(例如DeepSeek-R1訓練資料截止時間為2023年十月份)\n對於突發事件、行業新聞等時效性很強的問題,回答很多時候是「過時」的\n開啟聯網搜尋後,將在生成結果前呼叫搜尋引擎介面獲取即時資訊給到模型參考,但此操作也會增加響應延遲\n另外對於非時效性問題,開啟聯網搜尋反而可能由於結合了最新資料導致回答品質降低',
   clickTurnOnThink: '點擊開啟深度思考功能',
   clickTurnOffThink: '點擊關閉深度思考功能',
+  thinkHelp: '開啟深度思考後,模型會用更多的計算資源以及消耗更長時間,模擬更複雜的思維鏈路進行邏輯推理\n適合複雜任務或高要求場景,比如數學題推導、專案規劃\n日常簡單查詢無需開啟',
   showOnContext: '包含上下文',
   showOffContext: '不含上下文',
   searchEnabled: '搜尋已開啟',

@@ -199,7 +202,7 @@ export default {
   info2FAStep3Tip1: '注意:如何关闭两步验证?',
   info2FAStep3Tip2: '1. 登录后,在 两步验证 页面使用两步验证码关闭。',
   info2FAStep3Tip3: '2. 联系管理员来关闭两步验证。',
-  maxContextCount: '最大上下文數量',
+  maxContextCount: '對話中包含的上下文訊息數量',
   fastDelMsg: '快速刪除訊息',
 },
 store: {
